Re: Question about QueryCache

2021-02-27 Thread Haoyu Zhai
Thanks Mike and Adrien for confirming the behavior!
I checked again and debugged the unit test, and found that
IndexSearcher.createWeight is called recursively when BooleanQuery is
creating its weight (
https://github.com/apache/lucene-solr/blob/e88b3e9c204f907fdb41d6d0f40d685574acde97/lucene/core/src/java/org/apache/lucene/search/BooleanWeight.java#L59);
I missed this part when I previously checked the logic.

Best
Patrick

On Fri, Feb 26, 2021 at 1:02 PM, Adrien Grand  wrote:

> It does recurse indeed! To reuse Mike's example, in that case the cache
> would consider caching:
>  - A,
>  - B,
>  - C,
>  - D,
>  - (C D),
>  - +A +B +(C D)
>
> One weakness of this cache is that it doesn't consider caching subsets of
> boolean queries (except single clauses). E.g. in the above example, it
> would never consider caching +A +B even if the conjunction of these two
> clauses occurs in many queries.
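>
> To make the example concrete, here is that query tree built against
> Lucene's BooleanQuery API; a minimal sketch, with made-up field and term
> names:
>
>     import org.apache.lucene.index.Term;
>     import org.apache.lucene.search.BooleanClause.Occur;
>     import org.apache.lucene.search.BooleanQuery;
>     import org.apache.lucene.search.Query;
>     import org.apache.lucene.search.TermQuery;
>
>     Query a = new TermQuery(new Term("body", "a"));
>     Query b = new TermQuery(new Term("body", "b"));
>     Query c = new TermQuery(new Term("body", "c"));
>     Query d = new TermQuery(new Term("body", "d"));
>
>     // (C D): a nested disjunction, which is itself a caching candidate
>     Query cd = new BooleanQuery.Builder()
>         .add(c, Occur.SHOULD)
>         .add(d, Occur.SHOULD)
>         .build();
>
>     // +A +B +(C D): the cache considers A, B, C, D, (C D) and the whole
>     // query as candidates, but never the +A +B subset on its own
>     Query query = new BooleanQuery.Builder()
>         .add(a, Occur.MUST)
>         .add(b, Occur.MUST)
>         .add(cd, Occur.MUST)
>         .build();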
>
> On Fri, Feb 26, 2021 at 8:03 PM, Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> Hi Haoyu,
>>
>> I'm pretty sure (but not certain!) that the query cache is smart enough to
>> recurse through the full query tree, and to consider caching any of the
>> sub-queries it finds during that recursion.
>>
>> So e.g. a query like +A +B +(C D) would consider caching A, B, C D, or
>> the whole original +A +B +(C D) query.
>>
>> But I'm not sure!  Hopefully someone who knows more about query cache
>> might chime in.
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>>
>> On Mon, Feb 22, 2021 at 8:55 PM Haoyu Zhai  wrote:
>>
>>> Hi folks,
>>> I'm trying to understand how QueryCache works, and one question that
>>> popped into my head was: does QueryCache cache
>>> 1. the whole query being submitted to IndexSearcher, or
>>> 2. does it recurse into the query and selectively cache some of the
>>> clauses (especially for BooleanQuery)?
>>>
>>> From my observation it is the former case but I just want to double
>>> check in case I missed anything.
>>>
>>> Thanks
>>> Patrick
>>>
>>


Re: jcc - Output unbuilt package

2021-02-27 Thread Andi Vajda



 Hi Phil,

On Sun, 28 Feb 2021, Phil wrote:


I currently use jcc to wrap a Java library for use in Python - it works
great.

The project I'm working on is moving its package management from
traditional pip installs to Guix:
https://guix.gnu.org/

Guix handles python packages pretty well, and I have jcc running nicely on
there.

The problem I have is that Guix expects a Python source repo as input,
but jcc outputs a binary wheel.


I'm not sure what you mean by "binary wheel"; I'm not familiar with that
format. Yes, JCC's __main__.py documents that

 --bdist generate a binary distutils-based distribution
 or a setuptools-based .egg
 --wheel generate wheel using setuptools (requires wheel
 package: pip install wheel)
 --build generate the wrapper and compile it
 --compile recompile the (previously generated) module

I did not write the bdist or the wheel support; they were contributed, and I
don't know that --wheel makes a binary wheel, specifically.
Note that you have binaries in whatever you distribute if you consider the
JAR files or the .class files binaries. The .class files are required for
JCC to operate, as it uses reflection to do its job.
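
To illustrate the reflection point, this is conceptually what happens for
every class being wrapped; a simplified Java sketch of the idea, not JCC's
actual implementation:

    import java.lang.reflect.Method;

    public class InspectClass {
        public static void main(String[] args) throws Exception {
            // Load a class from the classpath (the JARs handed to JCC) and
            // enumerate its public methods - the kind of information a
            // wrapper generator needs to emit a binding for each method.
            Class<?> cls = Class.forName(args[0]);
            for (Method m : cls.getMethods()) {
                System.out.println(m.getReturnType().getSimpleName()
                        + " " + m.getName());
            }
        }
    }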


I see that --build or --compile causes JCC to ask python to compile the egg 
it produces. Without these flags, I think it'll just produce the .cpp files, 
and with --python, the python wrappers (also C++ code).


If you don't invoke JCC with --wheel, --bdist, --build or --compile, you get 
just source files (not counting .jar files).


I just tried that on PyLucene (the project I originally wrote JCC for) and 
no compilation happens without (some of) these flags being set.

  On my Mac, with python3, the command line looks like:
$ python3 -m jcc --shared --arch x86_64
   --jar 
   --package   etc...
   --module  --mapping ... --sequence ...
   --exclude ...
   --resources ...
   --python lucene
   --version 8.6.1
   --files 10   (10 or 11 .cpp files are generated)
--> no binaries made


What I'd like is for jcc to put together a source package, complete with
jars, C/C++, the python wrapper, and a setup.py that is called to
generate the wheel, but to stop short of generating the wheel for me.
I've had a look at the generated build directory - I could see the C/C++
source and jars, but there didn't seem to be a setup.py to trigger an install?


The installation of the python extension built by JCC is triggered by 
passing --install to JCC's invocation.


Maybe what you actually want is to implement 'sdist' support for JCC?
(Again, I'm not familiar with wheels, so I may not be making sense here.)

Such a directory could then be fed into Guix, which would happily build
the package and install it using the standard setup.py provided.


As long as Guix knows how to drive a C++ compiler and linker, build python
extensions (and knows how to build the libjcc shared library), you should 
be fine.


As a crude workaround I can try to unzip the contents of the produced
wheel and stick them into a repo, with a new setup.py and a
MANIFEST.in file that would simply copy across the jars and the previously
built C/C++ library.  However, having the setup.py build the library and
install it is a more elegant solution.


Any ideas?


Not much beyond what I just wrote; I don't understand enough about the
problem you're trying to solve, nor much about current Python extension
deployment practices - I'm stuck in the days of plain setuptools.
I'm happy to integrate a patch/contribution from you if it makes sense to 
me.


Andi..


jcc - Output unbuilt package

2021-02-27 Thread Phil
Hi,

I currently use jcc to wrap a Java library for use in Python - it works
great.

The project I'm working on is moving its package management from
traditional pip installs to Guix:
https://guix.gnu.org/

Guix handles python packages pretty well, and I have jcc running nicely on
there.

The problem I have is that Guix expects a Python source repo as input,
but jcc outputs a binary wheel.

What I'd like is for jcc to put together a source package, complete with
jars, C/C++, the python wrapper, and a setup.py that is called to
generate the wheel, but to stop short of generating the wheel for me.
I've had a look at the generated build directory - I could see the C/C++
source and jars, but there didn't seem to be a setup.py to trigger an install?

Such a directory could then be fed into Guix, which would happily build the
package and install it using the standard setup.py provided.

As a crude workaround I can try to unzip the contents of the produced wheel
and stick them into a repo, with a new setup.py and a MANIFEST.in file that
would simply copy across the jars and the previously built C/C++ library.
However, having the setup.py build the library and install it is a more
elegant solution.

Any ideas?

Thanks,
Phil.


Some small questions on streaming expressions

2021-02-27 Thread ufuk yılmaz
Hello all,

I’m trying to reindex from a collection to a new collection with a different 
schema, using streaming expressions. I can’t use REINDEXCOLLECTION directly, 
because I need to process documents a bit.

I couldn’t figure out 3 simple, related things for hours so forgive me if I 
just ask.

1) Is there a way to duplicate the value of a field of an incoming tuple into 
two fields?
I tried the select expression:
select(
    echo("Hello"),
    echo as echy, echo as echee
)

But when I use the same field twice, only the last “as” takes effect; it
doesn’t copy the value into two fields:
{
  "result-set": {
    "docs": [
      {
        "echee": "Hello"
      },
      {
        "EOF": true,
        "RESPONSE_TIME": 0
      }
    ]
  }
}

I accomplished this by using leftOuterJoin, with the same exact stream on the
left and the right, joining the stream on itself with different field names.
But this has the penalty of executing the same stream twice. It’s no problem
for small streams, but in my case there will be a couple hundred million
tuples coming from the stream.


2) Is there a way to “feed” one stream’s output to two different streams, like
feeding the output of a stream source to two different stream decorators
without executing the same stream twice?
3) Does the “let” stream hold its entire content in memory when a stream is
assigned to a variable, or does it stream continuously too? If it holds the
content in memory, I imagine it could be used for my question 2.
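
For reference, expressions like the ones above can be POSTed directly to a
collection's /stream handler; here is a minimal Java sketch (the host, port
and collection name are made up):

    import java.net.URI;
    import java.net.URLEncoder;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.nio.charset.StandardCharsets;

    public class RunExpression {
        public static void main(String[] args) throws Exception {
            // A streaming expression is sent as the "expr" parameter of the
            // /stream request handler; tuples come back as a JSON result-set.
            String expr = "select(echo(\"Hello\"), echo as echy)";
            String body = "expr=" + URLEncoder.encode(expr, StandardCharsets.UTF_8);
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create("http://localhost:8983/solr/myCollection/stream"))
                    .header("Content-Type", "application/x-www-form-urlencoded")
                    .POST(HttpRequest.BodyPublishers.ofString(body))
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }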


I’m glad that Solr has streaming expressions.

--ufuk yilmaz

Sent from Mail for Windows 10



Configurable Postings Block Size?

2021-02-27 Thread Greg Miller
Hi folks!

I've been a bit curious to test out different block size configurations in
the Lucene postings list format, but thought I'd reach out to the community
here first to see what work may have gone into this previously. I'm
essentially interested in benchmarking different block size configurations
on the real-world application of Lucene I'm working on.

If my understanding of the code is correct, I know we're currently encoding
compressed runs of 128 docs per block, relying on ForUtil for
encoding/decoding purposes. It looks like we define this in
ForUtil#BLOCK_SIZE (and reference it in a few external classes), but I also
know that it's not as simple as just changing that one definition. It
appears much of the logic in ForUtil relies on the assumption of 128
docs-per-block.
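
To make that assumption concrete, here is roughly what fixed-size
frame-of-reference (FOR) encoding of one block looks like. This is a
simplified scalar Java sketch of the general technique, not Lucene's actual
ForUtil code:

    class ForBlockSketch {
        // ForUtil hard-wires this to 128.
        static final int BLOCK_SIZE = 128;

        static long[] encodeBlock(int[] docIds) {
            // Delta-encode the sorted doc ids; gaps between consecutive ids
            // are small and compress much better than the absolute ids.
            int[] deltas = new int[BLOCK_SIZE];
            int prev = 0;
            for (int i = 0; i < BLOCK_SIZE; i++) {
                deltas[i] = docIds[i] - prev;
                prev = docIds[i];
            }
            // One bit width for the whole block, set by the largest delta.
            int max = 1;
            for (int d : deltas) max = Math.max(max, d);
            int bitsPerValue = 32 - Integer.numberOfLeadingZeros(max);
            // Bit-pack every delta at that width. The fixed block size and
            // uniform width keep the loop free of data-dependent branches,
            // which is what makes this style of code amenable to SIMD.
            long[] packed = new long[(BLOCK_SIZE * bitsPerValue + 63) / 64];
            for (int i = 0; i < BLOCK_SIZE; i++) {
                int bitPos = i * bitsPerValue;
                int slot = bitPos >>> 6;
                int shift = bitPos & 63;
                packed[slot] |= ((long) deltas[i]) << shift;
                if (shift + bitsPerValue > 64) {
                    packed[slot + 1] |= ((long) deltas[i]) >>> (64 - shift);
                }
            }
            return packed;
        }
    }

Decoding reverses the process, and a different block size would ripple into
every routine that is unrolled around the 128-value assumption.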

I'm toying with the idea of making ForUtil a bit more flexible to allow for
different block sizes to be tested in order to run the benchmarking I'd
like to run, but the class looks heavily optimized to generate SIMD
instructions (I think?), so that might be folly. Before I start hacking on
a local branch to see what I can learn, is there any prior work that might
be useful to be aware of? Anyone gone down this path and have some
learnings to share? Any thoughts would be much appreciated!

Cheers,
-Greg


Re: Select streaming expression, add a field to every tuple, replace or raw not working

2021-02-27 Thread Joel Bernstein
Yeah, this is an error in the docs which needs to be corrected as this is a
common use case. The val function is the one to use. I will make the change
in the docs.



Joel Bernstein
http://joelsolr.blogspot.com/


On Fri, Feb 26, 2021 at 12:28 PM ufuk yılmaz 
wrote:

> I tried to debug this to the best of my ability, and it seems the correct
> name for the “raw” evaluator is “val”.
>
> Copied from StreamContext: val=class
> org.apache.solr.client.solrj.io.eval.RawValueEvaluator
>
> I think there’s a small error in the stream evaluator documentation of 8.4:
>
> https://lucene.apache.org/solr/guide/8_4/stream-evaluator-reference.html
>
> When I used “val” instead of “raw”, I got the expected response:
>
> select(
>     search(
>         myCollection,
>         q="*:*",
>         qt="/export",
>         sort="id_str asc",
>         fl="id_str"
>     ),
>     id_str,
>     val(abc) as text
> )
>
> {
>   "result-set": {
>     "docs": [
>       {
>         "id_str": "deneme123",
>         "text": "abc"
>       },
>       {
>         "EOF": true,
>         "RESPONSE_TIME": 70
>       }
>     ]
>   }
> }
>
> --ufuk yilmaz
>
> Sent from Mail for Windows 10
>
>
> *From: *ufuk yılmaz 
> *Sent: *26 February 2021 16:38
> *To: *solr-u...@lucene.apache.org
> *Subject: *Select streaming expression, add a field to every tuple,
> replace or raw not working
>
> Hello all,
>
> Solr version 8.4
>
> I have a very simple select expression here. What I’m trying to do is to
> add a constant value to incoming tuples.
>
> My collection has only 1 document. Id_str is of type String. Other fields
> are Solr generated.
>
> {
>     "_version_":1692761378187640832,
>     "id_str":"experiment123",
>     "id":"18d658b13b6b072f"}]
>   }
>
> My streaming expression:
>
> select(
>     search(
>         myCollection,
>         q="*:*",
>         qt="/export",
>         sort="id_str asc",
>         fl="id_str"
>     ),
>     id_str,
>     raw(ttt) as text // Docs state that select works with any
>                      // evaluator. “raw” here is a stream evaluator.
> )
>
> I also tried:
>
> select(
>     search(
>         myCollection,
>         q="*:*",
>         qt="/export",
>         sort="id_str asc",
>         fl="id_str"
>     ),
>     id_str,
>     replace(text, null, withValue=raw(ttt)) as text // replace is
>         // described in the select expression documentation. I also
>         // tried withValue=ttt directly
> )
>
> No matter what I do, the response only includes the id_str field, without
> any error:
>
> {
>   "result-set":{
>     "docs":[{
>       "id_str":" experiment123"}
>     ,{
>       "EOF":true,
>       "RESPONSE_TIME":45}]}}
>
> I also tried wrapping the text value in quotes; that didn’t work either.
>
> What am I doing wrong?
>
> --ufuk yilmaz
>
> Sent from Mail for Windows 10
>


Guys, I think SolrCloud might be sic.

2021-02-27 Thread Mark Miller
So it’s been a while since I’ve brought up SolrCloud sickness. Plenty to
navigate and figure out in the meantime. Given the constraints of life,
there was a point I wanted to give and share some insight into what I could
see. But it quickly became clear that was not a great plan - just what I
was left with because I could not dedicate or plan beyond a small or very
long time range. It turns out that telling someone, take a peak through
these key holes, can you see what I see? 10-100x better!? The skys the
limit? Don’t you see it?! What? The bathrooms? Yeah yeah, no the bathrooms
are being demolished, look here through the key hole. More units tests and
modules you say? Exaggeration confusion? Yeah yeah, never mind.

Nonetheless, you do not tear down what you don’t have a plan to address.
And you cannot start anew until you have acknowledged and accepted the
past. Well, who knows, I have my own codes. What’s a man/woman without a
code.

And so, while maybe I’m always wrong or unintelligent, I’ve only ever
expressed a truth I feel I could and can defend, if the chips started
falling.

And at this point, I’d like to say that SolrCloud might be sic.

There is some ongoing work and plenty that will be happening for some time.
But I have not tried to create a collection or many collections other than
mostly around the small numbers that tests do (with the exception of 100
collections, created over time, sometimes with Ishan’s stress system). A
couple to a few collections. A handful of shards. A handful of replicas.

The other day I figured I’d modify a test just to see what I’d be dealing
with when I get some fun times soon.

I used a single 512MB test JVM, nightly settings, and fired up a 12x12
collection on 4 jetty Solr runners. 144 SolrCores. It essentially just
started up and returned. Shit, is that broken? The test is green? No way.
Ok.

I would have bet 0 dollars on that first run.

Yesterday, okay, I’m feeling good, let’s roll the dice. I fire up 24 shards
and 24 replicas for each. I’m not even optimistic, I mean this is cold and
blind at these numbers. And after what looks like a nice start, things
quickly deteriorate. Ok, ok, looks like maybe a limit adjust and maybe I’m
asking a little much of 512MB of RAM. So I just bump it to 2 gig for some
play headroom.

Bam. 6 seconds. Fully green and active cluster.  Sic.

That’s two runs, so yeah, I’ve got a future play date scheduled for more.
But, one, that’s indicative of a ridiculous amount of Solr code and
behavior. It’s a load bearing beam of action. But also, I’ve got similar
sicness buried all over the place.

So yeah, the system can rock. Yonik’s design sense was not the monster
after all, surprise ending. And interested parties could and will help take
it forward so very much further. Don’t mortgage the house yet, an Apache
release of my play time is not happening next week. But with the same shock
and horror and surety that I said SolrCloud looks sick, I revise to
SolrCloud looks sic. And it wants a second shot, and by god it will have it.



-- 
- Mark

http://about.me/markrmiller


Re: [ANNOUNCE] Apache Solr 8.8.1 released

2021-02-27 Thread Timothy Potter
Awesome! Thank you David and Tobias ;-)

On Sat, Feb 27, 2021 at 12:21 PM David Smiley  wrote:
>
> The corresponding docker image has been released as well:
> https://hub.docker.com/_/solr
> (credit to Tobias Kässmann for helping)
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Tue, Feb 23, 2021 at 10:39 AM Timothy Potter 
> wrote:
>
> > The Lucene PMC is pleased to announce the release of Apache Solr 8.8.1.
> >
> >
> > Solr is the popular, blazing fast, open source NoSQL search platform from
> > the Apache Lucene project. Its major features include powerful full-text
> > search, hit highlighting, faceted search, dynamic clustering, database
> > integration, rich document handling, and geospatial search. Solr is highly
> > scalable, providing fault tolerant distributed search and indexing, and
> > powers the search and navigation features of many of the world's largest
> > internet sites.
> >
> >
> > Solr 8.8.1 is available for immediate download at:
> >
> >
> >   
> >
> >
> > ### Solr 8.8.1 Release Highlights:
> >
> >
> > Fix for a SolrJ backwards compatibility issue when upgrading the server to
> > 8.8.0 without upgrading SolrJ to 8.8.0.
> >
> >
> > Please refer to the Upgrade Notes in the Solr Ref Guide for information on
> > upgrading from previous Solr versions:
> >
> >
> >   
> >
> >
> > Please read CHANGES.txt for a full list of bugfixes:
> >
> >
> >   
> >
> >
> > Solr 8.8.1 also includes bugfixes in the corresponding Apache Lucene
> > release:
> >
> >
> >   
> >
> >
> >
> > Note: The Apache Software Foundation uses an extensive mirroring network
> > for distributing releases. It is possible that the mirror you are using
> > may not have replicated the release yet. If that is the case, please try
> > another mirror.
> >
> > This also applies to Maven access.
> >
> > 
> >

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [ANNOUNCE] Apache Solr 8.8.1 released

2021-02-27 Thread David Smiley
The corresponding docker image has been released as well:
https://hub.docker.com/_/solr
(credit to Tobias Kässmann for helping)

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


On Tue, Feb 23, 2021 at 10:39 AM Timothy Potter 
wrote:

> The Lucene PMC is pleased to announce the release of Apache Solr 8.8.1.
>
>
> Solr is the popular, blazing fast, open source NoSQL search platform from
> the Apache Lucene project. Its major features include powerful full-text
> search, hit highlighting, faceted search, dynamic clustering, database
> integration, rich document handling, and geospatial search. Solr is highly
> scalable, providing fault tolerant distributed search and indexing, and
> powers the search and navigation features of many of the world's largest
> internet sites.
>
>
> Solr 8.8.1 is available for immediate download at:
>
>
>   
>
>
> ### Solr 8.8.1 Release Highlights:
>
>
> Fix for a SolrJ backwards compatibility issue when upgrading the server to
> 8.8.0 without upgrading SolrJ to 8.8.0.
>
>
> Please refer to the Upgrade Notes in the Solr Ref Guide for information on
> upgrading from previous Solr versions:
>
>
>   
>
>
> Please read CHANGES.txt for a full list of bugfixes:
>
>
>   
>
>
> Solr 8.8.1 also includes bugfixes in the corresponding Apache Lucene
> release:
>
>
>   
>
>
>
> Note: The Apache Software Foundation uses an extensive mirroring network
> for distributing releases. It is possible that the mirror you are using
> may not have replicated the release yet. If that is the case, please try
> another mirror.
>
> This also applies to Maven access.
>
> 
>


Re: Solr Docker Dependencies Question

2021-02-27 Thread Martijn Koster


> On 25 Feb 2021, at 16:25, Mike Drob  wrote:
> 
> acl package provides setfacl, and

Used in the container for example here: 
https://github.com/docker-solr/docker-solr/commit/6f7a6e812247d00f9f3c293993e26d6d041c119e#diff-ea928530c80bf9f58a3fbf840228fde8ee6bc208cae85dd7c2abda473eea8d20R37

> gosu provides gosu, which looks to be only used in tests? How much would we 
> miss them if they were gone?

previous discussion: 
https://github.com/docker-solr/docker-solr/issues/270#issuecomment-569420443

> Where do we use netcat?

https://github.com/docker-solr/docker-solr/blob/master/scripts/wait-for-zookeeper.sh#L49

> wget is used at a minimum to download jattach

https://github.com/docker-solr/docker-solr/blob/master/scripts/wait-for-solr.sh#L82

> procps is probably used in the startup scripts somewhere too for top or ps or 
> something similar to get a pid.

I think that was for these:

https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L685
https://github.com/apache/lucene-solr/blob/master/solr/bin/solr#L759

-- Martijn
-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Proposal for the Lucene Dependency after git repo split

2021-02-27 Thread Jan Høydahl
Please, no submodules! It is honestly a mess. And besides, for 95% of Solr 
development work there are no Lucene changes, so having to compile Lucene every 
time is not logical.
I suppose you would be able to do some surgery on your local setup though, 
removing lucene dep from gradle and instead adding your Lucene checkout as a 
module dependency in your IDE?

Jan

> On 27 Feb 2021, at 04:28, Ishan Chattopadhyaya  wrote:
> 
> I would prefer Lucene as a git submodule inside Solr. That way, I can work 
> with Solr and Lucene together easily. The commit to which Lucene is pegged
> could be from a release branch or a tag.
> 
> On Sat, 27 Feb 2021 at 3:03 am, Adrien Grand  wrote:
> FYI Elasticsearch has been regularly depending on builds of specific commits 
> of Lucene for this case of features that need changes both in Lucene and 
> Elasticsearch.
> 
> The workflow usually looks like this:
>  - Do work in Lucene.
>  - When it becomes clear that the next release of Lucene should happen before 
> the next feature freeze of Elasticsearch, we do a new build of Lucene and 
> upgrade Elasticsearch to it.
>  - Do work in Elasticsearch.
>  - When a new Lucene release is out, upgrade Elasticsearch to this Lucene 
> release.
> 
> We have done this dozens of times and it has worked well for us. We do the 
> same when a vote for a new Lucene release is about to start in order to check 
> whether it breaks anything in Elasticsearch.
> 
> The rest of the time (which is most of the time) Elasticsearch depends on an 
> actual Lucene release.
> 
> On Fri, Feb 26, 2021 at 7:29 PM, Eric Pugh  wrote:
> Plus, isn’t this a reason for folks on the Solr side to continue to be
> involved in the Lucene project? It’s the inverse of the days when folks
> wanted to cut releases of Lucene and were waiting for Solr to be ready!
> 
> 
>> On Feb 26, 2021, at 1:26 PM, Houston Putman  wrote:
>> 
>> I don't think Jan's workflow blocks Solr releases on Lucene releases. It 
>> just blocks this one feature branch merge on a Lucene release. Multiple Solr 
>> releases can happen between step 6 and step 7.
>> 
>> I completely agree with that being the workflow going forward with separate 
>> repos, Jan. It will unfortunately be a pain to integrate changes that affect 
>> both Lucene and Solr, but I think that's just a consequence of splitting the 
>> projects.
>> 
>> Neither option gives us everything we want, so here are the pros and cons in 
>> my opinion.
>> 
>> Using a snapshot lucene version
>> - Easier to make changes to lucene and solr concurrently
>> - Solr releases are blocked until the snapshot version being depended on is 
>> released.
>> - Builds may break at any time, and possibly for different sets of users 
>> depending on dependency caches.
>> 
>> Using a released lucene version
>> - Harder to update lucene and solr concurrently
>> - Solr can make releases independent of Lucene's release schedule
>> - Builds are stable and consistent.
>> 
>> Personally I think stability and the ability to own our own release schedule 
>> outweigh the benefits of being able to iterate on both projects 
>> concurrently. But it's definitely something that we should decide on as a 
>> community.
>> 
>> On Fri, Feb 26, 2021 at 12:43 PM Mike Drob  wrote:
>> The part of this process that I really don't like is that Solr then still 
>> becomes beholden to Lucene's release schedule. We don't know how long step 7 
>> takes, and will be effectively blocked from making our own releases until 
>> that happens.
>> 
>> On Fri, Feb 26, 2021 at 8:51 AM Jan Høydahl  wrote:
>> The developer workflow for adding something to both Lucene and Solr would be 
>> as any other dependency, right?
>> So say we are on Lucene 9.0. This is the process in my head:
>> 1. Adapt Lucene as needed, and "mvn install" lucene-9.1-SNAPSHOT to your
>> local laptop (whatever command that is in gradle)
>> 2. Make your Solr feature branch depend on lucene-9.1-SNAPSHOT instead of
>> lucene-9.0.0 - hopefully Gradle will pick the local version over the
>> Apache Nexus version
>> 3. Iterate steps 1-2 until happy
>> 4. Make a Lucene PR and eventually commit the Lucene change
>> 5. After the next Jenkins build, the feature is in the Apache Nexus
>> snapshot repo as lucene-9.1-SNAPSHOT
>> 6. Now the Solr pull request will compile and can be tested by others
>> 7. Wait until the Lucene 9.1 release
>> 8. Upgrade Solr's lucene dependency on 'main'
>> 9. Merge the Solr PR
>> Backporting will be a similar process, i.e. first backport and release in
>> Lucene, then backport in Solr.
>> Hmm, as I wrote this list I can understand why so many features were added 
>> only to Solr and not to Lucene in the early days :)
>> 
>> Jan
>> 
>>> On 26 Feb 2021, at 14:22, Gus Heck  wrote:
>>> 
>>> Except I just finished helping a contributor with a feature that touches 
>>>