Re: [Vote] Bump the Lucene main branch to Java 21

2024-02-23 Thread Tomás Fernández Löbbe
SGTM!

+1

On Fri, Feb 23, 2024 at 11:04 AM Patrick Zhai  wrote:

> +1
>
> On Fri, Feb 23, 2024 at 9:34 AM Dawid Weiss  wrote:
>
>>
>> I'm fine with this requirement.
>>
>> +1.
>>
>> On Fri, Feb 23, 2024 at 12:24 PM Chris Hegarty
>>  wrote:
>>
>>> Hi,
>>>
>>> Since the discussion on bumping the Lucene main branch to Java 21 is
>>> winding down, let's hold a vote on this important change.
>>>
>>> Once bumped, the next major release of Lucene (whenever that will be)
>>> will require a version of Java greater than or equal to Java 21.
>>>
>>> The vote will be open for at least 72 hours (and allow some additional
>>> time for the weekend) i.e. until 2024-02-28 12:00 UTC.
>>>
>>> [ ] +1  approve
>>> [ ] +0  no opinion
>>> [ ] -1  disapprove (and reason why)
>>>
>>> Here is my +1
>>>
>>> -Chris.
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>


Re: [VOTE] Release Lucene 9.10.0 RC1

2024-02-15 Thread Tomás Fernández Löbbe
+1

SUCCESS! [1:22:54.621515]

On Thu, Feb 15, 2024 at 7:13 AM Robert Muir  wrote:

> On Thu, Feb 15, 2024 at 9:54 AM Uwe Schindler  wrote:
> >
> > Hi,
> >
> > My Python knowledge is too limited to fix the build script to allow
> testing the smoker with arbitrary JAVA_HOME directories next to the baseline
> (Java 11). With lots of copy-paste I can make it run on Java 21 in addition
> to 17, but that looks too inflexible.
> >
> > Mike McCandless: If you could help me to make it more flexible, it would
> be good. I can open an issue, but if you have an easy solution. I think of
> the following:
> >
> > JAVA_HOME must be Java 11 (in 9.x)
> > At the moment you can pass "--test-java17 ", but this one is also
> checked to be really Java 17 (by parsing strings from its version output),
> but I'd like to pass "--test-alternative-java " multiple times and it
> would just run all those as part of smoking; maybe the version number can
> be extracted to be printed out.
> >
> > To me this is a hopeless task with Python.
> >
> > Uwe
> >
> > On 15.02.2024 at 12:50, Uwe Schindler wrote:
> >
>
> I opened https://github.com/apache/lucene/issues/13107 as I have
> struggled with the smoke tester's Java 21 support too. Java is moving
> faster these days; we should make it easier to maintain the script.
>
>
>
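Uwe's wish list above maps cleanly onto argparse: a repeatable `--test-alternative-java` flag collected with `action="append"`, plus a small helper that pulls the major version out of `java -version` output purely for printing. A rough Python sketch under those assumptions (function names are illustrative, not the actual smokeTestRelease.py API):

```python
import argparse
import re

def parse_args(argv):
    # Sketch of the repeatable flag Uwe proposes: each occurrence adds
    # one JAVA_HOME to smoke-test in addition to the baseline JDK.
    p = argparse.ArgumentParser()
    p.add_argument("--test-alternative-java", action="append",
                   default=[], metavar="JAVA_HOME",
                   help="extra JDK home to run the smoke tests with (repeatable)")
    return p.parse_args(argv)

def java_major_version(version_output: str) -> int:
    # Pull the major version out of `java -version` output, e.g.
    # 'openjdk version "21.0.2" ...' -> 21, '... "11.0.22" ...' -> 11,
    # so it can be printed without hard-coding a version check.
    m = re.search(r'version "(\d+)', version_output)
    if m is None:
        raise ValueError("could not parse java version from: " + version_output)
    return int(m.group(1))
```

Because nothing here asserts a specific version, supporting a future JDK would mean passing another `--test-alternative-java` rather than editing the script.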


Re: Welcome Stefan Vodita as Lucene committer

2024-01-18 Thread Tomás Fernández Löbbe
Congratulations Stefan!

On Thu, Jan 18, 2024 at 10:45 AM Anshum Gupta 
wrote:

> Congratulations and welcome, Stefan!
>
> On Thu, Jan 18, 2024 at 7:55 AM Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> Hi Team,
>>
>> I'm pleased to announce that Stefan Vodita has accepted the Lucene PMC's
>> invitation to become a committer!
>>
>> Stefan, the tradition is that new committers introduce themselves with a
>> brief bio.
>>
>> Congratulations, welcome, and thank you for all your improvements to
>> Lucene and our community,
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>
>
> --
> Anshum Gupta
>


Re: [VOTE] Release Lucene 9.9.0 RC2

2023-11-30 Thread Tomás Fernández Löbbe
SUCCESS! [0:52:49.337126]

+1

On Thu, Nov 30, 2023 at 12:05 PM Benjamin Trent 
wrote:

> SUCCESS! [0:44:05.132154]
>
> +1
>
> On Thu, Nov 30, 2023 at 1:09 PM Chris Hegarty
>  wrote:
>
>> Please vote for release candidate 2 for Lucene 9.9.0
>>
>>
>> The artifacts can be downloaded from:
>>
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-9.9.0-RC2-rev-06070c0dceba07f0d33104192d9ac98ca16fc500
>>
>>
>> You can run the smoke tester directly with this command:
>>
>>
>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-9.9.0-RC2-rev-06070c0dceba07f0d33104192d9ac98ca16fc500
>>
>>
>> The vote will be open for at least 72 hours, and given the weekend in
>> between, let’s keep it open until 2023-12-04 12:00 UTC.
>>
>> [ ] +1  approve
>>
>> [ ] +0  no opinion
>>
>> [ ] -1  disapprove (and reason why)
>>
>>
>> Here is my +1
>>
>>
>> -Chris.
>>
>>
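The arithmetic behind the close time above ("at least 72 hours, and given the weekend in between, let's keep it open until 2023-12-04 12:00 UTC") is simple: add 72 hours to the opening time and, if that lands on a weekend, extend to Monday noon UTC. A small Python sketch of that policy (the helper is illustrative, not project policy; release managers apply the weekend extension loosely, as the Java 21 vote earlier in this archive shows):

```python
from datetime import datetime, timedelta, timezone

def vote_close(opened: datetime, min_hours: int = 72) -> datetime:
    """Earliest close for a release vote: at least `min_hours` after
    opening; if that lands on a weekend, extend to Monday 12:00 UTC."""
    close = opened + timedelta(hours=min_hours)
    if close.weekday() >= 5:  # weekday(): 5 = Saturday, 6 = Sunday
        close += timedelta(days=7 - close.weekday())  # roll to Monday
        close = close.replace(hour=12, minute=0, second=0, microsecond=0)
    return close

# The 9.9.0 RC2 vote opened on Thursday 2023-11-30; 72 hours later is
# a Sunday, so the close extends to Monday 2023-12-04 12:00 UTC.
print(vote_close(datetime(2023, 11, 30, 13, 9, tzinfo=timezone.utc)))
```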


Re: Welcome Patrick Zhai to the Lucene PMC

2023-11-13 Thread Tomás Fernández Löbbe
Welcome, Patrick!

On Mon, Nov 13, 2023 at 3:56 PM Gus Heck  wrote:

> Welcome :)
>
> On Mon, Nov 13, 2023 at 1:15 PM Anshum Gupta 
> wrote:
>
>> Congratulations and welcome, Patrick!
>>
>> On Fri, Nov 10, 2023 at 12:05 PM Michael McCandless <
>> luc...@mikemccandless.com> wrote:
>>
>>> I'm happy to announce that Patrick Zhai has accepted an invitation to
>>> join the Lucene Project Management Committee (PMC)!
>>>
>>> Congratulations Patrick, thank you for all your hard work improving
>>> Lucene's community and source code, and welcome aboard!
>>>
>>> Mike McCandless
>>>
>>> http://blog.mikemccandless.com
>>>
>>
>>
>> --
>> Anshum Gupta
>>
>
>
> --
> http://www.needhamsoftware.com (work)
> https://a.co/d/b2sZLD9 (my fantasy fiction book)
>


Re: Welcome Guo Feng to the Lucene PMC

2023-10-27 Thread Tomás Fernández Löbbe
Congratulations Feng!

On Fri, Oct 27, 2023 at 4:02 AM Guo Feng  wrote:

> Thanks all. It is a great honor to join the PMC!
>
>
>


Re: Welcome Luca Cavanna to the Lucene PMC

2023-10-22 Thread Tomás Fernández Löbbe
Congratulations Luca!

On Sun, Oct 22, 2023 at 10:51 AM Michael Sokolov  wrote:

> Congratulations and welcome, Luca!
>
> On Sun, Oct 22, 2023 at 1:42 PM Julie Tibshirani 
> wrote:
> >
> > Congratulations Luca!!
> >
> > On Fri, Oct 20, 2023 at 1:45 AM Bruno Roustant 
> wrote:
> >>
> >> Welcome, congratulations!
> >>
> >> On Fri, Oct 20, 2023 at 10:02, Dawid Weiss  wrote:
> >>>
> >>>
> >>> Congratulations, Luca!
> >>>
> >>> On Fri, Oct 20, 2023 at 7:51 AM Adrien Grand 
> wrote:
> 
>  I'm pleased to announce that Luca Cavanna has accepted an invitation
> to join the Lucene PMC!
> 
>  Congratulations Luca, and welcome aboard!
> 
>  --
>  Adrien
>
>
>


Re: [VOTE] Release Lucene 9.8.0 RC1

2023-09-23 Thread Tomás Fernández Löbbe
+1

SUCCESS! [0:49:28.203159]

On Fri, Sep 22, 2023 at 7:44 AM Adrien Grand  wrote:

> +1 SUCCESS! [0:54:58.932481]
>
> On Fri, Sep 22, 2023 at 4:18 PM Uwe Schindler  wrote:
> >
> > Hi,
> >
> > I verified the release with the usual tools and my workflow:
> >
> > Policeman Jenkins ran smoketester for me with Java 11 and Java 17:
> > https://jenkins.thetaphi.de/job/Lucene-Release-Tester/28/console
> >
> > SUCCESS! [1:10:15.704228]
> >
> > In addition I checked the changes entries and ran Luke with Java 21 GA
> > (released two days ago). All fine!
> >
> > +1 to release!
> >
> > On 22.09.2023 at 07:48, Patrick Zhai wrote:
> > > Please vote for release candidate 1 for Lucene 9.8.0
> > >
> > > The artifacts can be downloaded from:
> > >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-9.8.0-RC1-rev-d914b3722bd5b8ef31ccf7e8ddc638a87fd648db
> > >
> > > You can run the smoke tester directly with this command:
> > >
> > > python3 -u dev-tools/scripts/smokeTestRelease.py \
> > >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-9.8.0-RC1-rev-d914b3722bd5b8ef31ccf7e8ddc638a87fd648db
> > >
> > > The vote will be open for at least 72 hours, as there's a weekend, the
> > > vote will last until 2023-09-27 06:00 UTC.
> > >
> > > [ ] +1  approve
> > > [ ] +0  no opinion
> > > [ ] -1  disapprove (and reason why)
> > >
> > > Here is my +1 (non-binding)
> >
> > --
> > Uwe Schindler
> > Achterdiek 19, D-28357 Bremen
> > https://www.thetaphi.de
> > eMail: u...@thetaphi.de
> >
> >
> >
>
>
> --
> Adrien
>
>
>


Re: [VOTE] Release Lucene 9.7.0 RC1

2023-06-22 Thread Tomás Fernández Löbbe
Thanks Adrien!

SUCCESS! [0:43:17.143555]
+1

On Wed, Jun 21, 2023 at 7:37 AM Adrien Grand  wrote:

> Please vote for release candidate 1 for Lucene 9.7.0
>
> The artifacts can be downloaded from:
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-9.7.0-RC1-rev-ccf4b198ec328095d45d2746189dc8ca633e8bcf
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-9.7.0-RC1-rev-ccf4b198ec328095d45d2746189dc8ca633e8bcf
>
> The vote will be open for at least 72 hours i.e. until 2023-06-24 15:00
> UTC.
>
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
>
> Here is my +1
>
> --
> Adrien
>


Re: Welcome Chris Hegarty to the Lucene PMC

2023-06-20 Thread Tomás Fernández Löbbe
Congratulations, Chris!

On Mon, Jun 19, 2023 at 11:43 AM Mikhail Khludnev  wrote:

> Welcome, Chris.
>
> On Mon, Jun 19, 2023 at 12:53 PM Adrien Grand  wrote:
>
>> I'm pleased to announce that Chris Hegarty has accepted an invitation to
>> join the Lucene PMC!
>>
>> Congratulations Chris, and welcome aboard!
>>
>> --
>> Adrien
>>
>
>
> --
> Sincerely yours
> Mikhail Khludnev
>


Re: Lucene 9.7 release

2023-06-09 Thread Tomás Fernández Löbbe
+1
Thanks Adrien

On Fri, Jun 9, 2023 at 9:19 AM Michael McCandless 
wrote:

> +1, thanks Adrien!
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Fri, Jun 9, 2023 at 12:11 PM Patrick Zhai  wrote:
>
>> +1, thank you Adrien!
>>
>> On Fri, Jun 9, 2023, 09:08 Adrien Grand  wrote:
>>
>>> Hello all,
>>>
>>> There is some good stuff that is scheduled for 9.7 already, I found the
>>> following changes in the changelog that look especially interesting:
>>>  - Concurrent query rewrites for vector queries.
>>>  - Speedups to vector indexing/search via integration of the Panama
>>> vector API.
>>>  - Reduced overhead of soft deletes.
>>>  - Support for update by query.
>>>
>>> I propose we start the process for a 9.7 release, and I volunteer to be
>>> the release manager. I suggest the following schedule:
>>>  - Feature freeze on June 16th, one week from now. This is when the 9.7
>>> branch will be cut.
>>>  - Open a vote on June 21st, which we'll possibly delay if blockers get
>>> identified.
>>>
>>> --
>>> Adrien
>>>
>>


Re: Lucene PMC Chair Greg Miller

2023-03-07 Thread Tomás Fernández Löbbe
Thanks Bruno! and Congratulations Greg!

On Tue, Mar 7, 2023 at 10:49 AM Patrick Zhai  wrote:

> Thank you Bruno and Greg!
>
> On Tue, Mar 7, 2023, 10:40 Mikhail Khludnev  wrote:
>
>> Thank you, Bruno. Congratulations, Greg.
>>
>> On Mon, Mar 6, 2023 at 8:16 PM Bruno Roustant 
>> wrote:
>>
>>> Hello Lucene developers,
>>>
>>> Lucene Program Management Committee has elected a new chair, Greg
>>> Miller, and the Board has approved.
>>>
>>> Greg, thank you for stepping up, and congratulations!
>>>
>>>
>>> - Bruno
>>>
>>
>>
>> --
>> Sincerely yours
>> Mikhail Khludnev
>> https://t.me/MUST_SEARCH
>> A caveat: Cyrillic!
>>
>


Re: Welcome Luca Cavanna as Lucene committer

2022-10-05 Thread Tomás Fernández Löbbe
Congratulations Luca!!

On Wed, Oct 5, 2022 at 2:19 PM Vigya Sharma  wrote:

> Congratulations Luca! And welcome...
>
> Vigya
>
> On Wed, Oct 5, 2022 at 3:36 PM Uwe Schindler  wrote:
>
>> Welcome Luca. This was long overdue. 
>>
>> On 5 October 2022 at 19:03:43 CEST, Adrien Grand wrote:
>>>
>>> I'm pleased to announce that Luca Cavanna has accepted the PMC's
>>> invitation to become a committer.
>>>
>>> Luca, the tradition is that new committers introduce themselves with a
>>> brief bio.
>>>
>>> Congratulations and welcome!
>>>
>>> --
>>> Adrien
>>>
>> --
>> Uwe Schindler
>> Achterdiek 19, 28357 Bremen
>> 
>> https://www.thetaphi.de
>>
>
>
> --
> - Vigya
>


Re: Welcome Vigya Sharma as Lucene committer

2022-08-02 Thread Tomás Fernández Löbbe
Congrats Vigya!!

On Tue, Aug 2, 2022 at 5:35 AM Michael Gibney 
wrote:

> Welcome, Vigya!
>
> On Mon, Aug 1, 2022 at 12:35 PM Houston Putman  wrote:
> >
> > Welcome Vigya!
> >
> > On Mon, Aug 1, 2022 at 11:38 AM Alan Woodward 
> wrote:
> >>
> >> Congratulations and welcome, Vigya!
> >>
> >> - Alan
> >>
> >> On 28 Jul 2022, at 20:44, Vigya Sharma  wrote:
> >>
> >> Thanks everyone for the warm welcome. It is an honor to be invited as a
> Lucene committer, and I look forward to contributing more to the community.
> >>
> >> A little bit about me - I currently work for the Product Search team at
> Amazon, and am based out of the San Francisco Bay Area in California, US.
> >> I am interested in a wide variety of computer science areas, and, in
> the last few years, have focused more on distributed systems, concurrency,
> system software and performance. Outside of tech, I like spending my time
> outdoors - running, skiing, and long road trips. I completed my first
> marathon (the SFMarathon) last week, and now, getting this invitation has
> made this month a highlight of the year.
> >>
> >> I had known that Lucene powers some of the most popular search and
> analytics use cases across the globe, but as I've gotten more involved, the
> depth and breadth of this software has blown my mind. I am deeply impressed
> by what this community has built, and how it continues to work together and
> grow. It is a great honor to be trusted with committer privileges, and I
> look forward to learning and contributing to multiple different parts of
> the library.
> >>
> >> Thank you,
> >> Vigya
> >>
> >>
> >> On Thu, Jul 28, 2022 at 12:20 PM Anshum Gupta 
> wrote:
> >>>
> >>> Congratulations and welcome, Vigya!
> >>>
> >>> On Thu, Jul 28, 2022 at 12:34 AM Adrien Grand 
> wrote:
> 
>  I'm pleased to announce that Vigya Sharma has accepted the PMC's
>  invitation to become a committer.
> 
>  Vigya, the tradition is that new committers introduce themselves with
> a
>  brief bio.
> 
>  Congratulations and welcome!
> 
>  --
>  Adrien
> >>>
> >>>
> >>>
> >>> --
> >>> Anshum Gupta
> >>
> >>
> >>
> >> --
> >> - Vigya
> >>
> >>
>
>
>


Re: [VOTE] Release Lucene/Solr 8.11.2 RC2

2022-06-13 Thread Tomás Fernández Löbbe
+1

SUCCESS! [1:02:16.559513]

On Mon, Jun 13, 2022 at 12:07 PM Mike Drob  wrote:

> Please vote for release candidate 2 for Lucene/Solr 8.11.2
>
> The artifacts can be downloaded from:
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.11.2-RC2-rev17dee71932c683e345508113523e764c3e4c80fa
>
> You can run the smoke tester directly with this command:
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.11.2-RC2-rev17dee71932c683e345508113523e764c3e4c80fa
>
> The vote will be open for at least 72 hours i.e. until 2022-06-16 20:00 UTC
>
> [ ] +1  approve
> [ ] +0  no opinion
> [ ] -1  disapprove (and reason why)
>
> Here is my +1
> 
>


Re: Welcome Guo Feng as Lucene committer

2022-01-28 Thread Tomás Fernández Löbbe
Welcome Feng!

On Fri, Jan 28, 2022 at 1:47 PM Houston Putman  wrote:

> Congrats Feng!
>
> On Fri, Jan 28, 2022 at 12:31 AM Michael Gibney 
> wrote:
>
>> Welcome, Feng!
>>
>> On Thu, Jan 27, 2022 at 2:23 PM Mayya Sharipova
>>  wrote:
>>
>>> Welcome and congratulations, Feng!
>>>
>>> On Wed, Jan 26, 2022 at 11:50 AM Namgyu Kim  wrote:
>>>
 Congratulations and welcome, Feng! :D

 On Tue, Jan 25, 2022 at 6:09 PM Adrien Grand  wrote:

> I'm pleased to announce that Guo Feng has accepted the PMC's
> invitation to become a committer.
>
> Feng, the tradition is that new committers introduce themselves with a
> brief bio.
>
> Congratulations and welcome!
>
> --
> Adrien
>
>
>


Re: [VOTE] Release Lucene/Solr 8.10.1 RC1

2021-10-15 Thread Tomás Fernández Löbbe
+1

SUCCESS! [1:11:35.593338]

On Fri, Oct 15, 2021 at 11:41 AM Anshum Gupta 
wrote:

> +1 (binding)
>
> Smoke tester passed w/ JDK 8.
>
> SUCCESS! [1:18:04.585401]
>
> Also tried a basic app w/ indexing and search and everything I tested
> worked as expected.
>
> Thanks for doing this, Mayya!
>
> On Tue, Oct 12, 2021 at 5:00 PM Mayya Sharipova  wrote:
>
>> Please vote for release candidate 1 for Lucene/Solr 8.10.1
>>
>> The artifacts can be downloaded from:
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.10.1-RC1-rev2f24e6a49d48a032df1f12e146612f59141727a9
>>
>> You can run the smoke tester directly with this command:
>>
>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.10.1-RC1-rev2f24e6a49d48a032df1f12e146612f59141727a9
>>
>> The vote will be open for at least 72 hours i.e. until 2021-10-15 23:00
>> UTC.
>>
>> [ ] +1  approve
>> [ ] +0  no opinion
>> [ ] -1  disapprove (and reason why)
>>
>> Here is my +1 : SUCCESS! [1:07:19.906103]
>> 
>>
>
>
> --
> Anshum Gupta
>


Re: Welcome Greg Miller as Lucene committer

2021-05-31 Thread Tomás Fernández Löbbe
Congrats Greg!!

On Mon, May 31, 2021 at 9:37 AM Gautam Worah  wrote:

> Congratulations Greg :)
>
> On Mon, May 31, 2021, 8:02 AM Ilan Ginzburg  wrote:
>
>> Congrats Greg!
>>
>> On Sun, May 30, 2021 at 4:35 PM Greg Miller  wrote:
>>
>>> Thanks everyone! I'm honored to have been nominated and look forward
>>> to continuing to work with all of you on Lucene! I'm incredibly
>>> grateful for everyone that has helped me so far. There's a lot to
>>> learn in Lucene and this community has been a fantastic help ramping
>>> up, providing thorough PR feedback/ideas/etc. and simply been a great
>>> group of people to collaborate with.
>>>
>>> As far as a brief bio goes, I live in the Seattle area and work for
>>> Amazon's "Product Search" team, which I joined in January of this
>>> year. I'm a naturally curious person and find myself fascinated by
>>> data structure / algorithm problems, so diving into Lucene has been
>>> really fun! I'm also an avid runner (mostly marathons but right now
>>> I'm training for my first one-mile race on a track), and love to
>>> travel with my wife and daughter (although that's been on "pause" for
>>> obvious reasons for the past year+). My biggest accomplishment of 2021
>>> so far has been teaching my daughter to ride a bike, but being
>>> nominated as a Lucene committer is a close second :)
>>>
>>> Thanks again everyone and looking forward to continuing to work with all
>>> of you!
>>>
>>> Cheers,
>>> -Greg
>>>
>>> On Sat, May 29, 2021 at 7:59 PM Michael McCandless
>>>  wrote:
>>> >
>>> > Welcome Greg!
>>> >
>>> > Mike
>>> >
>>> > On Sat, May 29, 2021 at 3:47 PM Adrien Grand 
>>> wrote:
>>> >>
>>> >> I'm pleased to announce that Greg Miller has accepted the PMC's
>>> invitation to become a committer.
>>> >>
>>> >> Greg, the tradition is that new committers introduce themselves with
>>> a brief bio.
>>> >>
>>> >> Congratulations and welcome!
>>> >>
>>> >>
>>> >> --
>>> >> Adrien
>>> >
>>> > --
>>> > Mike McCandless
>>> >
>>> > http://blog.mikemccandless.com
>>>
>>>
>>>


Re: Welcome Zach Chen as Lucene committer

2021-04-19 Thread Tomás Fernández Löbbe
Welcome Zach!

On Mon, Apr 19, 2021 at 3:38 PM Greg Miller  wrote:

> Congrats Zach!
>
> On Mon, Apr 19, 2021 at 3:09 PM Robert Muir  wrote:
> >
> > Congratulations!
> >
> >
> > On Mon, Apr 19, 2021 at 10:14 AM Adrien Grand  wrote:
> >>
> >> I'm pleased to announce that Zach Chen has accepted the PMC's
> invitation to become a committer.
> >>
> >> Zach, the tradition is that new committers introduce themselves with a
> brief bio.
> >>
> >> Congratulations and welcome!
> >>
> >> --
> >> Adrien
>
>
>


Re: Welcome Peter Gromov as Lucene committer

2021-04-07 Thread Tomás Fernández Löbbe
Welcome Peter!

On Wed, Apr 7, 2021 at 12:13 PM Houston Putman 
wrote:

> Congrats!
>
> On Wed, Apr 7, 2021 at 12:47 PM Anshum Gupta 
> wrote:
>
>> Congratulations and welcome, Peter!
>>
>> On Tue, Apr 6, 2021 at 10:48 AM Robert Muir  wrote:
>>
>>> I'm pleased to announce that Peter Gromov has accepted the PMC's
>>> invitation to become a committer.
>>>
>>> Peter, the tradition is that new committers introduce themselves with a
>>> brief bio.
>>>
>>> Congratulations and welcome!
>>>
>>>
>>
>> --
>> Anshum Gupta
>>
>


Re: Bugfix release Lucene/Solr 8.8.2

2021-03-29 Thread Tomás Fernández Löbbe
Mike,
I'd like to backport SOLR-15216 (UI Fix)

On Mon, Mar 29, 2021 at 10:55 AM Mike Drob  wrote:

> Ignacio, Alan - I have looked at the patches and these should be safe to
> backport and useful in a bugfix. Please go ahead and commit, and update
> CHANGES entry as well.
>
> Cassandra - yes, I plan to republish the ref guide, your expertise is
> absolutely appreciated. It looks like there will also be some gradle->ant
> translation. Was this mostly find-and-replace or was there more order to it?
>
> On Mon, Mar 29, 2021 at 12:32 PM Alan Woodward 
> wrote:
>
>> Hi Mike,
>>
>> I’d like to back port LUCENE-9744 (which fixes an NPE in intervals
>> queries) and LUCENE-9762 (which fixes a bug in QueryValuesSource).
>>
>> Thanks, Alan
>>
>> On 29 Mar 2021, at 09:47, Ignacio Vera  wrote:
>>
>> I'd like to backport Lucene-9870 which is a bug in distance queries that
>> causes no matches along indexed lines and polygon edges. This fix
>> only touches one class though, so very low risk.
>>
>> On Sat, Mar 27, 2021 at 3:24 PM Mike Drob  wrote:
>>
>>> Ishan,
>>>
>>> Thank you for bringing this up. I’m comfortable delaying an extra week
>>> to accommodate the multitude of holidays (Holi, Passover, others) coming up.
>>>
>>> I will adjust my schedule to start the vote Tuesday, Apr 6.
>>>
>>> Please make sure that all back ports are appropriately marked with
>>> fixVersion in Jira and have corresponding CHANGES entries.
>>>
>>> Mike
>>>
>>> On Fri, Mar 26, 2021 at 11:11 PM Ishan Chattopadhyaya <
>>> ichattopadhy...@gmail.com> wrote:
>>>
 Hi Mike,

 I wish to get https://issues.apache.org/jira/browse/SOLR-15288 in, but
 will likely be able to wrap up by 2 April or so (on vacation right now due
 to the festival of Holi)

 Regards,
 Ishan

 On Sat, 27 Mar, 2021, 7:41 am Mike Drob,  wrote:

> I am now preparing for a bugfix release from branch branch_8_8
>
> I plan to have the RC built and vote started on Tuesday, Mar 30. If
> you have small, low risk bug fixes to backport before then, please do so
> using your best judgement.
>
> Please observe the normal rules for committing to this branch:
>
> * Before committing to the branch, reply to this thread and argue
>   why the fix needs backporting and how long it will take.
> * All issues accepted for backporting should be marked with 8.8.2
>   in JIRA, and issues that should delay the release must be marked as
> Blocker
> * All patches that are intended for the branch should first be
> committed
>   to the unstable branch, merged into the stable branch, and then into
>   the current release branch.
> * Only Jira issues with Fix version 8.8.2 and priority "Blocker" will
> delay
>   a release candidate build.
>
> Thanks,
> Mike
>

>>


Re: Welcome Bruno to the Apache Lucene PMC

2021-03-11 Thread Tomás Fernández Löbbe
Welcome Bruno!

On Thu, Mar 11, 2021 at 7:59 AM Dawid Weiss  wrote:

> Welcome, Bruno!
>
> On Thu, Mar 11, 2021 at 4:29 PM Gus Heck  wrote:
> >
> > Welcome :)
> >
> > On Thu, Mar 11, 2021 at 9:58 AM Houston Putman 
> wrote:
> >>
> >> Congrats and welcome Bruno!
> >>
> >> On Thu, Mar 11, 2021 at 8:32 AM David Smiley 
> wrote:
> >>>
> >>> Welcome Bruno!
> >>>
> >>> ~ David Smiley
> >>> Apache Lucene/Solr Search Developer
> >>> http://www.linkedin.com/in/davidwsmiley
> >>>
> >>>
> >>> On Wed, Mar 10, 2021 at 7:56 PM Mike Drob  wrote:
> 
>  I am pleased to announce that Bruno has accepted an invitation to
> join the Lucene PMC!
> 
>  Congratulations, and welcome aboard!
> 
>  Mike
> >
> >
> >
> > --
> > http://www.needhamsoftware.com (work)
> > http://www.the111shift.com (play)
>
>
>


Re: [JENKINS] Lucene-Solr-master-Windows (64bit/jdk-13.0.2) - Build # 9738 - Still Unstable!

2021-03-05 Thread Tomás Fernández Löbbe
Most likely due to SOLR-15154, I'll take a look

On Fri, Mar 5, 2021 at 4:21 PM Policeman Jenkins Server 
wrote:

> Build: https://jenkins.thetaphi.de/job/Lucene-Solr-master-Windows/9738/
> Java: 64bit/jdk-13.0.2 -XX:+UseCompressedOops -XX:+UseParallelGC
>
> 1 tests failed.
> FAILED:
> org.apache.solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactoryTest.classMethod
>
> Error Message:
> java.io.IOException: Could not remove the following files (in the order of
> attempts):
>
>  
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\solrj\build\tmp\tests-tmp\solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactoryTest_E44B603C4F133641-001\tempFile-001.tmp:
> java.nio.file.FileSystemException:
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\solrj\build\tmp\tests-tmp\solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactoryTest_E44B603C4F133641-001\tempFile-001.tmp:
> The process cannot access the file because it is being used by another
> process.
>
>
>  
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\solrj\build\tmp\tests-tmp\solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactoryTest_E44B603C4F133641-001:
> java.nio.file.DirectoryNotEmptyException:
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\solrj\build\tmp\tests-tmp\solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactoryTest_E44B603C4F133641-001
>
>
> Stack Trace:
> java.io.IOException: Could not remove the following files (in the order of
> attempts):
>
>  
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\solrj\build\tmp\tests-tmp\solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactoryTest_E44B603C4F133641-001\tempFile-001.tmp:
> java.nio.file.FileSystemException:
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\solrj\build\tmp\tests-tmp\solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactoryTest_E44B603C4F133641-001\tempFile-001.tmp:
> The process cannot access the file because it is being used by another
> process.
>
>
>  
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\solrj\build\tmp\tests-tmp\solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactoryTest_E44B603C4F133641-001:
> java.nio.file.DirectoryNotEmptyException:
> C:\Users\jenkins\workspace\Lucene-Solr-master-Windows\solr\solrj\build\tmp\tests-tmp\solr.client.solrj.impl.PreemptiveBasicAuthClientBuilderFactoryTest_E44B603C4F133641-001
>
> at __randomizedtesting.SeedInfo.seed([E44B603C4F133641]:0)
> at org.apache.lucene.util.IOUtils.rm(IOUtils.java:311)
> at
> org.apache.lucene.util.TestRuleTemporaryFilesCleanup.afterAlways(TestRuleTemporaryFilesCleanup.java:207)
> at
> com.carrotsearch.randomizedtesting.rules.TestRuleAdapter$1.afterAlways(TestRuleAdapter.java:31)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:43)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:51)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:44)
> at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:60)
> at
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:47)
> at org.junit.rules.RunRules.evaluate(RunRules.java:20)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:370)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl.lambda$forkTimeoutingTask$0(ThreadLeakControl.java:826)
> at java.base/java.lang.Thread.run(Thread.java:830)
>
> -
> To unsubscribe, e-mail: builds-unsubscr...@lucene.apache.org
> For additional commands, e-mail: builds-h...@lucene.apache.org
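The failure above is the perennial Windows cleanup problem: `IOUtils.rm` cannot delete `tempFile-001.tmp` because another process still holds a handle on it. The durable fix is closing the offending handle, but a common mitigation for transient locks is retrying the delete with backoff. A rough sketch of that pattern in Python (names and policy are illustrative, not what Lucene's Java test framework actually does):

```python
import os
import time

def rm_with_retry(path: str, attempts: int = 5, delay: float = 0.05) -> bool:
    """Try to delete `path`, retrying with exponential backoff to ride
    out transient Windows-style file locks. Returns True once the file
    is gone, False if it never let go."""
    for i in range(attempts):
        try:
            os.remove(path)
            return True
        except FileNotFoundError:
            return True  # already gone: treat as success
        except OSError:
            time.sleep(delay * (2 ** i))  # back off and try again
    return not os.path.exists(path)
```

Retrying only masks a leaked handle; in a test failure like the one above, the real fix is making sure whatever opened the temp file closes it before cleanup runs.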


Re: Removal of Apache HttpComponents/HttpClient for 9.0?

2021-03-05 Thread Tomás Fernández Löbbe
+1 David
> Oh I see; there are entanglements with Solr's authentication plugins
Maybe we should move the authentication plugins to contribs (don't know if
we'll need one or two, one for client-side and one for server-side? I
haven't looked much at the code). Plus, we are shipping with multiple
authentication options, while at most one will be used.

> An even smaller baby step is to mark the httpclient dependency as
"optional" in the Maven pom we generate.  This is a clue to consumers to
move on
> Also marking HttpSolrClient deprecated
+1

On Fri, Mar 5, 2021 at 8:18 AM David Smiley 
wrote:

> An even smaller baby step is to mark the httpclient dependency as
> "optional" in the Maven pom we generate.  This is a clue to consumers to
> move on.
> Also marking HttpSolrClient deprecated.
>
> ~ David
>
> On Fri, Mar 5, 2021 at 11:06 AM David Smiley 
> wrote:
>
>> Oh I see; there are entanglements with Solr's authentication plugins :-(
>> One step in this direction is to *move* it from SolrJ to solr-core.  If
>> someone using SolrJ wants to pass whatever security tokens in headers, they
>> can add their own interceptors.  Also, SolrJ 8 will likely work fine with
>> SolrJ 9, so if there are unforeseen problems after 9.0, we can address them
>> in 9.1 and users that are affected by whatever the problem is can still use
>> SolrJ 8 as an option.
>>
>> Maintaining two HTTP client code paths is a pain.  It makes for possibly
>> duplicative work in metrics, tracing, authentication, and shear mental
>> overhead of what's going on.
>>
>> ~ David
>>
>>
>> On Wed, Oct 14, 2020 at 8:55 AM Noble Paul  wrote:
>>
>>> +1 @David Smiley
>>>
>>> On Sun, Oct 11, 2020 at 4:07 AM Ishan Chattopadhyaya
>>>  wrote:
>>> >
>>> > Maybe we need them for kerberos? I'm totally fine getting rid of
>>> kerberos support from Solr core some day, but it might not be very easy to
>>> refactor it into a package.
>>> >
>>> > On Sat, 10 Oct, 2020, 10:26 pm David Smiley, 
>>> wrote:
>>> >>
>>> >> I think that historically, we are good at adding code but not good at
>>> removing code.  We add new ways to do things but keep the old.  Removal is
>>> more work often forgotten but doing nothing implicitly adds technical debt
>>> henceforth.
>>> >>
>>> >> With that segue... given that our latest SolrClient implementations
>>> are based on Jetty HttpClient (to support Http2 but should support 1.1?),
>>> do we need the original Apache HttpComponents/HttpClient as well?  This is
>>> an honest question... maybe there are subtle reasons they are needed and I
>>> think it would be good as a project that we are clear on them.
>>> >>
>>> >> ~ David Smiley
>>> >> Apache Lucene/Solr Search Developer
>>> >> http://www.linkedin.com/in/davidwsmiley
>>>
>>>
>>>
>>> --
>>> -
>>> Noble Paul
>>>
>>


Re: Congratulations to the new Apache Solr PMC Chair, Jan Høydahl!

2021-02-18 Thread Tomás Fernández Löbbe
Congratulations Jan!

On Thu, Feb 18, 2021 at 10:56 AM Anshum Gupta 
wrote:

> Hi everyone,
>
> I’d like to inform everyone that the newly formed Apache Solr PMC
> nominated and elected Jan Høydahl for the position of the Solr PMC Chair
> and Vice President. This decision was approved by the board in its February
> 2021 meeting.
>
> Congratulations Jan!
>
> --
> Anshum Gupta
>


Re: Congratulations to the new Lucene PMC Chair, Michael Sokolov!

2021-02-17 Thread Tomás Fernández Löbbe
Congratulations Mike!

On Wed, Feb 17, 2021 at 2:42 PM Steve Rowe  wrote:

> Congrats Mike!
>
> --
> Steve
>
> > On Feb 17, 2021, at 4:31 PM, Anshum Gupta 
> wrote:
> >
> > Every year, the Lucene PMC rotates the Lucene PMC chair and Apache Vice
> President position.
> >
> > This year we nominated and elected Michael Sokolov as the Chair, a
> decision that the board approved in its February 2021 meeting.
> >
> > Congratulations, Mike!
> >
> > --
> > Anshum Gupta
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: [VOTE] Release Lucene/Solr 8.8.1 RC2

2021-02-17 Thread Tomás Fernández Löbbe
SUCCESS! [1:07:31.079810]

Tested upgrading from 8.7 and saw no problems

+1 (binding)

On Wed, Feb 17, 2021 at 2:58 AM Noble Paul  wrote:

> SUCCESS! [1:04:46.520370]
>
> +1 Binding
>
> On Wed, Feb 17, 2021 at 1:44 PM Timothy Potter 
> wrote:
> >
> > And I continue to struggle with the python3 command:
> >
> > python3 -u dev-tools/scripts/smokeTestRelease.py \
> >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.8.1-RC2-rev64f3b496bfee762a9d2dbff40700f457f4464dfe
> >
> > On Tue, Feb 16, 2021 at 7:41 PM Timothy Potter 
> wrote:
> > >
> > > Please vote for release candidate 2 for Lucene/Solr 8.8.1
> > >
> > > The artifacts can be downloaded from:
> > >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.8.1-RC2-rev64f3b496bfee762a9d2dbff40700f457f4464dfe
> > >
> > > You can run the smoke tester directly with this command:
> > > python3 -u dev-tools/scripts/smokeTestRelease.py
> > >
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.8.1-RC2-rev64f3b496bfee762a9d2dbff40700f457f4464dfe
> > >
> > > The vote will be open for at least 72 hours i.e. until 2021-02-20
> 03:00 UTC.
> > >
> > > [ ] +1  approve
> > > [ ] +0  no opinion
> > > [ ] -1  disapprove (and reason why)
> > >
> > > Here is my +1 SUCCESS! [0:50:07.947952]
> > >
> > > Also, as with RC1, in addition to the smoke test, I built a Docker
> > > image from the RC locally and verified:
> > >
> > > a. A rolling upgrade of a 3-node 8.7.0 cluster to the 8.8.1 RC
> > > completes successfully w/o any NPEs or weirdness with leader election
> > > / recoveries.
> > > b. The base_url property is stored in replica state after the upgrade
> > > c. A basic client application built with SolrJ 8.7.0 can load cluster
> > > state info directly from ZK and query the 8.8.1 RC2 servers.
> > > d. Same client app built with SolrJ 8.8.0 works as well.
> > >
> > > As this bug-fix release is primarily needed to address a SolrJ
> > > back-compat break (SOLR-15145) and unfortunately our smoke tester
> > > framework does not test for backcompat of older SolrJ against the RC,
> > > I ask others to please test rolling upgrades of servers (ideally
> > > multi-node clusters) running pre-8.8.0 to this RC if possible. Also,
> > > please try client applications that are using an older SolrJ, esp.
> > > those that load cluster state directly from ZK.
> > >
> > > Best regards,
> > > Tim
> >
> >
>
>
> --
> -
> Noble Paul
>
>
>


Re: 8.8.1 release soon

2021-02-10 Thread Tomás Fernández Löbbe
I'd like to get SOLR-15114 in. It already has a patch that I'm testing, I'll
try to merge it today.

On Wed, Feb 10, 2021 at 8:23 AM Timothy Potter  wrote:

> Hi Ishan,
>
> Please let me know how SOLR-15138 is looking on Friday and we can make a
> decision then. My hope is for 8.8.1 sooner than later, but a couple more
> days seems fine too.
>
> Cheers,
> Tim
>
> On Wed, Feb 10, 2021 at 8:55 AM Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>> I'd like for us to include SOLR-15138 please, but the fix is still under
>> review and development. Please let us know if it should be possible for us
>> to wait until that one is done (hopefully quickly), otherwise we can
>> release it later (if you want to proceed with the release before this is
>> ready). Thanks for volunteering!
>>
>> On Wed, 10 Feb, 2021, 9:07 pm Timothy Potter, 
>> wrote:
>>
>>> I was a tad bit ambitious with backporting SOLR-12182 to 8.8.0 and it
>>> seems we have no automated SolrJ back-compat tests in our RC vetting
>>> process, so unfortunately older SolrJ clients don't work with Solr 8.8
>>> server, see SOLR-15145.
>>>
>>> I'd like to release 8.8.1 ASAP to address this problem and will be the
>>> RM.
>>>
>>> Let me know if you have any other issues you think need to go into
>>> 8.8.1, otherwise I'd like to build an RC tomorrow AM US time. It looks like
>>> there are already a number of updates going in for 8.9 so let's keep the
>>> updates for 8.8.1 to a minimum please.
>>>
>>> Cheers,
>>> Tim
>>>
>>


Re: [Reminder] SOLR-15114: WAND does not work correctly on multiple segments in Solr 8.6.3

2021-02-07 Thread Tomás Fernández Löbbe
Thanks, I missed the Jira issue completely. I’ll take a look.

On Sun, Feb 7, 2021 at 6:40 PM Naoto Minami  wrote:

> Hi,
>
> I found and fixed a bug of WAND algorithm in Solr. As stated in the
> contributing manual,
>
> > If no one responds to your patch after a few days, please make friendly
> reminders.
>
> https://cwiki.apache.org/confluence/display/lucene/HowToContribute#HowToContribute-Contributingyourwork
>
> I remind SOLR-15114.
>
> JIRA: https://issues.apache.org/jira/browse/SOLR-15114
> Detailed PDF:
> https://issues.apache.org/jira/secure/attachment/13019548/wand.pdf
> GitHub PR: https://github.com/apache/lucene-solr/pull/2259
>
> Thank you.
>
> -
> Yahoo Japan Corporation
> Naoto Minami
> -
>
>


Re: [DISCUSS] ConfigSet ZK to file system fallback

2021-02-04 Thread Tomás Fernández Löbbe
> Ehh; I am not suggesting that configSets belong local, which would be a
step backwards -- we put them in ZK for a reason right now :-)  I'm
suggesting we have both for the same configSet, where the deployer can
choose which element is node resident vs cluster/ZK resident.  Thanks to
existing Solr features like configOverlay.json and/or XML xi:include plus
one small addition of fallback resolution of configSet files from ZK to the
local node, we'd get this ability.  (see my first email).

To be clear, I didn't suggest we move all configsets to be local. I'm just
saying that having a local configset has those issues I mentioned.

The point I was trying to make is that having a single configset loading
from both local and ZK may be confusing for the user and cause issues that
may be difficult to track: Which file is Solr really reading right now? is
it the local one or the remote one? Is there a local one in a node or not?
is it being correctly overridden? How do I ensure that I always have a
local version of a file to override the remote?

So, I'm thinking that if we want to support this feature, a cleaner
approach could be to just have a type of configset that's defined as
"local", and then it belongs to the local filesystem. We can just prevent a
node from starting if it's supposed to have a configset that it doesn't have.
It's 100% clear where a config file is being read from, etc. Maybe the
"configOverlay.json" is an exception and should live in ZooKeeper (and
never locally) for the config API to work, but having just "default to
local when a file is not in ZooKeeper" just confuses things IMO.
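
For context, the limited per-node knob that already exists (and that David
mentions further down the thread) is Java system property substitution in
config files. A minimal illustrative solrconfig.xml fragment — the property
names and default values here are hypothetical, not a recommendation:

```xml
<!-- Each node can pass -Dsolr.maxMergeAtOnce=20 at startup; nodes that
     don't set the property fall back to the default after the colon. -->
<indexConfig>
  <mergePolicyFactory class="org.apache.solr.index.TieredMergePolicyFactory">
    <int name="maxMergeAtOnce">${solr.maxMergeAtOnce:10}</int>
    <int name="segmentsPerTier">${solr.segmentsPerTier:10}</int>
  </mergePolicyFactory>
</indexConfig>
```

As the thread notes, this works for individual tunables but not for aggregate
configuration like a whole analysis chain.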

On Tue, Jan 26, 2021 at 8:38 PM David Smiley  wrote:

> On Tue, Jan 26, 2021 at 1:27 PM Tomás Fernández Löbbe <
> tomasflo...@gmail.com> wrote:
>
>> Thanks for bringing this up, David. I thought about this same situation
>> before, but I think I never convinced myself in one way or another :p. As I
>> mentioned in many other emails, I think the infrastructure and the node
>> configuration (such as solr.xml) needs to be local (at least, needs to be
>> able to be local and not forced on ZooKeeper) for various reasons.
>>
>
> I agree 100%.  I think the key part there is having *choice* for each
> configuration element, and not one dictated by Solr as to what belongs
> where.  The implementation of it needn't be complicated; it's a
> straight-forward idea to have the same format with conceptual layer /
> aggregation of them.
>
>
>> The same reasons exist for configsets: safe upgrades, or possible
>> node-specific configuration, as you mentioned. But Configsets have another
>> layer of complexity in my mind, which is, you don't know where you'll need
>> them... because you don't (necessarily) know where replicas of a collection
>> are going to be created. True that this is not a problem in the Docker
>> image situation you are describing, or if handled with care, but how can
>> Solr make sure of it?
>>
>
> Ehh; I am not suggesting that configSets belong local, which would be a
> step backwards -- we put them in ZK for a reason right now :-)  I'm
> suggesting we have *both* for the same configSet, where the deployer can
> choose which element is node resident vs cluster/ZK resident.  Thanks to
> existing Solr features like configOverlay.json and/or XML xi:include plus
> one small addition of fallback resolution of configSet files from ZK to the
> local node, we'd get this ability.  (see my first email).
>
> We have a very limited ability to accomplish the broad idea today -- Java
> system properties with variable substitution in our files.  But of course
> it's very limited what you can do with that, and it feels abusive to push
> it too far.  It's fine for individual tunables (e.g. an integer) but not
> more aggregate things like a complete MergePolicy configuration or an
> analysis chain in a schema.
>
> We have another vaguely similar thing conceptually in Solr today --
> ImplicitPlugins.json.  Probably only a few of you have heard of it.  It's
> baked into solr-core's JAR.  Take a look at it.  What if it were a file
> that a deployer could easily replace on the node, e.g. to reduce SolrCore
> load time or for security or to add something that a company wants all
> SolrCores to have?  That is along the lines of what this email thread is
> about:  How can a Solr cluster deployer make settings changes (to include
> registering new plugins) that are either specific to a node and/or should
> be so for an entire cluster without each ZK resident configSet having the
> config element?  *We can come up with ideas but most importantly I want
> to validate the notion that this is a desirable thing.  *I think we
> agree, Thomas, but I'm unsure about Eric & Gus and anyone else for th

Re: [VOTE] Release Lucene/Solr 8.8.0 RC2

2021-01-28 Thread Tomás Fernández Löbbe
It’s 9 binding: Noble, Tim, me, Ignacio, Mike M, Namgyu, Atri, Anshum and
Mike S.

1 non-binding: Haoyu

On Thu, Jan 28, 2021 at 5:34 AM Michael Sokolov  wrote:

> SUCCESS! [0:58:25.213071]
>
> +1 better late than never?
>
> On Thu, Jan 28, 2021 at 8:04 AM Ishan Chattopadhyaya
>  wrote:
> >
> > Thanks Noble!
> >
> > On Thu, 28 Jan, 2021, 4:24 pm Noble Paul,  wrote:
> >>
> >> [+1]  9  (4 binding)
> >>
> >>  [0]  0
> >>
> >> [-1]  0
> >>
> >>
> >> This vote has PASSED.
> >>
> >> I shall proceed with the rest of the release process
> >>
> >> On Thu, Jan 28, 2021 at 6:24 PM Anshum Gupta 
> wrote:
> >> >
> >> > Thanks for handing the release, Noble!
> >> >
> >> > +1 (binding)
> >> >
> >> > SUCCESS! [0:56:12.016387]
> >> >
> >> > Ran the smoke tester, a demo app, and checked the change log. All of
> that looks good.
> >> >
> >> > On Mon, Jan 25, 2021 at 2:22 AM Noble Paul 
> wrote:
> >> >>
> >> >> Please vote for release candidate 2 for Lucene/Solr 8.8.0
> >> >>
> >> >> The artifacts can be downloaded from:
> >> >>
> >> >>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.8.0-RC2-revb10659f0fc18b58b90929cfdadde94544d202c4a/
> >> >>
> >> >> python3 -u dev-tools/scripts/smokeTestRelease.py \
> >> >>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.8.0-RC2-revb10659f0fc18b58b90929cfdadde94544d202c4a/
> >> >>
> >> >>
> >> >>
> >> >> The vote will be open for at least 72 hours
> >> >>
> >> >> [ ] +1  approve
> >> >> [ ] +0  no opinion
> >> >> [ ] -1  disapprove (and reason why)
> >> >>
> >> >> Here is my +1
> >> >> --
> >> >> -
> >> >> Noble Paul
> >> >>
> >> >>
> >> >
> >> >
> >> > --
> >> > Anshum Gupta
> >>
> >>
> >>
> >> --
> >> -
> >> Noble Paul
> >>
> >>
>
>
>


Re: [DISCUSS] ConfigSet ZK to file system fallback

2021-01-26 Thread Tomás Fernández Löbbe
Thanks for bringing this up, David. I thought about this same situation
before, but I think I never convinced myself in one way or another :p. As I
mentioned in many other emails, I think the infrastructure and the node
configuration (such as solr.xml) needs to be local (at least, needs to be
able to be local and not forced on ZooKeeper) for various reasons.
The same reasons exist for configsets: safe upgrades, or possible
node-specific configuration, as you mentioned. But Configsets have another
layer of complexity in my mind, which is, you don't know where you'll need
them... because you don't (necessarily) know where replicas of a collection
are going to be created. True that this is not a problem in the Docker
image situation you are describing, or if handled with care, but how can
Solr make sure of it?

But I think it's a valuable feature to explore. Maybe the configset needs
to exist in ZooKeeper and have some sort of flag (similar to secure=true)
where it could say "local=true", and then prevent Solr instances from starting if
the configset is not present or something? Otherwise the collection
creation and replica addition operations may need to know where configsets
are present, etc. I'm wondering if this mix you are proposing of some files
in ZooKeeper and some files local wouldn't complicate things too much...
not sure.

Tomás

On Mon, Jan 25, 2021 at 3:15 PM David Smiley  wrote:

> I'm not entirely sure how to react to the feedback.  Maybe in listing
> multiple benefits and a follow-on proposal, I inadvertently opened doors to
> distracting points.  I know I can be guilty of scope creep.  My proposal
> has no impact on where JARs go, and so let's not discuss lib directories,
> the package store, or LTR's feature store either which my proposal is not
> related to, ok?  My proposal doesn't even add a new configuration place
> that doesn't already exist.
>
> Let me try to express this proposal through a different angle / lens that
> I think is more clear and motivating than the first:
>
> Each physical Solr node (perhaps a Docker image) is composed of Solr's
> code, perhaps some plugin code too, and some configuration files with some
> settings.  Baked into any code are settings with a default value.  There
> are trivial primitive settings like an integer for "maxMergeAtOnce" on
> TieredMergePolicy, and there are more aggregate settings, like what the
> default MergePolicy is.  Sometimes the default changes from one release to
> the next, or new settings get added or go away (albeit rarely).  Let's just
> consider SolrCloud.
>
> ... Let's say you need to make a settings change.  ...
>
> For changes specified in solrconfig.xml (generalizable to any file in the
> configSet, really), you MUST deploy this to ZooKeeper.  That sucks when the
> configuration might only make sense for some nodes.  Most likely you are
> doing an upgrade in which you can't simply change the Solr nodes in an
> instant, but perhaps some nodes are simply different (different hardware?
> -- SSDs vs HDDs).  Upgrades can be orchestrated but it's more complex when
> there is ZK resident configuration, and it will impose annoying
> restrictions on the underlying code (i.e. back-compat concerns).  By having
> a "physical layer configuration" (borrowing Eric's terminology), we can tie
> some settings to this layer while still having a higher level layer.  I
> proposed one way of doing this; I'd be happy to discuss others.
>
> I'd like to extend the same argument to solr.xml, a node level
> configuration file.  Here, at least there is already _some_ flexibility --
> you can supply solr.xml with the physical layer (the Docker image) *OR* in
> ZooKeeper.  But IMO it's not ideal because it's either-or..  Some
> configuration might make sense with the physical node, and some at the
> cluster node.  Ideally IMO, we'd have a way to blend both such that the
> deployer chooses where the configuration makes sense based on their cluster.
>
> WDYT?
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Sun, Jan 24, 2021 at 6:08 AM Ilan Ginzburg  wrote:
>
>> An aspect that would be interesting to consider IMO is upgrade and
>> configuration changes.
>> For example a collection in use across Solr version upgrade might require
>> different configuration (config set) with the old and new Solr versions.
>> Solr itself can require changes in config across updates.
>>
>> Backward compatibility is the usual answer (the new code continues
>> working with the old config that can be updated once all nodes have been
>> deployed) but this imposes constraints on new code.
>> If there was a way for the new Solr code to "magically" use a different
>> config set for the collection (and for Solr config in general) there would
>> be more freedom to add or change features, change default behavior across
>> Solr versions etc.
>>
>> Ilan
>>
>> On Sat 23 Jan 2021 at 22:22, Gus Heck  wrote:
>>
>>> I'm in agreement with Eric 

Re: [VOTE] Release Lucene/Solr 8.8.0 RC2

2021-01-25 Thread Tomás Fernández Löbbe
Thanks Noble! And thanks for fixing that concurrency issue, I'd hit it but
didn't have time to investigate it.

+1
SUCCESS! [0:58:32.036482]

On Mon, Jan 25, 2021 at 10:19 AM Timothy Potter 
wrote:

> Thanks Noble!
>
> +1 SUCCESS! [1:24:28.212370] (my internet is super slow today)
>
> Re-ran all the Solr operator tests and verified the Cloud graph UI renders
> correctly now.
>
> On Mon, Jan 25, 2021 at 3:22 AM Noble Paul  wrote:
>
>> Please vote for release candidate 2 for Lucene/Solr 8.8.0
>>
>> The artifacts can be downloaded from:
>>
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.8.0-RC2-revb10659f0fc18b58b90929cfdadde94544d202c4a/
>>
>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.8.0-RC2-revb10659f0fc18b58b90929cfdadde94544d202c4a/
>>
>>
>>
>> The vote will be open for at least 72 hours
>>
>> [ ] +1  approve
>> [ ] +0  no opinion
>> [ ] -1  disapprove (and reason why)
>>
>> Here is my +1
>> --
>> -
>> Noble Paul
>>
>>
>>


Re: Separate git repo(s) for Solr modules

2021-01-13 Thread Tomás Fernández Löbbe
Agree with Houston 100%. I think it's a good idea to have an official repo
for things that are related to Solr and, at the same time, don't belong
within the Solr codebase such as the prometheus exporter or, eventually,
the cross-dc work that Anshum is working on (once it "graduates" from
sandbox I guess). For the "first party" plugins (things like
analysis-extra, langid, ltr), it's better to keep them together with the
codebase, so that we can easily guarantee compatibility and releases
together with Solr.

On Wed, Jan 13, 2021 at 8:26 AM Atri Sharma  wrote:

> +1
>
> This is also an opportunity to create the distinction between first party
> supported packages and the other plug-ins.
>
>
>
> On Wed, 13 Jan 2021, 21:00 Ishan Chattopadhyaya, <
> ichattopadhy...@gmail.com> wrote:
>
>> Hi Devs,
>>
>> As we discussed over the last few months, there seems a need to move
>> non-core pieces away from the Solr core module. The contribs are presently
>> a good place, but it makes sense to have a separate git repository hosting
>> such modules. Some candidates that come to mind are the present day contrib
>> modules, upcoming HDFS support module (separated away from solr-core),
>> other first party packages. Along with that, there is also a need for a
>> repository for hosting WIP modules/sub-projects.
>>
>> I propose that we apply for the creation of two new git repositories:
>> 1. solr-extras (or lucene-solr-extras)
>> 2. solr-sandbox (or lucene-solr-sandbox)
>>
>> Well tested, well supported modules/sub-projects can be released straight
>> away from *solr-extras*. The first party packages can be built from this
>> location and shipped with Solr (or be available for install using the
>> package manager CLI).
>>
>> New, unproved, beta, unstable modules can be hosted on *solr-sandbox*
>> (and graduate to solr-extras once stable).
>>
>> Please let me know if there are any questions/concerns with this approach.
>>
>> Thanks and regards,
>> Ishan
>>
>


Re: [DISCUSS] SIP-12: Incremental Backup and Restore

2021-01-07 Thread Tomás Fernández Löbbe
Thanks Jason! This is great, and a very much needed feature.

> This helps to avoid confusion that would
> otherwise arise between identically named files when e.g. a shard
> leader changes between two incremental backups.  (I'll try to expand
> on this in the SIP, as it's a bit hard to give the full context here.)

Thanks, I was wondering the same thing. Maybe it would be good to put an
example of how the file structure of a backup looks like in the backup? and
how the manifest file looks like. As you said, a file with the same name
may refer to different segments created by different cores or the same one
(even if the leader changed, it may be a file from a previous replication).
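
Purely as an illustration of the kind of per-file metadata being discussed
(size, checksum, and a layer of name indirection between the stored name and
the index file name) — this is a sketch, not the SIP's actual manifest
format, and every field name below is hypothetical:

```python
import hashlib
import json
import os


def build_backup_manifest(index_dir, uploaded_names):
    """Sketch of a per-backup manifest: for each index file, record its size
    and checksum, plus the (possibly distinct) unique name it was stored
    under in the backup repository. Field names are illustrative only."""
    files = []
    for local_name, stored_name in uploaded_names.items():
        with open(os.path.join(index_dir, local_name), "rb") as f:
            data = f.read()
        files.append({
            "indexFileName": local_name,    # name to restore the file as
            "storedFileName": stored_name,  # unique name in the repository
            "size": len(data),
            "checksum": hashlib.sha256(data).hexdigest(),
        })
    return json.dumps({"files": files}, indent=2)
```

The name indirection is what lets two incremental backups safely share a
repository even when different cores produced identically named segment files.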

On Thu, Jan 7, 2021 at 1:20 PM Jason Gerlowski 
wrote:

> Thanks for the feedback Mike.  I've gotta give any credit to Shalin
> though, he wrote most of it before the holiday.  He and Dat wrote much
> of the code involved as well.  I haven't done more than steward things
> along so far.  As you suggested, I've updated the SIP to mention the
> related SOLR-13608 (see the bottom of the "Motivation" section).
>
> As for your questions, I've tried to answer them below.
>
> 1. Good catch - it doesn't. The SIP should read that each backup
> creates its own manifest files as needed for directories it creates
> under the base "location".  This way, additional backups can be added
> to the same location without needing to modify existing metadata
> files.  I've updated the SIP to reflect this.
>
> 2. The proposed metadata file is a lot like segments_n (in spirit) in
> that it has pointers to each index file that comprise an
> index/replica.  But it differs in that it stores additional
> information about each file (checksum, size) separate from the file
> itself.  It also allows a layer of naming indirection between what
> files are named in the storage repository and what name they should be
> given upon restoration.  This helps to avoid confusion that would
> otherwise arise between identically named files when e.g. a shard
> leader changes between two incremental backups.  (I'll try to expand
> on this in the SIP, as it's a bit hard to give the full context here.)
>
> 3. My intention was that the 'maxNumBackups' parameter would only
> refer to the incremental backups in a given location.  This was mostly
> informed by the fact that traditional backups today are required to be
> 1-per-location.  (i.e. a backup in 8.6.3 will error out if the
> specified directory has files in it.).  We could fix that aspect of
> traditional backups and find semantics for 'maxNumBackups' that might
> include traditional ones, but IMO it'd add complexity and work for a
> format that the SIP is trying to replace more broadly anyways.
>
> 4. I definitely intended to update LocalFileSystemRepository.  I have
> code to update HdfsBackupRepository as well, but wasn't quite sure
> where that stood since it's currently deprecated.  I haven't seen
> plans to make it a plugin, but might've just missed those discussions
> in other mail.  Anyway, I plan to update it but that assumes it's
> sticking around in one form or another.
>
> 5. Good idea - I didn't realize that was an option.  But it would be
> really nice if possible.  I don't have an estimate on resources.  I
> expect the need would be relatively small - you could restrict the
> tests to running on the nightly runs on ASF's Jenkins unless devs
> provide their own (e.g.) s3 creds.  But that's just a guess obviously,
> and not even in concrete terms.
>
> Thanks again for taking the time to wade through the SIP - really
> appreciate the feedback.  Hope the answers help!
>
> Best,
>
> Jason
>
> On Tue, Jan 5, 2021 at 11:52 AM Mike Drob  wrote:
> >
> > This is a very thorough SIP, thank you for spending the time on it,
> Jason!
> >
> > I have a few minor questions about points that are unclear to me.
> >
> > 1) If we assume that we cannot overwrite files, how does the manifest
> file stay current for incremental backup operations to the same directory?
> > 2) How is the manifest file functionally different from the segments_n
> and segments.gen files?
> > 3) Does the maxNumBackups parameter consider incremental backups or only
> full backups? What happens if we have a full backup and then N incremental
> ones? Do we delete the full backup and convert the oldest incremental one
> into a full? I imagine this might be a metadata operation, but then the
> concerns from question 1 apply.
> > 4) Do we plan to retrofit HDFS Backup and Local File Backup to use the
> new interfaces? I believe we should, but may be willing to accept this as
> out of scope.
> > 5) Regarding cloud provider test resources, we can also approach the ASF
> Infra team to ask for cloud credits. Can you give rough estimates on what
> kind of resourcing would be needed?
> >
> > I did not examine the new APIs in detail, but they looked fine at a high
> level overview. Will probably look again after questions regarding v1/v2
> are figured out.
> >

Re: Welcome Houston Putman to the PMC

2020-12-01 Thread Tomás Fernández Löbbe
Welcome Houston!!

On Tue, Dec 1, 2020 at 1:28 PM Anshum Gupta  wrote:

> Congratulations and welcome, Houston!
>
> On Tue, Dec 1, 2020 at 1:19 PM Mike Drob  wrote:
>
>> I am pleased to announce that Houston Putman has accepted the PMC's
>> invitation to join.
>>
>> Congratulations and welcome, Houston!
>>
>>
>>
>
> --
> Anshum Gupta
>


Re: Welcome Julie Tibshirani as Lucene/Solr committer

2020-11-18 Thread Tomás Fernández Löbbe
Welcome Julie!

On Wed, Nov 18, 2020 at 6:59 PM Ilan Ginzburg  wrote:

> Welcome Julie and congrats!
>
> On Thu, Nov 19, 2020 at 3:51 AM Julie Tibshirani 
> wrote:
>
>> Thank you for the warm welcome! It’s a big honor for me -- I’ve been a
>> Lucene fan since the start of my software career. I’m excited to contribute
>> to such a great project.
>>
>> I’m a developer at Elastic focused on core search features. My
>> professional background is in information retrieval and data systems. I
>> also have an interest in statistical computing and machine learning
>> software. I’m originally from Canada but have lived in the SF Bay Area for
>> many years now. Some of my favorite things…
>> * Color: purple
>> * Album: Siamese Dream
>> * Java keyword: final
>>
>> Julie
>>
>> On Wed, Nov 18, 2020 at 6:33 PM Ishan Chattopadhyaya <
>> ichattopadhy...@gmail.com> wrote:
>>
>>> Welcome Julie!
>>>
>>> On Thu, 19 Nov, 2020, 12:10 am Erick Erickson, 
>>> wrote:
>>>
 Welcome Julie!

 > On Nov 18, 2020, at 1:21 PM, Alexandre Rafalovitch <
 arafa...@gmail.com> wrote:
 >
 > Juliet from the house of Elasticsearch meets a interesting,
 relevancy-aware  committer from the house of Solr.
 >
 > Such a romantic beginning. Not sure I want to know the end of that
 heroine's journey.
 >
 > :-)
 >
 > On Wed., Nov. 18, 2020, 12:59 p.m. Dawid Weiss, <
 dawid.we...@gmail.com> wrote:
 >
 > Congratulations and welcome, Julie.
 >
 > I think juliet is not a bad nick at all, you just need to who -all |
 grep "romeo"... :)
 >
 > Dawid
 >
 > On Wed, Nov 18, 2020 at 4:08 PM Michael Sokolov 
 wrote:
 > I'm pleased to announce that Julie Tibshirani has accepted the PMC's
 > invitation to become a committer.
 >
 > Julie, the tradition is that new committers introduce themselves with
 > a brief bio.
 >
 > I think we may still be sorting out the details of your Apache account
 > (julie@ may have been taken?), but as soon as that has been sorted
 out
 >  and karma has been granted, you can use your new powers to add
 > yourself to the committers section of the Who We Are page on the
 > website: 
 >
 > Congratulations and welcome!
 >
 > Mike Sokolov
 >
 >






Re: [VOTE] Release Lucene/Solr 8.7.0 RC1

2020-10-30 Thread Tomás Fernández Löbbe
+1

SUCCESS! [1:03:01.296851]

On Fri, Oct 30, 2020 at 12:05 PM Nhat Nguyen 
wrote:

> +1 (binding)
> SUCCESS! [0:53:20.894728]
>
>
> On Fri, Oct 30, 2020 at 1:50 PM Michael McCandless <
> luc...@mikemccandless.com> wrote:
>
>> +1
>>
>>
>> SUCCESS! [0:45:49.703726]
>>
>> Mike McCandless
>>
>> http://blog.mikemccandless.com
>>
>>
>> On Fri, Oct 30, 2020 at 8:04 AM Ignacio Vera  wrote:
>>
>>> +1 SUCCESS! [1:42:16.864208]
>>>
>>> On Fri, Oct 30, 2020 at 12:13 PM Adrien Grand  wrote:
>>>
 +1 SUCCESS! [2:11:05.149743]

 On Fri, Oct 30, 2020 at 5:54 AM Atri Sharma  wrote:

> Please vote for release candidate 1 for Lucene/Solr 8.7.0
>
>
> The artifacts can be downloaded from:
>
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.7.0-RC1-rev2dc63e901c60cda27ef3b744bc554f1481b3b067
>
>
> You can run the smoke tester directly with this command:
>
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
>
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.7.0-RC1-rev2dc63e901c60cda27ef3b744bc554f1481b3b067
>
>
> The vote will be open for at least 72 hours i.e. until 2020-11-01
> 20:00 UTC.
>
>
> [ ] +1  approve
>
> [ ] +0  no opinion
>
> [ ] -1  disapprove (and reason why)
>
>
> Here is my +1
>
> 
>
>
> --
> Regards,
>
> Atri
> Apache Concerted
>
>
>

 --
 Adrien

>>>


Re: Index documents in async way

2020-10-08 Thread Tomás Fernández Löbbe
Interesting idea Đạt. The first questions/comments that come to my mind
would be:
* Atomic updates, can those be supported? I guess yes if we can guarantee
that messages are read once and only once.
* I'm guessing we'd need to read messages in an ordered way, so it'd be a
single Kafka partition per Solr shard, right? (Don't know Pulsar)
* May be difficult to determine what replicas should do after a document
update failure. Do they continue processing (which means if it was a
transient error they'll become inconsistent) or do they stop? maybe try to
recover from other active replicas? but if none of the replicas could
process the document then they would all go to recovery?

> Then the user will call another endpoint for tracking the response like
GET status_updates?trackId=,
Maybe we could have a way to stream those responses out? (i.e. via another
queue)? Maybe with an option to only stream out errors or something.

> Currently we are also adding to tlog first then call writer.addDoc later
I don't think that's correct? See DUH2.doNormalUpdate.

> I think it won't be very different from what we are having now, since on
commit (producer threads do the commit) we rotate to a new tlog.
How would this work in your mind with one of the distributed queues?

I think this is a great idea, something that needs to be deeply thought,
but could make big improvements. Thanks for bringing this up, Đạt.
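For concreteness, the submit-then-poll flow being discussed could look roughly like this on the client side. This is only a sketch: the `status_updates` endpoint, the `trackId` field, and the status values are hypothetical names taken from this thread, not an existing Solr API.

```python
# Sketch of the proposed async-update client flow. The endpoint, "trackId"
# field, and status values ("in_queue", "processing", "succeed", "failed")
# are illustrative names from this discussion -- not a real Solr API.
import time


def poll_until_done(fetch_status, track_id, interval=1.0, max_polls=100):
    """Poll the (hypothetical) status endpoint until the update finishes.

    fetch_status(track_id) should return one of "in_queue", "processing",
    "succeed", or "failed".
    """
    for _ in range(max_polls):
        status = fetch_status(track_id)
        if status in ("succeed", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"update {track_id} still pending after {max_polls} polls")
```

In real use `fetch_status` would issue the GET against Solr; as suggested above, failures could alternatively be streamed out via another queue instead of being polled for.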

On Thu, Oct 8, 2020 at 7:39 PM Đạt Cao Mạnh  wrote:

> > Can there be a situation where the index writer fails after the document
> was added to tlog and a success is sent to the user? I think we want to
> avoid such a situation, isn't it?
> > I suppose failures would be returned to the client on the async
>  response?
> To make things more clear, the response for async update will be something
> like this
> { "trackId" : "" }
> Then the user will call another endpoint for tracking the response like
> GET status_updates?trackId=, the response will tell
> whether the update is in_queue, processing, succeed or failed. Currently we
> are also adding to tlog first then call writer.addDoc later.
> Later we can convert current sync operations by waiting until the update
> gets processed before returning to users.
>
> >How would one keep the tlog from growing forever if the actual indexing
> took a long time?
> I think it won't be very different from what we are having now, since on
> commit (producer threads do the commit) we rotate to a new tlog.
>
> > I'd like to add another wrinkle to this. Which is to store the
> information about each batch as a record in the index. Each batch record
> would contain a fingerprint for the batch. This solves lots of problems,
> and allows us to confirm the integrity of the batch. It also means that we
> can compare indexes by comparing the batch fingerprints rather than
> building a fingerprint from the entire index.
> Thank you, it adds another pros to this model :P
>
> On Fri, Oct 9, 2020 at 2:10 AM Joel Bernstein  wrote:
>
>> I think this model has a lot of potential.
>>
>> I'd like to add another wrinkle to this. Which is to store the
>> information about each batch as a record in the index. Each batch record
>> would contain a fingerprint for the batch. This solves lots of problems,
>> and allows us to confirm the integrity of the batch. It also means that we
>> can compare indexes by comparing the batch fingerprints rather than
>> building a fingerprint from the entire index.
>>
>>
>> Joel Bernstein
>> http://joelsolr.blogspot.com/
>>
>>
>> On Thu, Oct 8, 2020 at 11:31 AM Erick Erickson 
>> wrote:
>>
>>> I suppose failures would be returned to the client on the async
>>> response?
>>>
>>> How would one keep the tlog from growing forever if the actual indexing
>>> took a long time?
>>>
>>> I'm guessing that this would be optional.
>>>
>>> On Thu, Oct 8, 2020, 11:14 Ishan Chattopadhyaya <
>>> ichattopadhy...@gmail.com> wrote:
>>>
 Can there be a situation where the index writer fails after the
 document was added to tlog and a success is sent to the user? I think we
 want to avoid such a situation, isn't it?

 On Thu, 8 Oct, 2020, 8:25 pm Cao Mạnh Đạt,  wrote:

> > Can you explain a little more on how this would impact durability of
> updates?
> Since we persist updates into tlog, I do not think this will be an
> issue
>
> > What does a failure look like, and how does that information get
> propagated back to the client app?
> I was not able to do much research, but I think this is gonna be the
> same as the current way of our asyncId. In this case asyncId will be the
> version of an update (in the case of a distributed queue it will be the
> offset). Failed updates will be put into a time-to-live map so users can
> query the failure; for successes we can skip that by leveraging the max
> succeeded version
> so far.
>
> On Thu, Oct 8, 2020 at 9:31 PM Mike Drob  wrote:
>
>> Interesting idea! Can 

Re: Solr Alpha (EA) release of Reference Branch

2020-10-06 Thread Tomás Fernández Löbbe
> Yes, a docker image will definitely help. I wasn't trying to downplay
> that
> >>>
> >>> On Tue, Oct 6, 2020 at 6:55 PM Ishan Chattopadhyaya
> >>>  wrote:
> >>> >
> >>> >
> >>> > > Docker is not a big requirement for large scale installations.
> Most of them already have their own install scripts. Availability of docker
> is not important for them. If a user is only encouraged to install Solr
> because of a docker image , most likely they are not running a large enough
> cluster
> >>> >
> >>> > I disagree, Noble. Having a docker image us going to be useful to
> some clients, with complex usecases. Great point, David!
> >>> >
> >>> > On Tue, 6 Oct, 2020, 1:09 pm Ishan Chattopadhyaya, <
> ichattopadhy...@gmail.com> wrote:
> >>> >>
> >>> >> As I said, I'm *personally* not confident in putting such a big
> changeset into master that wasn't vetted in a real user environment widely.
> I have, in the past, done enough bad things to Solr (directly or
> indirectly), and I don't want to repeat the same. Also, I'll be very
> uncomfortable if someone else did so.
> >>> >>
> >>> >> Having said this, if someone else wants to port the changes over to
> master *without first getting enough real world testing*, feel free to do
> so, and I can focus my efforts elsewhere.
> >>> >>
> >>> >> On Tue, 6 Oct, 2020, 9:22 am Tomás Fernández Löbbe, <
> tomasflo...@gmail.com> wrote:
> >>> >>>
> >>> >>> I was thinking (and I haven’t fleshed it out completely but will
> throw the idea) that an alternative approach with this timeline could be to
> cut 9x branch around November/December? And then you could merge into
> master, it would have the latest  changes from master plus the ref branch
> changes. From there any nightly build could be used to help test/debug.
> >>> >>>
> >>> >>> That said I don’t know for sure what are the changes in the branch
> that do not belong in 9. The problem with them being 10x only is that
> backports would potentially be more difficult for all the life of 9.
> >>> >>>
> >>> >>> On Mon, Oct 5, 2020 at 4:54 PM Noble Paul 
> wrote:
> >>> >>>>
> >>> >>>> >I don't think it can be said what committers do and don't do
> with regards to running Solr.  All of us would answer this differently and
> at different points in time.
> >>> >>>>
> >>> >>>> " I have run it in one large cluster, so it is certified to be
> bug free/stable" I don't think it's a reasonable approach. We need as much
> feedback from our users because each of them stress Solr in a different
> way. This is not to suggest that committers are not doing testing or their
> tests are not valid. When I talk to the committers out here they say they
> do not see any performance stability issues at all. But, my client reports
> issues on a day to day basis.
> >>> >>>>
> >>> >>>>
> >>> >>>>
> >>> >>>> > Definitely publish a Docker image BTW -- it's the best way to
> try out any software.
> >>> >>>>
> >>> >>>> Docker is not a big requirement for large scale installations.
> Most of them already have their own install scripts. Availability of docker
> is not important for them. If a user is only encouraged to install Solr
> because of a docker image , most likely they are not running a large enough
> cluster
> >>> >>>>
> >>> >>>>
> >>> >>>>
> >>> >>>> On Tue, Oct 6, 2020, 6:30 AM David Smiley 
> wrote:
> >>> >>>>>
> >>> >>>>> Thanks so much for your responses Ishan... I'm getting much more
> information in this thread than my attempts to get questions answered on
> the JIRA issue months ago.  And especially,  thank you for volunteering for
> the difficult porting efforts!
> >>> >>>>>
> >>> >>>>> Tomas said:
> >>> >>>>>>
> >>> >>>>>>  I do agree with the previous comments that calling it "Solr
> 10" (even with the "-alpha") would confuse users, maybe use "reference"? or
> maybe something in reference to SOLR-14788?
> >>> >>>>>
> >>> >>>>>
> >>> >>

Re: [VOTE] Release Lucene/Solr 8.6.3 RC1

2020-10-06 Thread Tomás Fernández Löbbe
+1 (binding)

SUCCESS! [1:05:14.591357]

On Mon, Oct 5, 2020 at 1:13 PM Anshum Gupta  wrote:

> +1 (binding)
>
> SUCCESS! [1:00:37.423566]
>
> Tried basic indexing and search and ran the smoke tester.
>
> On Sat, Oct 3, 2020 at 6:53 PM Jason Gerlowski 
> wrote:
>
>> Please vote for release candidate 1 for Lucene/Solr 8.6.3
>>
>>
>> The artifacts can be downloaded from:
>>
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.6.3-RC1-reve001c2221812a0ba9e9378855040ce72f93eced4
>>
>>
>> You can run the smoke tester directly with this command:
>>
>>
>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.6.3-RC1-reve001c2221812a0ba9e9378855040ce72f93eced4
>>
>>
>> The vote will be open for at least 72 hours i.e. until 2020-10-07 02:00
>> UTC.
>>
>>
>> [ ] +1  approve
>>
>> [ ] +0  no opinion
>>
>> [ ] -1  disapprove (and reason why)
>>
>>
>> Here is my +1
>>
>
>
> --
> Anshum Gupta
>


Re: Solr Alpha (EA) release of Reference Branch

2020-10-05 Thread Tomás Fernández Löbbe
I was thinking (and I haven’t fleshed it out completely but will throw the
idea) that an alternative approach with this timeline could be to cut 9x
branch around November/December? And then you could merge into master, it
would have the latest  changes from master plus the ref branch changes.
From there any nightly build could be used to help test/debug.

That said I don’t know for sure what are the changes in the branch that do
not belong in 9. The problem with them being 10x only is that backports
would potentially be more difficult for all the life of 9.

On Mon, Oct 5, 2020 at 4:54 PM Noble Paul  wrote:

> >I don't think it can be said what committers do and don't do with regards
> to running Solr.  All of us would answer this differently and at different
> points in time.
>
> " I have run it in one large cluster, so it is certified to be bug
> free/stable" I don't think it's a reasonable approach. We need as much
> feedback from our users because each of them stress Solr in a
> different way. This is not to suggest that committers are not doing testing
> or their tests are not valid. When I talk to the committers out here they
> say they do not see any performance stability issues at all. But, my client
> reports issues on a day to day basis.
>
>
>
> > Definitely publish a Docker image BTW -- it's the best way to try out
> any software.
>
> Docker is not a big requirement for large scale installations. Most of
> them already have their own install scripts. Availability of docker is not
> important for them. If a user is only encouraged to install Solr because of
> a docker image , most likely they are not running a large enough cluster
>
>
>
> On Tue, Oct 6, 2020, 6:30 AM David Smiley  wrote:
>
>> Thanks so much for your responses Ishan... I'm getting much more
>> information in this thread than my attempts to get questions answered on
>> the JIRA issue months ago.  And especially,  thank you for volunteering for
>> the difficult porting efforts!
>>
>> Tomas said:
>>
>>>  I do agree with the previous comments that calling it "Solr 10" (even
>>> with the "-alpha") would confuse users, maybe use "reference"? or maybe
>>> something in reference to SOLR-14788?
>>>
>>
>> I have the opposite opinion.  This word "reference" is baffling to me
>> despite whatever Mark's explanation is.  I like the justification Ishan
>> gave for 10-alpha and I don't think I could re-phrase his justification any
>> better.  *If* the release was _not_ official (thus wouldn't show up in the
>> usual places anyone would look for a release), I think it would alleviate
>> that confusion concern even more, although I think "alpha" ought to be
>> enough of a signal not to use it without digging deeper on what's going on.
>>
>> Alex then Ishan said:
>>
>>> > Maybe we could release it to
>>> > committers community first and dogfood it "internally"?
>>>
>>> Alex: It is meaningless. Committers don't run large scale installations.
>>> We barely even have time to take care of running unit tests before
>>> destabilizing our builds. We are not the right audience. However, we all
>>> can anyway check out the branch and start playing with it, even without a
>>> release. There are orgs that don't want to install any code that wasn't
>>> officially released; this release is geared towards them (to help us test
>>> this at their scale).
>>>
>>
>> I don't think it can be said what committers do and don't do with regards
>> to running Solr.  All of us would answer this differently and at different
>> points in time.  From time to time, though not at present, I've been well
>> positioned to try out a new version of Solr in a stage/test environment to
>> see how it goes.  (Putting on my Salesforce metaphorical hat...) Even
>> though I'm not able to deploy it in a realistic way today, I'm able to run
>> a battery of tests to see if one of the features we depend on have changed
>> or is broken.  That's useful feedback to an alpha release!  And even though
>> I'm saying I'm not well positioned to try out some new Solr release in a
>> production-ish setting now, it's something I could make a good case for
>> internally since upgrades take a lot of effort where I work.  It's in our
>> interest for SolrCloud to be very stable (of course).
>>
>> Regardless, I think what you're driving at Ishan is that you want an
>> "official" release -- one that goes through the whole ceremony.  You
>> believe that people would be more likely to use it.  I think all we need to
>> do is announce (similar to a real release) that there is some unofficial
>> alpha distribution and that we want to solicit your feedback -- basically,
>> help us find bugs.  Definitely publish a Docker image BTW -- it's the best
>> way to try out any software.  I'm -0 on doing an official release for alpha
>> software because it's unnecessary to achieve the goals and somewhat
>> confusing.  I think the Solr 4 alpha/beta situation was different -- it was
>> not some fork a committer was 

Re: Solr Alpha (EA) release of Reference Branch

2020-10-05 Thread Tomás Fernández Löbbe
> I think the Solr 4 alpha/beta situation was different -- it was not some
fork a committer was maintaining; it was the master branch of its time, and
it was destined to be the very next release, not some possible future
release.

Agree, 4’s alpha/beta was a very different situation.

> I believe it has to be an official release to have enough credibility.
People trust the Apache brand and the community. This will ensure that we
get enough people to test this out. The very objective of this release is
to get help from our users to uncover any bugs. Most big shops will not
deploy unofficial releases in their prod/staging environments. We wish to
tick all the boxes for our users

I think this is fooling ourselves/our users. They trust Apache releases
because we take them seriously, because they have the community behind,
etc. This is a release from a feature branch, so we have to be cautious and
very upfront. While I hope many go and test it and maybe deploy it to some
of their environments, I feel we need to be careful not to send the wrong
message.

> I'm -0 on doing an official release for alpha software because it's
unnecessary to achieve the goals and somewhat confusing

I’d say I’m -1 to make it an official release for those reasons.

I think we should try to merge into master (whenever you are comfortable?
Does it need to wait until 9.0 is out? What's the main reason for that? can
we split the parts that do need to wait vs the ones that don’t?) and then
people can be encouraged to run a nightly version in their test
environments to help debug possible instability. If we need to do
alpha/beta versions from master then I think that would make more sense.



On Mon, Oct 5, 2020 at 12:30 PM David Smiley  wrote:

> Thanks so much for your responses Ishan... I'm getting much more
> information in this thread than my attempts to get questions answered on
> the JIRA issue months ago.  And especially,  thank you for volunteering for
> the difficult porting efforts!
>
> Tomas said:
>
>>  I do agree with the previous comments that calling it "Solr 10" (even
>> with the "-alpha") would confuse users, maybe use "reference"? or maybe
>> something in reference to SOLR-14788?
>>
>
> I have the opposite opinion.  This word "reference" is baffling to me
> despite whatever Mark's explanation is.  I like the justification Ishan
> gave for 10-alpha and I don't think I could re-phrase his justification any
> better.  *If* the release was _not_ official (thus wouldn't show up in the
> usual places anyone would look for a release), I think it would alleviate
> that confusion concern even more, although I think "alpha" ought to be
> enough of a signal not to use it without digging deeper on what's going on.
>
> Alex then Ishan said:
>
>> > Maybe we could release it to
>> > committers community first and dogfood it "internally"?
>>
>> Alex: It is meaningless. Committers don't run large scale installations.
>> We barely even have time to take care of running unit tests before
>> destabilizing our builds. We are not the right audience. However, we all
>> can anyway check out the branch and start playing with it, even without a
>> release. There are orgs that don't want to install any code that wasn't
>> officially released; this release is geared towards them (to help us test
>> this at their scale).
>>
>
> I don't think it can be said what committers do and don't do with regards
> to running Solr.  All of us would answer this differently and at different
> points in time.  From time to time, though not at present, I've been well
> positioned to try out a new version of Solr in a stage/test environment to
> see how it goes.  (Putting on my Salesforce metaphorical hat...) Even
> though I'm not able to deploy it in a realistic way today, I'm able to run
> a battery of tests to see if one of the features we depend on have changed
> or is broken.  That's useful feedback to an alpha release!  And even though
> I'm saying I'm not well positioned to try out some new Solr release in a
> production-ish setting now, it's something I could make a good case for
> internally since upgrades take a lot of effort where I work.  It's in our
> interest for SolrCloud to be very stable (of course).
>
> Regardless, I think what you're driving at Ishan is that you want an
> "official" release -- one that goes through the whole ceremony.  You
> believe that people would be more likely to use it.  I think all we need to
> do is announce (similar to a real release) that there is some unofficial
> alpha distribution and that we want to solicit your feedback -- basically,
> help us find bugs.  Definitely publish a Docker image BTW -- it's the best
> way to try out any software.  I'm -0 on doing an official release for alpha
> software because it's unnecessary to achieve the goals and somewhat
> confusing.  I think the Solr 4 alpha/beta situation was different -- it was
> not some fork a committer was maintaining; it was the master branch of its
> time, 

Re: Solr Alpha (EA) release of Reference Branch

2020-10-04 Thread Tomás Fernández Löbbe
I'm glad to see efforts to merge the reference branch changes into master.
I do agree with the previous comments that calling it "Solr 10" (even with
the "-alpha") would confuse users, maybe use "reference"? or maybe
something in reference to SOLR-14788? I don't know what the issue with the
package manager is, but it likely can be modified to handle something like
this. Also, why does this need to be an official Apache release? Let's just
make unofficial releases (maybe tags in GH and the binaries in your apache
home directory or something) and ask the community to test those. You can
iterate much faster that way (you may want to have multiple of these
releases, maybe weekly at some point, and going through the official
process will be tedious) and it would be much more clear for the users.

On Sun, Oct 4, 2020 at 10:16 AM Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:

> > I do hope we manage to port to master all these improvements!
>
> Ilan: It is a very important consideration. We should ensure that this
> happens (either these changes are ported to master in chunks, or this
> branch becomes master after fixing the history to decompose in meaningful
> chunks).
>
>
> > What’s your sense of how much effort changing _functionality_ on
> master/8x and porting it to the EA is? I’m sure “It Depends(tm)”, I’m more
> interested in whether you expect most stuff ports pretty easily or very
> little stuff to port easily?
>
> Erick: I feel both branches (master and reference) are mostly on par
> functionality wise (except recent changes in master, post June). AFAICT,
> bulk of the changes in reference branch are fixes, refactorings and
> SolrCloud improvements. Mark, please fill in if I've missed something.
>
> > BTW, how many warnings are there ;)
>
> Erick: Tons! Precommit fails too. That's why we need time till December :-)
>
> > does it become futile to chase down the intermittent failures we see on
> master/8x? One of the major thrusts of the EA is things like race
> conditions and the like. If many/most such errors just disappear in EA, I
> have little incentive to fix them in master/8x. Under any circumstances, I
> suspect that most fixes like this would be totally different between the
> two. That’s a huge positive BTW….
>
> Erick: Depends on how optimistic we are about the success of this branch.
> At this moment, I am not confident enough to have these changes in
> reference branch merged back to master, and hence I want users to do a
> thorough production testing before we are confident. Just because tests run
> fast doesn't necessarily mean the service will run flawlessly in
> production, even though that is definitely the hope Mark started this
> effort with. In my opinion, fixes to 8x/master should definitely not stop
> because of this effort.
>
> > Does it make sense to cut 9.0 coincidentally with the EA being adopted
> as the right future direction? In that case, 9x may be short-lived, more of
> a placeholder that we deprecate methods, backport new changes from EA etc,
> but don’t necessarily expend much effort to backport changes from 10x that
> don’t backport easily.
>
> Erick: +1, definitely one of the many reasonable paths to take, IMO.
>
> > Has Lucene changed much (or at all) in the EA? I’m guessing not. Maybe
> not even touched…
>
> Erick: It is the same Lucene as what master is using. I don't see any
> Lucene level changes.
>
> >  Let's focus on making all the tests pass and get this to the hands of
> our users.
>
> Noble: +1
>
> > Let's say Solr 10 ( or whatever name gets picked ) turns out stable
> enough in the alpha phase - What would the next step be?
>
> Varun: I think at that point we should make sure that branch is the
> master: either (i) by porting over all changes from that branch, piece by
> piece, to master branch (I volunteer to do so), or (ii) fix the commit
> history of that branch itself (decompose all the changes into
> meaningful/logical chunks) and make sure all features in current master are
> on that branch (I volunteer to do so).
>
> > Would we bring back all the changes to master?
>
> Varun: As I explained above, that is one of the possibilities.
>
> > Do you have a sense into how that would end up playing out? Could it be
> brought in chunks or would it have to be wholesale ?
>
> Varun: It can surely be brought in wholesale. As per a conversation with
> Noble and David, we agreed that bringing it in chunks would be best going
> forward. All chunks may not independently work/pass tests, but they will be
> isolated enough to capture the themes.
>
> > Also do you know what features in the reference branch have been removed
> because they were unstable ?
>
> Varun: None that I am aware of. Mark, please help me here. There are many
> features whose tests are disabled, either because they were failing or they
> were taking very long. It is our collective goal to ensure they are
> unignored before this EA release. Mark is working on it, AFAIK.
>
> > Finding out the 

Re: restlet dependencies

2020-09-30 Thread Tomás Fernández Löbbe
> Let's support the single file upload feature
+1, but let this behave exactly as a zip file with a single file in it
(regarding trusted/untrusted). We just need to change the configset handler
to be able to handle non-zip files, and have a way to "locate" that file
inside the configset (in case it needs to go somewhere other than the root).
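For illustration, the "single file behaves like a one-file zip" idea could be as simple as wrapping the file into an in-memory zip at the desired path inside the configset. The helper below is a hypothetical sketch, not an existing API; only the wrapping step is shown.

```python
# Wrap a single file into an in-memory zip, placed at the path it should
# occupy inside the configset (e.g. lang/synonyms_foo.txt). The resulting
# bytes could then be sent to the existing configset UPLOAD endpoint.
# This helper is illustrative -- not part of Solr.
import io
import zipfile


def single_file_as_configset_zip(content: bytes, path_in_configset: str) -> bytes:
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        zf.writestr(path_in_configset, content)
    return buf.getvalue()
```

This keeps the trusted/untrusted semantics identical to a regular zip upload, since the server would see a normal one-entry configset zip.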

On Wed, Sep 30, 2020 at 8:45 AM Eric Pugh 
wrote:

> I think that me in “violent agreement” with you.   Let’s understand the
> Annotations approach that we have, or pick something that is commonly used
> like JAX-RS / Jersey.
>
>
>
> On Sep 30, 2020, at 11:41 AM, Timothy Potter  wrote:
>
> I'm sorry, I don't understand what you mean by "make it a single pattern
> (the annotations?)" Eric?
>
> To me, the pattern is well established in the Java world: JAX-RS (with
> Jersey as the underlying impl. which has nice integration with Jetty). But
> when I suggested porting the code that uses restlet to JAX-RS / Jersey,
> Ishan said that wasn't necessary and is already supported with some
> Annotations ... I have no idea what that means and need more info about
> what is already in place. Short of that, replacing restlet with JAX-RS /
> Jersey looks like a trivial amount of work to me (and I'm happy to take it
> on).
>
> Tim
>
> On Wed, Sep 30, 2020 at 9:36 AM Eric Pugh 
> wrote:
>
>> The use case of “I want to update something via a API” is I think pretty
>> common, and it would be nice to make it a single pattern (the annotations?)
>> with lots of examples/developer docs for the next person.
>>
>>
>>
>> On Sep 30, 2020, at 11:04 AM, Timothy Potter 
>> wrote:
>>
>> I started looking into removing Managed Resources in master and wanted to
>> mention that the LTR contrib also relies on this framework
>> (ManagedModelStore and ManagedFeatureStore, see:
>> https://lucene.apache.org/solr/guide/8_6/learning-to-rank.html#uploading-a-model).
>> I only mention this b/c it's been said several times in this thread that
>> nobody uses this feature and it's only for editing config/schema like
>> synonyms. Afaik, LTR is a broadly used feature of Solr so now I'm not so
>> bullish on removing the ability to manage dynamic resources using a REST
>> like API. I agree that changing resources like the synonym set could be
>> replaced with configSet updates but I don't see how to replace the RESTful
>> model / feature store API w/o something like Managed Resources?
>>
>> From where I sit, I think we should just remove the use of restlet in the
>> implementation but keep the API for Solr 9 (master).
>>
>> @Ishan ~ you mentioned there is a way to get REST API like behavior w/o
>> using JAX-RS / Jersey ... something about annotations? Can you point me to
>> some example code of how that is done please?
>>
>> Cheers,
>> Tim
>>
>> On Wed, Sep 30, 2020 at 8:29 AM David Smiley  wrote:
>>
>>> These resources are fundamentally a part of the configSet and can (in
>>> general) affect query results and thus flushing caches (via a reload) is
>>> appropriate.
>>>
>>> ~ David Smiley
>>> Apache Lucene/Solr Search Developer
>>> http://www.linkedin.com/in/davidwsmiley
>>>
>>>
>>> On Wed, Sep 30, 2020 at 9:06 AM Noble Paul  wrote:
>>>
 Well, I believe we should have a mechanism to upload a single file to
 a configset.

 >  A single file configset upload would require the user to reload the
 collection, so it isn't better than managed resources.

 This is not true

 Only config/schema file changes result in core reload.

 On Wed, Sep 30, 2020 at 10:23 PM David Smiley 
 wrote:
 >
 > Definitely don't remove in 8.x!
 >
 > >  A single file configset upload would require the user to reload
 the collection, so it isn't better than managed resources.
 >
 > Do you view that as a substantial point in favor of
 managed-resources?  I view that as a trivial matter, and one I prefer to
 automagic and potentially premature reload if there are additional edits to
 be done (e.g. query-elevation or other word lists).
 >
 > ~ David Smiley
 > Apache Lucene/Solr Search Developer
 > http://www.linkedin.com/in/davidwsmiley
 >
 >
 > On Wed, Sep 30, 2020 at 5:46 AM Ishan Chattopadhyaya <
 ichattopadhy...@gmail.com> wrote:
 >>
 >> > * Nobody knows how it works. It's unsupported
 >> It is supported and documented:
 https://lucene.apache.org/solr/guide/8_6/managed-resources.html
 >>
 >> > * RESTlet dependency
 >> > * Cannot be secured using standard permissions
 >> > * It's extremely complex for the functionality it offers.
 >>
 >> I agree. Whatever alternative we build should address these, before
 we consider removing managed resources.
 >>
 >> On Wed, Sep 30, 2020 at 2:52 PM Ishan Chattopadhyaya <
 ichattopadhy...@gmail.com> wrote:
 >>>
 >>> The managed resources is the only reasonable way to upload synonyms
 on the fly for users today. A single file 

Re: restlet dependencies

2020-09-24 Thread Tomás Fernández Löbbe
I won't step in the way of a single file update. I haven't needed it so far
though. I usually have the configsets in a Git repo (all the configset
together) and I have a simple bash script that essentialy what's described
in the docs[1]: Generate the zip on the fly and upload (optionally set the
auth too). This can become a problem with big zips, but then again,
ZooKeeper limits the size of the configs, so far it hasn't been an issue
for me.


[1]
https://lucene.apache.org/solr/guide/8_6/configsets-api.html#configsets-upload
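That script amounts to something like the following Python sketch. The upload endpoint and octet-stream POST are the Configset API documented at the link above; the helper functions themselves are illustrative, and the upload step obviously needs a running Solr, so only the zipping is exercised here.

```python
# Zip a configset directory and POST it to the Configset UPLOAD API
# (/solr/admin/configs?action=UPLOAD&name=...). Helper names are
# illustrative; the endpoint and content type follow the Solr docs.
import io
import os
import urllib.request
import zipfile


def zip_configset(conf_dir: str) -> bytes:
    """Zip all files under conf_dir, with paths relative to its root."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(conf_dir):
            for name in sorted(files):
                full = os.path.join(root, name)
                zf.write(full, os.path.relpath(full, conf_dir))
    return buf.getvalue()


def upload_configset(solr_url: str, name: str, zip_bytes: bytes):
    """POST the zip to a running Solr; raises on HTTP errors."""
    req = urllib.request.Request(
        f"{solr_url}/solr/admin/configs?action=UPLOAD&name={name}",
        data=zip_bytes,
        headers={"Content-Type": "application/octet-stream"},
        method="POST",
    )
    return urllib.request.urlopen(req)
```

As noted above, ZooKeeper's znode size limit bounds how large such a zip can be, so this stays practical only for reasonably small configsets.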

On Thu, Sep 24, 2020 at 7:13 AM Eric Pugh 
wrote:

> It would be great if we had a simple API for updating a file in a
> configset that didn’t assume you were just uploading a zip file.
>
> As an example use case, if you use the Querqy library, you need to deploy
> a “rules.txt” file, which in olden days just went on the filesystem and you
> would click reload on a core.   In the SolrCloud world, we do this awkward
> “Let me stick the file in Zookeeper directly, avoiding Solr, and then do a
> collection RELOAD” to push out the file everywhere. [1]  It works!  But
> it’s just awkward.
>
> It’s great to know that I’ll be able to change out this awkward process
> using these magic parameters to configSet.  Even nicer would be to just
> wrap the overwrite=true&cleanup=false and the Zip requirement into
> something that sets those.
>
>
> [1]
> https://github.com/querqy/chorus/blob/master/smui/conf/smui2solrcloud.sh#L37
>
> On Sep 23, 2020, at 11:57 PM, Tomás Fernández Löbbe 
> wrote:
>
> > Hmmm; seems the configSet API doesn't have an API to update a single
> file!  I wonder if uploading a configSet to the same name effectively
> overwrites newly updated files but does not delete the existing files?
> I've been working on this recently. As of 8.7, the UPLOAD command supports
> overwriting (before, an UPLOAD on an existing configset name would fail
> with BAD_REQUEST) and you can choose to cleanup or not the extra files with
> the "cleanup" parameter.
> You could upload a single file if you say overwrite=true&cleanup=false,
> but it would still need to be in a zip file (and needs to be located in the
> right path of the zip, for example, a synonyms file may be in
> lang/synonyms_foo.txt or something)
>
> On Wed, Sep 23, 2020 at 8:10 PM David Smiley  wrote:
>
>> +1 to deprecate managed resources in lieu of easier to maintain (and more
>> flexible) file based GET/PUT into the configset.
>>
>> > I don't know if 9 is too soon from a deprecation standpoint
>>
>> IMO it's never too soon as long as there is a deprecated release.  Users
>> take their time upgrading to major versions.
>>
>> > How much harder are the use-cases currently covered by managed
>> resources, if that module was removed?
>>
>> I believe in practice, users synchronize one-way from their DB to Solr if
>> they have dynamic resources like this.  This is true where I work.
>> Otherwise, they would probably be using Solr as the source of truth, which
>> doesn't seem architecturally-sound for most apps IMO.  Those users
>> (hopefully few) would have to spend some time re-engineering the approach.
>> Given one-way sync, the transition here is pretty easy:  serialize the
>> client-managed data to the right Solr format (stopwords vs synonyms vs ...)
>> and then a file upload to Solr/ZK and then telling Solr which collections
>> to "reload".
>>
>> Hmmm; seems the configSet API doesn't have an API to update a single
>> file!  I wonder if uploading a configSet to the same name effectively
>> overwrites newly updated files but does not delete the existing files?
>>
>> ~ David Smiley
>> Apache Lucene/Solr Search Developer
>> http://www.linkedin.com/in/davidwsmiley
>>
>>
>> On Wed, Sep 23, 2020 at 10:28 AM Timothy Potter 
>> wrote:
>>
>>> I agree we should deprecate the managed resources feature, it was the
>>> first thing I was asked to build by LW nearly 7 years ago, before I was a
>>> committer. Restlet was already in place and I built on top of that, not
>>> sure who introduced it originally (nor do I care). Clearly from the vantage
>>> point of looking back, JAX-RS and Jersey won the day with REST in Java but
>>> that simply wasn't the case back then. What's important is how we move
>>> forward vs. bestowing judgement backed by wisdom of hindsight on decisions
>>> made many years ago.
>>>
>>> In the short term, does Apache have an Artifactory (or similar) where we
>>> can host the Restlet dependencies for Github to pull them from? If not,
>>> then we can port the code that's using Restlet over

Re: restlet dependencies

2020-09-23 Thread Tomás Fernández Löbbe
> Hmmm; seems the configSet API doesn't have an API to update a single
file!  I wonder if uploading a configSet to the same name effectively
overwrites newly updated files but does not delete the existing files?
I've been working on this recently. As of 8.7, the UPLOAD command supports
overwriting (before, an UPLOAD on an existing configset name would fail
with BAD_REQUEST), and you can choose whether or not to clean up the extra files with
the "cleanup" parameter.
You could upload a single file if you say overwrite=true&cleanup=false, but
it would still need to be in a zip file (and needs to be located in the
right path of the zip, for example, a synonyms file may be in
lang/synonyms_foo.txt or something)
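For illustration, the single-file zip layout described above can be sketched in a few lines of Python (the configset name, host, and synonyms content below are made-up examples, not anything from this thread):

```python
import io
import zipfile

def build_single_file_configset_zip(relative_path: str, content: bytes) -> bytes:
    """Build an in-memory zip containing one file at its configset-relative path."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        # The entry must carry the path it should have inside the configset,
        # e.g. lang/synonyms_foo.txt, not just sit at the zip root.
        zf.writestr(relative_path, content)
    return buf.getvalue()

zip_bytes = build_single_file_configset_zip(
    "lang/synonyms_foo.txt", b"GB,gib,gigabyte\nTV,television\n"
)
print(zipfile.ZipFile(io.BytesIO(zip_bytes)).namelist())  # ['lang/synonyms_foo.txt']

# The bytes could then be POSTed to the configset UPLOAD action, e.g.:
#   curl -X POST -H "Content-Type: application/octet-stream" \
#     --data-binary @single-file.zip \
#     "http://localhost:8983/solr/admin/configs?action=UPLOAD&name=myconf&overwrite=true&cleanup=false"
```

With cleanup=false the existing files of the configset are left in place, so only the uploaded file changes.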

On Wed, Sep 23, 2020 at 8:10 PM David Smiley  wrote:

> +1 to deprecate managed resources in lieu of easier to maintain (and more
> flexible) file based GET/PUT into the configset.
>
> > I don't know if 9 is too soon from a deprecation stand point
>
> IMO it's never too soon as long as there is a deprecated release.  Users
> take their time upgrading to major versions.
>
> > How much harder are the use-cases currently covered by managed
> resources, if that module was removed?
>
> I believe in practice, users synchronize one-way from their DB to Solr if
> they have dynamic resources like this.  This is true where I work.
> Otherwise, they would probably be using Solr as the source of truth, which
> doesn't seem architecturally-sound for most apps IMO.  Those users
> (hopefully few) would have to spend some time re-engineering the approach.
> Given one-way sync, the transition here is pretty easy:  serialize the
> client-managed data to the right Solr format (stopwords vs synonyms vs ...)
> and then a file upload to Solr/ZK and then telling Solr which collections
> to "reload".
>
> Hmmm; seems the configSet API doesn't have an API to update a single
> file!  I wonder if uploading a configSet to the same name effectively
> overwrites newly updated files but does not delete the existing files?
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Wed, Sep 23, 2020 at 10:28 AM Timothy Potter 
> wrote:
>
>> I agree we should deprecate the managed resources feature, it was the
>> first thing I was asked to build by LW nearly 7 years ago, before I was a
>> committer. Restlet was already in place and I built on top of that, not
>> sure who introduced it originally (nor do I care). Clearly from the vantage
>> point of looking back, JAX-RS and Jersey won the day with REST in Java but
>> that simply wasn't the case back then. What's important is how we move
>> forward vs. bestowing judgement backed by wisdom of hindsight on decisions
>> made many years ago.
>>
>> In the short term, does Apache have an Artifactory (or similar) where we
>> can host the Restlet dependencies for Github to pull them from? If not,
>> then we can port the code that's using Restlet over to using JAX-RS /
>> Jersey. Personally I'd prefer we remove Managed Resources support from 9
>> instead of porting the Restlet code but I don't know if 9 is too soon from
>> a deprecation stand point?
>>
>> Tim
>>
>>
>> On Mon, Sep 21, 2020 at 11:33 PM Noble Paul  wrote:
>>
>>> We should deprecate that feature and remove restlet dependency altogether
>>>
>>> On Mon, Sep 21, 2020 at 10:20 PM Joel Bernstein 
>>> wrote:
>>> >
>>> > Restlet again!!!
>>> >
>>> >
>>> >
>>> > Joel Bernstein
>>> > http://joelsolr.blogspot.com/
>>> >
>>> >
>>> > On Mon, Sep 21, 2020 at 7:18 AM Eric Pugh <
>>> ep...@opensourceconnections.com> wrote:
>>> >>
>>> >> Do we have a community blessed alternative to restlet already?
>>> >>
>>> >> On Sep 20, 2020, at 9:40 AM, Noble Paul  wrote:
>>> >>
>>> >> Haha.
>>> >>
>>> >> In fact schema APIs don't use restlet. Only the managed resources use
>>> it
>>> >>
>>> >> On Sat, Sep 19, 2020, 3:35 PM Ishan Chattopadhyaya <
>>> ichattopadhy...@gmail.com> wrote:
>>> >>>
>>> >>> If I were Talend, I'd immediately start publishing to maven central.
>>> If I were the developer who built the schema APIs, I would never have used
>>> restlet to begin with.
>>> >>>
>>> >>> On Sat, 19 Sep, 2020, 1:13 am Uwe Schindler, 
>>> wrote:
>>> 
>>>  I was thinking the same. Because GitHub does not cache the
>>> downloaded artifacts like our jenkins servers.
>>> 
>>>  It seems to run it in a new VM or container every time, so it
>>> downloads all artifacts. If I were Talend, I'd also block this.
>>> 
>>>  Uwe
>>> 
>>>  Am September 18, 2020 7:32:47 PM UTC schrieb Dawid Weiss <
>>> dawid.we...@gmail.com>:
>>> >
>>> > I don't think it's http/https - I believe restlet repository simply
>>> > bans github servers because of excessive traffic? These URLs work
>>> for
>>> > me locally...
>>> >
>>> > Dawid
>>> >
>>> > On Fri, Sep 18, 2020 at 6:35 PM Christine Poerschke (BLOOMBERG/
>>> > LONDON)  wrote:
>>> >>
>>> >>
>>> >>  This sounds vaguely familiar. 

Re: Our test failure rate is unacceptable.

2020-09-18 Thread Tomás Fernández Löbbe
I thought we were talking about reverting your own commits, not someone
else’s...

On Fri, Sep 18, 2020 at 12:31 PM Dawid Weiss  wrote:

> I don't think it is along the Apache way to revert somebody's commit
>
> without an explicit permission to do so... Not that I would personally
>
> mind if somebody did it for me.
>
>
>
> On Fri, Sep 18, 2020 at 9:06 PM Tomás Fernández Löbbe
>
>  wrote:
>
> >
>
> > Sometimes Jenkins may take hours to take your commit, may fail in the
> middle of your night, may not fail consistently, etc. That's why I don't
> think giving specific timeframes makes sense, but yes, as soon as you
> notice it's failing, it's either fix immediately or revert IMO.
>
> >
>
> > On Fri, Sep 18, 2020 at 12:03 PM Jason Gerlowski 
> wrote:
>
> >>
>
> >> > If it’s inadvertently added, we either fix it within an hour or so or
> revert the offending commit
>
> >>
>
> >> > I don't want to set specific time frames,
>
> >>
>
> >> To play Devil's Advocate here: why wait even an hour to revert a 100%
>
> >> test failure?  Reverts are usually trivial to do, unblock others
>
> >> immediately, and don't interfere with the fix process at all.
>
> >> Remembering the times I've broken the build myself, reverts even seem
>
> >> preferable from that position - reverting up front takes all the
>
> >> time-pressure off of getting out a fix.  Why work under the gun when
>
> >> you don't have to?
>
> >>
>
> >> On Fri, Sep 18, 2020 at 1:14 PM Tomás Fernández Löbbe
>
> >>  wrote:
>
> >> >
>
> >> > I believe these failures are associated to
> https://issues.apache.org/jira/browse/SOLR-14151
>
> >> >
>
> >> > • FAILED:  org.apache.solr.pkg.TestPackages.classMethod
>
> >> > • FAILED:
> org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields
>
> >> > • FAILED:
> org.apache.solr.schema.ManagedSchemaRoundRobinCloudTest.testAddFieldsRoundRobin
>
> >> >
>
> >> > > IMO if a temporary instability is to be introduced deliberately, it
> should be published on the list. If it’s inadvertently added, we either fix
> it within an hour or so or revert the offending commit
>
> >> > I don't want to set specific time frames, but sometimes it's
> obviously too much time.
>
> >> >
>
> >> > On Fri, Sep 18, 2020 at 8:48 AM Atri Sharma  wrote:
>
> >> >>
>
> >> >> When I said temporary, I meant 3-4 hours. Definitely not more than
> that.
>
> >> >>
>
> >> >> IMO we should just roll back offending commits if they are easily
> identifiable. I agree with you — we all have been guilty of breaking builds
> (mea culpa as well). The bad part here is the longevity of the failures.
>
> >> >>
>
> >> >>
>
> >> >> On Fri, 18 Sep 2020 at 21:05, Erick Erickson <
> erickerick...@gmail.com> wrote:
>
> >> >>>
>
> >> >>> bq. IMO if a temporary instability is to be introduced
> deliberately, it should be published on the list
>
> >> >>>
>
> >> >>>
>
> >> >>>
>
> >> >>> Actually, I disagree. Having anything in the tests that fail 100%
> of the time is just unacceptable since it becomes a barrier for everyone
> else. AFAIK, if the problem can be identified to a particular push, I have
> no problems with that push being unilaterally rolled back.
>
> >> >>>
>
> >> >>>
>
> >> >>>
>
> >> >>> The exception for me is when the problem is addressed immediately,
> I’ve certainly been the source of that kind of problem, as have others.
>
> >> >>>
>
> >> >>>
>
> >> >>>
>
> >> >>> What I take great exception to is the fact that some of these tests
> have been failing 100% of the time for the last seven days! If it’s the
> case that the full test suite was never run before the push that’s another
> discussion. Yeah, it takes a long time but…
>
> >> >>>
>
> >> >>>
>
> >> >>>
>
> >> >>> Erick
>
> >> >>>
>
> >> >>>
>
> >> >>>
>
> >> >>> > On Sep 18, 2020, at 11:28 AM, Atri Sharma 
> wrote:
>

Re: Our test failure rate is unacceptable.

2020-09-18 Thread Tomás Fernández Löbbe
Sometimes Jenkins may take hours to take your commit, may fail in the
middle of your night, may not fail consistently, etc. That's why I don't
think giving specific timeframes makes sense, but yes, as soon as you
notice it's failing, it's either fix immediately or revert IMO.

On Fri, Sep 18, 2020 at 12:03 PM Jason Gerlowski 
wrote:

> > If it’s inadvertently added, we either fix it within an hour or so or
> revert the offending commit
>
> > I don't want to set specific time frames,
>
> To play Devil's Advocate here: why wait even an hour to revert a 100%
> test failure?  Reverts are usually trivial to do, unblock others
> immediately, and don't interfere with the fix process at all.
> Remembering the times I've broken the build myself, reverts even seem
> preferable from that position - reverting up front takes all the
> time-pressure off of getting out a fix.  Why work under the gun when
> you don't have to?
>
> On Fri, Sep 18, 2020 at 1:14 PM Tomás Fernández Löbbe
>  wrote:
> >
> > I believe these failures are associated to
> https://issues.apache.org/jira/browse/SOLR-14151
> >
> > • FAILED:  org.apache.solr.pkg.TestPackages.classMethod
> > • FAILED:
> org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields
> > • FAILED:
> org.apache.solr.schema.ManagedSchemaRoundRobinCloudTest.testAddFieldsRoundRobin
> >
> > > IMO if a temporary instability is to be introduced deliberately, it
> should be published on the list. If it’s inadvertently added, we either fix
> it within an hour or so or revert the offending commit
> > I don't want to set specific time frames, but sometimes it's obviously
> too much time.
> >
> > On Fri, Sep 18, 2020 at 8:48 AM Atri Sharma  wrote:
> >>
> >> When I said temporary, I meant 3-4 hours. Definitely not more than that.
> >>
> >> IMO we should just roll back offending commits if they are easily
> identifiable. I agree with you — we all have been guilty of breaking builds
> (mea culpa as well). The bad part here is the longevity of the failures.
> >>
> >>
> >> On Fri, 18 Sep 2020 at 21:05, Erick Erickson 
> wrote:
> >>>
> >>> bq. IMO if a temporary instability is to be introduced deliberately,
> it should be published on the list
> >>>
> >>>
> >>>
> >>> Actually, I disagree. Having anything in the tests that fail 100% of
> the time is just unacceptable since it becomes a barrier for everyone else.
> AFAIK, if the problem can be identified to a particular push, I have no
> problems with that push being unilaterally rolled back.
> >>>
> >>>
> >>>
> >>> The exception for me is when the problem is addressed immediately,
> I’ve certainly been the source of that kind of problem, as have others.
> >>>
> >>>
> >>>
> >>> What I take great exception to is the fact that some of these tests
> have been failing 100% of the time for the last seven days! If it’s the
> case that the full test suite was never run before the push that’s another
> discussion. Yeah, it takes a long time but…
> >>>
> >>>
> >>>
> >>> Erick
> >>>
> >>>
> >>>
> >>> > On Sep 18, 2020, at 11:28 AM, Atri Sharma  wrote:
> >>>
> >>> >
> >>>
> >>> > IMO if a temporary instability is to be introduced deliberately, it
> should be published on the list. If it’s inadvertently added, we either fix
> it within an hour or so or revert the offending commit.
> >>>
> >>> >
> >>>
> >>> > On Fri, 18 Sep 2020 at 20:26, Erick Erickson <
> erickerick...@gmail.com> wrote:
> >>>
> >>> > http://fucit.org/solr-jenkins-reports/failure-report.html
> >>>
> >>> >
> >>>
> >>> >
> >>>
> >>> >
> >>>
> >>> > HdfsAutoAddReplicasTest failing 100% of the time.
> >>>
> >>> >
> >>>
> >>> > TestPackages.classMethod failing 100% of the time
> >>>
> >>> >
> >>>
> >>> > 3-4 AutoAddReplicas tests failing 98% of the time.
> >>>
> >>> >
> >>>
> >>> >
> >>>
> >>> >
> >>>
> >>> > Is anyone looking at these? I realize the code base is changing a
> lot, and some temporary instability is to be expected. What I’d like is for
> some indication that people are actively addressing these

Re: Our test failure rate is unacceptable.

2020-09-18 Thread Tomás Fernández Löbbe
I believe these failures are associated with
https://issues.apache.org/jira/browse/SOLR-14151

• FAILED:  org.apache.solr.pkg.TestPackages.classMethod
• FAILED:
org.apache.solr.schema.PreAnalyzedFieldManagedSchemaCloudTest.testAdd2Fields
• FAILED:
org.apache.solr.schema.ManagedSchemaRoundRobinCloudTest.testAddFieldsRoundRobin

> IMO if a temporary instability is to be introduced deliberately, it
should be published on the list. If it’s inadvertently added, we either fix
it within an hour or so or revert the offending commit
I don't want to set specific time frames, but sometimes it's obviously too
much time.

On Fri, Sep 18, 2020 at 8:48 AM Atri Sharma  wrote:

> When I said temporary, I meant 3-4 hours. Definitely not more than that.
>
> IMO we should just roll back offending commits if they are easily
> identifiable. I agree with you — we all have been guilty of breaking builds
> (mea culpa as well). The bad part here is the longevity of the failures.
>
>
> On Fri, 18 Sep 2020 at 21:05, Erick Erickson 
> wrote:
>
>> bq. IMO if a temporary instability is to be introduced deliberately, it
>> should be published on the list
>>
>>
>>
>> Actually, I disagree. Having anything in the tests that fail 100% of the
>> time is just unacceptable since it becomes a barrier for everyone else.
>> AFAIK, if the problem can be identified to a particular push, I have no
>> problems with that push being unilaterally rolled back.
>>
>>
>>
>> The exception for me is when the problem is addressed immediately, I’ve
>> certainly been the source of that kind of problem, as have others.
>>
>>
>>
>> What I take great exception to is the fact that some of these tests have
>> been failing 100% of the time for the last seven days! If it’s the case
>> that the full test suite was never run before the push that’s another
>> discussion. Yeah, it takes a long time but…
>>
>>
>>
>> Erick
>>
>>
>>
>> > On Sep 18, 2020, at 11:28 AM, Atri Sharma  wrote:
>>
>> >
>>
>> > IMO if a temporary instability is to be introduced deliberately, it
>> should be published on the list. If it’s inadvertently added, we either fix
>> it within an hour or so or revert the offending commit.
>>
>> >
>>
>> > On Fri, 18 Sep 2020 at 20:26, Erick Erickson 
>> wrote:
>>
>> > http://fucit.org/solr-jenkins-reports/failure-report.html
>>
>> >
>>
>> >
>>
>> >
>>
>> > HdfsAutoAddReplicasTest failing 100% of the time.
>>
>> >
>>
>> > TestPackages.classMethod failing 100% of the time
>>
>> >
>>
>> > 3-4 AutoAddReplicas tests failing 98% of the time.
>>
>> >
>>
>> >
>>
>> >
>>
>> > Is anyone looking at these? I realize the code base is changing a lot,
>> and some temporary instability is to be expected. What I’d like is for some
>> indication that people are actively addressing these.
>>
>> >
>>
>> >
>>
>> >
>>
>> > Erick
>>
>> >
>>
>> > -
>>
>> >
>>
>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>
>> >
>>
>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>> >
>>
>> >
>>
>> >
>>
>> > --
>>
>> > Regards,
>>
>> >
>>
>> > Atri
>>
>> > Apache Concerted
>>
>>
>>
>>
>>
>> -
>>
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>
>>
>> --
> Regards,
>
> Atri
> Apache Concerted
>


Re: Github PR Actions

2020-09-18 Thread Tomás Fernández Löbbe
I think this is a good idea. In general, I'm +1 on improving PR validations
as much as possible, and as Houston says, we can always remove them later
if it's not helping. I also agree with David in his Jira comment that even
more important than this is to have the tests running on Jenkins, but I
don't see why we can't have both.

Regards,

Tomás

On Fri, Sep 18, 2020 at 9:05 AM Atri Sharma  wrote:

> +1 to not depending on Docker for local tests.
>
> I do not wish to derail this thread — but re: reference branch, doesn’t it
> have a bunch of tests disabled?
>
> On Fri, 18 Sep 2020 at 03:53, Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>> > It would be great to run all the tests every time, but clearly that is
>> too expensive.
>>
>> The reference_impl branch requires around 30 seconds to run all solr-core
>> tests. That's where we should all put our collective efforts.
>> Also, I have reservations against docker based tests blocking PRs. If I
>> don't have docker running on my dev machine, I wouldn't be able to make
>> those tests pass. This may block my ability to merge any PR whatsoever.
>> Why can't we have integration tests that do not rely on docker?
>>
>> On Thu, Sep 17, 2020 at 9:26 PM Houston Putman 
>> wrote:
>>
>>> Thought I'd make this a thread instead of a discussion on a single JIRA
>>> ticket.
>>>
>>> Currently we have gradle precommit run on PRs for master, which is very
>>> useful and gives people confidence in approving PRs. But precommit is
>>> obviously not the only thing we care about before committing. It would be
>>> great to run all the tests every time, but clearly that is too expensive.
>>>
>>> In SOLR-14856 , I
>>> proposed adding a github action to build and test the solr docker image for
>>> PRs that affected relevant parts of the repo (solr/docker, solr/bin,
>>> solr/packaging and solr/contrib/prometheus-exporter/bin). Running the
>>> docker tests currently takes roughly 12 minutes in the github action, which
>>> would be costly if it ran on every PR. But when running on the small
>>> percentage of PRs that affect those code paths, I think the benefit
>>> outweighs the cost.
>>>
>>> Beyond just the docker tests, I think we can leverage this ability for
>>> other features that are limited to certain code paths. For example running
>>> tests for contrib modules, testing solr/examples, and many of
>>> the independent lucene modules. The SolrJ tests just ran in 3 minutes
>>> locally for me, maybe that'd be a good candidate as well.
>>>
>>> Anyways I'm sure there are other good candidates out there, but I just
>>> wanted to start the discussion and hear other opinions before diving any
>>> deeper.
>>>
>>>
>>>
>>
>> --
> Regards,
>
> Atri
> Apache Concerted
>


Re: Code Analysis during CI?

2020-09-03 Thread Tomás Fernández Löbbe
Thanks Tom. I think this could be very useful as long as it can be
configurable. (The "terms of use here[1] link to "google.com", so I
couldn't check that, but they claim it's free for public repos, so...). We
could always try it and remove it if we don't like it? What do others think?


[1] https://github.com/apps/muse-dev

On Thu, Sep 3, 2020 at 3:06 PM Tom DuBuisson  wrote:

> Hello Lucene/Solr folks,
>
> During Lucene development CI is used for build and unit tests to gate
> merges.  The CI doesn't yet include any analysis tools though, but their
> use has been discussed [1].  I fixed some issues flagged by Facebook's
> Infer and was prompted to bring up the topic here [2].
>
> The recent PR fixed some low-hanging fruit that was reported when I ran
> Muse [3] - a github app that is a platform for static analysis tools.
>  Muse's platform bundles the most useful analysis tools, all open source
> with many of them developed by FANG, and triggers analysis on PRs
> then delivers results as comments.
>
> Because of the PR-centric workflow you only see issues related to the
> changes in the pull request.  This means that even a project where tools
> give a daunting list of issues can still have quiet day-to-day operation.
> Muse also has options to configure individual tools and turn tools or
> warnings off entirely.  If there are concerns in addition to noise and
> added mental tax on development then I'd really like to hear those thoughts.
>
> Would you be up for running Muse on the lucene-solr repo?  Let me know,
> and I hope to hear your thoughts on analysis tools either way.
>
> -Tom
>
> [1] https://issues.apache.org/jira/projects/LUCENE/issues/LUCENE-8847
> [2] https://issues.apache.org/jira/projects/SOLR/issues/SOLR-14819
> [3] Muse result on Lucene:
> https://console.muse.dev/result/TomMD/lucene-solr/01EH5WXS6C1RH1NFYHP6ATXTZ9?tab=results
> Muse app link: https://github.com/apps/muse-dev
> [4] https://github.com/TomMD/lucene-solr/pulls
> [5] Example of muse commenting on an issue
> https://github.com/TomMD/shiro/pull/2
>
>


Re: Solr configuration options

2020-09-03 Thread Tomás Fernández Löbbe
Thanks Ishan,
I still don't think it covers the cases very well. The possibilities of how
that handler could be screwing up things are infinite (it could be
corrupting local cores, causing OOMs, it could be spawning infinite loops,
you name it). If the new handler requires initialization that reaches out
to external system, having a large enough cluster means this can hit
throttling or even take down something if you start them all atomically.
I'm fine with Solr supporting atomic deployments with packages and such,
but I'm not fine with that being the only way to deploy Solr, it may not be
suitable for all use cases.

Also, your workaround requires a ton of knowledge of Solr APIs and
internals, vs a simpler and more standard approach where there are two
versions (Docker images, AMIs, tars, whatever you use): old and new.  Add
"new" and remove "old" in your preferred way. This is exactly the same
you'll do when you need to upgrade Solr BTW, so it needs to be handled
anyways.

On Thu, Sep 3, 2020 at 11:35 AM Erick Erickson 
wrote:

> Hmmm, interesting point about deliberately changing one solr.xml for
> testing purposes. To emulate that on a per-node basis you’d have to have
> something like a “node props” associated with each node, which my instant
> reaction to is “y”.
>
> As far as API only, I’d assumed changes to clusterprops could be either
> way. If we allow Solr to start with no clusterprops, then the API route
> would create one. Pros can go ahead and hand-edit one and push it up if
> they want.
>
> In your nightmare scenario, where are the ZK’s located? Are they still
> running somewhere? Could you hand-edit clusterprops and push it to ZK?
>
> I wish everyone would just use Solr the way I think about it ;)
>
> > On Sep 3, 2020, at 2:11 PM, Tomás Fernández Löbbe 
> wrote:
> >
> > I can see that some of these configurations should be moved to
> clusterporps.json, I don’t believe this is the case for all of them. Some
> are configurations that are targeting the local node (i.e sharedLib path),
> some are needed before connecting to ZooKeeper (zk config). Configuration
> of global handlers and components, while in general you do want to see the
> same conf across all nodes, you may not want the changes to reflect
> atomically and instead rely on a phased upgrade (rolling, blue/green, etc),
> where the conf goes together with the binaries that are being deployed. I
> also fear that making the configuration of some of these components dynamic
> means we have to make the code handle them dynamically (i.e. recreate the
> CollectionsHandler based on callback from ZooKeeper). This would be very
> hardly used in reality, but all our code needs to be restructured to handle
> this, I fear this will complicate the code needlessly, and may introduce
> leaks and races of all kinds. If those components can have configuration
> that should be dynamic (some toggle, threshold, etc), I’d love to see those
> as clusterprops, key-value mostly.
> >
> > If we were to put this configuration in clusterprops, would that mean
> that I’m only able to do config changes via API? On a new cluster, do I
> need to start Solr, make a collections API call to change the collections
> handler? Or am I supposed to manually change the clusterporps file before
> starting Solr and push it to Zookeeper (having a file intended for manual
> edits and API edits is bad IMO)? Maybe via the cli, but still, I’d need to
> do this for every cluster I create (vs have the solr.xml in my source
> repository and Docker image, for example). Also I lose the ability to have
> this configuration in my git repo?
> >
> > I'm +1 to keep a node configuration local to the node in the filesystem.
> Currently, it's solr.xml. I've seen comments about xml difficult to
> read/write, I think that's personal preference so, while I don't see it
> that way, I understand lots of people do and things have been moving away
> to other formats, I'm open to discuss that as a change.
> >
> > > However, 1, 2, and 3, are not trivial for a large number of Solr nodes
> and if they aren’t right diagnosing them can be “challenging”…
> > In my mind, solr.xml goes with your code. Having it up to date means
> having all your nodes running the same version of your code. As I said,
> this is the "desired state" of the cluster, but may not be the case all the
> time (i.e. during deployments), and that's fine. Depending on how you
> manage the cluster, you may want to live with different versions for some
> time (you may have canaries or be doing a blue/green deployment, etc).
> Realistically speaking, if you have a 500+ node cluster, you must have a
> system in place to manage configuration and versions, let's not try to bend
> backw

Re: Solr configuration options

2020-09-03 Thread Tomás Fernández Löbbe
I can see that some of these configurations should be moved to
clusterprops.json, but I don’t believe this is the case for all of them. Some
are configurations that are targeting the local node (i.e sharedLib path),
some are needed before connecting to ZooKeeper (zk config). Configuration
of global handlers and components, while in general you do want to see the
same conf across all nodes, you may not want the changes to reflect
atomically and instead rely on a phased upgrade (rolling, blue/green, etc),
where the conf goes together with the binaries that are being deployed. I
also fear that making the configuration of some of these components dynamic
means we have to make the code handle them dynamically (i.e. recreate the
CollectionsHandler based on callback from ZooKeeper). This would be very
hardly used in reality, but all our code needs to be restructured to handle
this, I fear this will complicate the code needlessly, and may introduce
leaks and races of all kinds. If those components can have configuration
that should be dynamic (some toggle, threshold, etc), I’d love to see those
as clusterprops, key-value mostly.
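A simple key-value cluster property like that can already be set through the Collections API's CLUSTERPROP action; as a rough sketch (the base URL and the property being flipped are only examples):

```python
from urllib.parse import urlencode

def clusterprop_url(base_url: str, name: str, val: str) -> str:
    """Build a Collections API CLUSTERPROP request URL for one key-value property."""
    query = urlencode({"action": "CLUSTERPROP", "name": name, "val": val})
    return f"{base_url}/admin/collections?{query}"

# e.g. flipping a cluster-wide toggle against a hypothetical local node:
url = clusterprop_url("http://localhost:8983/solr", "urlScheme", "https")
print(url)
# http://localhost:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https
```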

If we were to put this configuration in clusterprops, would that mean that
I’m only able to do config changes via API? On a new cluster, do I need to
start Solr, make a collections API call to change the collections handler?
Or am I supposed to manually change the clusterporps file before starting
Solr and push it to Zookeeper (having a file intended for manual edits and
API edits is bad IMO)? Maybe via the cli, but still, I’d need to do this
for every cluster I create (vs have the solr.xml in my source repository
and Docker image, for example). Also I lose the ability to have this
configuration in my git repo?

I'm +1 to keep a node configuration local to the node in the filesystem.
Currently, it's solr.xml. I've seen comments about xml difficult to
read/write, I think that's personal preference so, while I don't see it
that way, I understand lots of people do and things have been moving away
to other formats, I'm open to discuss that as a change.

> However, 1, 2, and 3, are not trivial for a large number of Solr nodes
and if they aren’t right diagnosing them can be “challenging”…
In my mind, solr.xml goes with your code. Having it up to date means having
all your nodes running the same version of your code. As I said, this is
the "desired state" of the cluster, but may not be the case all the time
(i.e. during deployments), and that's fine. Depending on how you manage the
cluster, you may want to live with different versions for some time (you
may have canaries or be doing a blue/green deployment, etc). Realistically
speaking, if you have a 500+ node cluster, you must have a system in place
to manage configuration and versions, let's not try to bend backwards for a
situation that isn't that realistic.

Let me put an example of things I fear with making these changes atomic.
Let's say I want to start using a new, custom HealthCheckHandler
implementation, that I have put in a jar (and let's assume the jar is
already in all nodes). If I use solr.xml (where one can currently
configures this implementation), I can do a phased deployment (yes, this is
a restart of all nodes), if the healthcheck handler is buggy and fails
request, the nodes with the new code will never show as healthy, so the
deployment will likely stop (i.e. if you are using Kubernetes and using
probes, those instances will keep restarting, if you use ASG in AWS you can
do the same thing). If you make it an atomic change, bye-bye cluster, all
nodes will start reporting unhealthy (Kubernetes and ASG will kill all
those nodes). Good luck doing API changes to revert now, there is no node
to respond to those requests. Hopefully you were using some sort of stable
storage because all ephemeral is gone. Bringing back that cluster is going
to be a PITA. I have seen similar things happen.


On Thu, Sep 3, 2020 at 9:40 AM Erick Erickson 
wrote:

> bq.  Isn’t solr.xml is a way to hardcode config in a more flexible way
> that a Java class?
>
> Yes, and the problem word here is “flexible”. For a single-node system
> that flexibility is desirable. Flexibility comes at the cost of complexity,
> especially in the SolrCloud case. In this case, not so much Solr code
> complexity as operations complexity.
>
> For me this isn’t so much a question of functionality as
> administration/troubleshooting/barrier to entry.
>
> If:
> 1. you can guarantee that every solr.xml file on every node in your entire
> 500 node cluster is up to date
> 2. or you can guarantee that the solr.xml stored on Zookeeper
> 3. and you can guarantee that clusterprops.json in cloud mode is
> interacting properly with whichever solr.xml is read
> 4. Then I’d have no problem with solr.xml.
>
> However, 1, 2, and 3, are not trivial for a large number of Solr nodes and
> if they aren’t right diagnosing them can be “challenging”…
>
> Imagine all the ways that “somehow” the solr.xml 
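As a hedged aside on Erick's point 1 above (guaranteeing every node's local solr.xml is up to date), comparing checksums of fetched copies is one minimal way to verify it; the fetch step and file paths are assumptions for illustration only.

```shell
# Sketch: verify that several local copies of solr.xml are identical by
# comparing checksums. How you fetch each node's copy is up to you, e.g.
# (hypothetical):  scp solr-N:/var/solr/solr.xml ./solr.xml.N

check_same_checksum() {  # check_same_checksum <file1> <file2> ...
  first=$(sha256sum "$1" | cut -d' ' -f1)
  for f in "$@"; do
    [ "$(sha256sum "$f" | cut -d' ' -f1)" = "$first" ] || return 1
  done
  return 0
}

# demo with two identical copies:
printf 'same' > /tmp/solr.xml.a
printf 'same' > /tmp/solr.xml.b
check_same_checksum /tmp/solr.xml.a /tmp/solr.xml.b && echo "solr.xml copies match"
```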

Re: With ant removed, will GitHub PR job always fail 1st check?

2020-08-28 Thread Tomás Fernández Löbbe
I have this PR to remove the ant action:
https://github.com/apache/lucene-solr/pull/1798

On Fri, Aug 28, 2020 at 6:58 PM Alexandre Rafalovitch 
wrote:

> For the Pull Request, GitHub is running both Ant and Gradle precommit.
>
>
>
> Now that ant is gone, it is probably safe to remove that check as
>
> well. It will always fail at "Ivy bootstrap" phase.
>
>
>
> Regards,
>
>Alex.
>
>
>
> -
>
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>
>
>


Re: Solr configuration options

2020-08-28 Thread Tomás Fernández Löbbe
As for AMIs, you have to do it at least once, right? Or are you thinking of
someone using a pre-existing AMI? I see your point for the case of someone
using the official Solr image as-is without any volume mounts, I guess. I'm
wondering if trying to put node configuration inside ZooKeeper is another
case where we try to solve things inside Solr that the industry has already
solved differently (AMIs and Docker images are exactly about packaging code
and config).
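As a concrete, hedged sketch of what "packaging code and config together" can look like: bake solr.xml and any plugin jars into the image next to the binaries. The base image tag and the in-image paths are assumptions, not verified against the official image's docs.

```shell
# Sketch only: ship node-level config (solr.xml) with the code it belongs to.
# Base tag and paths are assumptions; check the official solr image docs for
# the actual SOLR_HOME location before relying on them.
cat > Dockerfile.sketch <<'EOF'
FROM solr:8.6
# node config travels with the binaries: old nodes keep the old config,
# new nodes get the new one, simply by rolling out a new image tag
COPY solr.xml /var/solr/data/solr.xml
COPY plugins/*.jar /opt/solr/lib/
EOF
echo "wrote Dockerfile.sketch"
# a build/rollout would then be something like:
#   docker build -t mysolr:v2 -f Dockerfile.sketch .
```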

On Fri, Aug 28, 2020 at 11:11 AM Gus Heck  wrote:

> Which means whoever wants to make changes to solr needs to be
> able/willing/competent to make AMI/dockers/etc ... and one has to manage
> versions of those variants as opposed to managing versions of config files.
>
> On Fri, Aug 28, 2020 at 1:55 PM Tomás Fernández Löbbe <
> tomasflo...@gmail.com> wrote:
>
>> I think if you are using AMIs (or Docker), you could put the node
>> configuration inside the AMI (or Docker image), as Ilan said, together with
>> the binaries. Say you have a custom top-level handler (Collections, Cores,
>> Info, whatever), which takes some arguments and it's configured in solr.xml
>> and you are doing an upgrade, you probably want your old nodes (running
>> with your old AMI/Docker image with old jars) to keep the old configuration
>> and your new nodes to use the new.
>>
>> On Fri, Aug 28, 2020 at 10:42 AM Gus Heck  wrote:
>>
>>> Putting solr.xml in zookeeper means you can add a node simply by
>>> starting solr pointing to the zookeeper, and ensure a consistent solr.xml
>>> for the new node if you've customized it. Since I rarely (never) hit use
>>> cases where I need different per node solr.xml. I generally advocate
>>> putting it in ZK, I'd say heterogeneous node configs is the special case
>>> for advanced use here.  I'm a fan of a (hypothetical future) world where
>>> nodes can be added/removed simply without need for local configuration. It
>>> would be desirable IMHO to have a smooth node add and remove process and
>>> having to install a file into a distribution manually after unpacking it
>>> (or having coordinate variations of config to be pushed to machines) is a
>>> minus. If and when autoscaling is happy again I'd like to be able to start
>>> an AMI in AWS pointing at zk (or similar) and have it join automatically,
>>> and then receive replicas to absorb load (per whatever autoscaling is
>>> specified), and then be able to issue a single command to a node to sunset
>>> the node that moves replicas back off of it (again per autoscaling
>>> preferences, failing if autoscaling constraints would be violated) and then
>>> asks the node to shut down so that the instance in AWS (or wherever) can be
>>> shut down safely.  This is a black friday,  new tenants/lost tenants, or
>>> new feature/EOL feature sort of use case.
>>>
>>> Thus IMHO all config for cloud should live somewhere in ZK. File system
>>> access should not be required to add/remove capacity. If multiple node
>>> configurations need to be supported we should have nodeTypes directory in
>>> zk (similar to configsets for collections), possible node specific configs
>>> there and an env var that can be read to determine the type (with some
>>> cluster level designation of a default node type). I think that would be
>>> sufficient to parameterize AMI stuff (or containers) by reading tags into
>>> env variables
>>>
>>> As for knowing what a node loaded, we really should be able to emit any
>>> config file we've loaded (without reference to disk or zk). They aren't
>>> that big and in most cases don't change that fast, so caching a simple copy
>>> as a string in memory (but only if THAT node loaded it) for verification
>>> would seem smart. Having a file on disk doesn't tell you if solr loaded
>>> with that version or if it's changed since solr loaded it either.
>>>
>>> Anyway, that's the pie in my sky...
>>>
>>> -Gus
>>>
>>> On Fri, Aug 28, 2020 at 11:51 AM Ilan Ginzburg 
>>> wrote:
>>>
>>>> What I'm really looking for (and currently my understanding is that
>>>> solr.xml is the only option) is *a cluster config a Solr dev can set
>>>> as a default* when introducing a new feature for example, so that the
>>>> config is picked out of the box in SolrCloud, yet allowing the end user to
>>>> override it if he so wishes.
>>>>
>>>> But "cluster config" in this context *with a caveat*: when doing a
>>>> rolling upgrade, nodes running new code n

Re: Solr configuration options

2020-08-28 Thread Tomás Fernández Löbbe
I think if you are using AMIs (or Docker), you could put the node
configuration inside the AMI (or Docker image), as Ilan said, together with
the binaries. Say you have a custom top-level handler (Collections, Cores,
Info, whatever), which takes some arguments and it's configured in solr.xml
and you are doing an upgrade, you probably want your old nodes (running
with your old AMI/Docker image with old jars) to keep the old configuration
and your new nodes to use the new.

On Fri, Aug 28, 2020 at 10:42 AM Gus Heck  wrote:

> Putting solr.xml in zookeeper means you can add a node simply by starting
> solr pointing to the zookeeper, and ensure a consistent solr.xml for the
> new node if you've customized it. Since I rarely (never) hit use cases
> where I need different per node solr.xml. I generally advocate putting it
> in ZK, I'd say heterogeneous node configs is the special case for advanced
> use here.  I'm a fan of a (hypothetical future) world where nodes can be
> added/removed simply without need for local configuration. It would be
> desirable IMHO to have a smooth node add and remove process and having to
> install a file into a distribution manually after unpacking it (or having
> coordinate variations of config to be pushed to machines) is a minus. If
> and when autoscaling is happy again I'd like to be able to start an AMI in
> AWS pointing at zk (or similar) and have it join automatically, and then
> receive replicas to absorb load (per whatever autoscaling is specified),
> and then be able to issue a single command to a node to sunset the node
> that moves replicas back off of it (again per autoscaling preferences,
> failing if autoscaling constraints would be violated) and then asks the
> node to shut down so that the instance in AWS (or wherever) can be shut
> down safely.  This is a black friday,  new tenants/lost tenants, or new
> feature/EOL feature sort of use case.
>
> Thus IMHO all config for cloud should live somewhere in ZK. File system
> access should not be required to add/remove capacity. If multiple node
> configurations need to be supported we should have nodeTypes directory in
> zk (similar to configsets for collections), possible node specific configs
> there and an env var that can be read to determine the type (with some
> cluster level designation of a default node type). I think that would be
> sufficient to parameterize AMI stuff (or containers) by reading tags into
> env variables
>
> As for knowing what a node loaded, we really should be able to emit any
> config file we've loaded (without reference to disk or zk). They aren't
> that big and in most cases don't change that fast, so caching a simple copy
> as a string in memory (but only if THAT node loaded it) for verification
> would seem smart. Having a file on disk doesn't tell you if solr loaded
> with that version or if it's changed since solr loaded it either.
>
> Anyway, that's the pie in my sky...
>
> -Gus
>
> On Fri, Aug 28, 2020 at 11:51 AM Ilan Ginzburg  wrote:
>
>> What I'm really looking for (and currently my understanding is that
>> solr.xml is the only option) is *a cluster config a Solr dev can set as
>> a default* when introducing a new feature for example, so that the
>> config is picked out of the box in SolrCloud, yet allowing the end user to
>> override it if he so wishes.
>>
>> But "cluster config" in this context *with a caveat*: when doing a
>> rolling upgrade, nodes running new code need the new cluster config, nodes
>> running old code need the previous cluster config... Having a per node
>> solr.xml deployed atomically with the code as currently the case has
>> disadvantages, but solves this problem effectively in a very simple way. If
>> we were to move to a central cluster config, we'd likely need to introduce
>> config versioning or as Noble suggested elsewhere, only write code that's
>> backward compatible (w.r.t. config), deploy that code everywhere then once
>> no old code is running, update the cluster config. I find this approach
>> complicated from both dev and operational perspective with an unclear added
>> value.
>>
>> Ilan
>>
>> PS. I've stumbled upon the loading of solr.xml from Zookeeper in the
>> past but couldn't find it as I wrote my message so I thought I imagined
>> it...
>>
>> It's in SolrDispatchFilter.loadNodeConfig(). It establishes a connection
>> to ZK for fetching solr.xml then closes it.
>> It relies on system property waitForZk as the connection timeout (in
>> seconds, defaults to 30) and system property zkHost as the Zookeeper
>> host.
>>
>> I believe solr.xml can only end up in ZK through the use of ZkCLI. Then
>> the user is on his own to manage SolrCloud version upgrades: if a new
>> solr.xml is included as part of a new version of SolrCloud, the user
>> having pushed a previous version into ZK will not see the update.
>> I wonder if putting solr.xml in ZK is a common practice.
>>
>> On Fri, Aug 28, 2020 at 4:58 PM Jan Høydahl 
>> wrote:
>>
>>> I interpret 

Re: Solr configuration options

2020-08-28 Thread Tomás Fernández Löbbe
I think of it exactly as Jan described it. solr.xml is the node
configuration, usually should be the same for all the cluster, but not
necessarily all the time (i.e. during a deployment they may differ).
Putting it in ZooKeeper is, I believe, a mistake, because then you see a
file up there, but it’s not necessarily what Solr loaded, and there is no
way to know for sure what solr.xml a node started with.
I think some configurations can probably be moved to clusterprops (e.g.
maxBooleanClauses), while others still belong in whatever node
configuration file we currently have, solr.xml.
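As a side note on the ZkCLI route mentioned in the quoted thread below (the usual way solr.xml ends up in ZooKeeper), a rough, hedged sketch; the script path inside a Solr install and the ZooKeeper address are assumptions about a typical setup.

```shell
# Sketch: pushing a local solr.xml into ZooKeeper with ZkCLI's putfile.
# Script location and zkhost are assumptions; note Solr will not refresh
# this copy for you on upgrade, you own it from then on.
ZKCLI=server/scripts/cloud-scripts/zkcli.sh
ZKHOST=localhost:2181

zk_putfile_cmd() {  # print the command instead of running it, for illustration
  echo "$ZKCLI -zkhost $ZKHOST -cmd putfile /solr.xml $1"
}

zk_putfile_cmd /path/to/solr.xml
```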

On Fri, Aug 28, 2020 at 8:51 AM Ilan Ginzburg  wrote:

> What I'm really looking for (and currently my understanding is that
> solr.xml is the only option) is *a cluster config a Solr dev can set as a
> default* when introducing a new feature for example, so that the config
> is picked out of the box in SolrCloud, yet allowing the end user to
> override it if he so wishes.
>
> But "cluster config" in this context *with a caveat*: when doing a
> rolling upgrade, nodes running new code need the new cluster config, nodes
> running old code need the previous cluster config... Having a per node
> solr.xml deployed atomically with the code as currently the case has
> disadvantages, but solves this problem effectively in a very simple way. If
> we were to move to a central cluster config, we'd likely need to introduce
> config versioning or as Noble suggested elsewhere, only write code that's
> backward compatible (w.r.t. config), deploy that code everywhere then once
> no old code is running, update the cluster config. I find this approach
> complicated from both dev and operational perspective with an unclear added
> value.
>
> Ilan
>
> PS. I've stumbled upon the loading of solr.xml from Zookeeper in the past
> but couldn't find it as I wrote my message so I thought I imagined it...
>
> It's in SolrDispatchFilter.loadNodeConfig(). It establishes a connection
> to ZK for fetching solr.xml then closes it.
> It relies on system property waitForZk as the connection timeout (in
> seconds, defaults to 30) and system property zkHost as the Zookeeper host.
>
> I believe solr.xml can only end up in ZK through the use of ZkCLI. Then
> the user is on his own to manage SolrCloud version upgrades: if a new
> solr.xml is included as part of a new version of SolrCloud, the user
> having pushed a previous version into ZK will not see the update.
> I wonder if putting solr.xml in ZK is a common practice.
>
> On Fri, Aug 28, 2020 at 4:58 PM Jan Høydahl  wrote:
>
>> I interpret solr.xml as the node-local configuration for a single node.
>> clusterprops.json is the cluster-wide configuration applying to all nodes.
>> solrconfig.xml is of course per core etc
>>
>> solr.in.sh is the per-node ENV-VAR way of configuring a node, and many
>> of those are picked up in solr.xml (other in bin/solr).
>>
>> I think it is important to keep a file-local config file which can only
>> be modified if you have shell access to that local node, it provides an
>> extra layer of security.
>> And in certain cases a node may need a different configuration from
>> another node, e.g. during an upgrade.
>>
>> I put solr.xml in zookeeper. It may have been a mistake, since it may not
>> make all that much sense to load solr.xml which is a node-level file, from
>> ZK. But if it uses var substitutions for all node-level stuff, it will
>> still work since those vars are pulled from local properties when parsed
>> anyway.
>>
>> I’m also somewhat against hijacking clusterprops.json as a general
>> purpose JSON config file for the cluster. It was supposed to be for simple
>> properties.
>>
>> Jan
>>
>> > 28. aug. 2020 kl. 14:23 skrev Erick Erickson :
>> >
>> > Solr.xml can also exist on Zookeeper, it doesn’t _have_ to exist
>> locally. You do have to restart to have any changes take effect.
>> >
>> > Long ago in a Solr far away solr.xml was where all the cores were
>> defined. This was before “core discovery” was put in. Since solr.xml had to
>> be there anyway and was read at startup, other global information was added
>> and it’s lived on...
>> >
>> > Then clusterprops.json came along as a place to put, well, cluster-wide
>> properties so having solr.xml too seems awkward. Although if you do have
>> solr.xml locally to each node, you could theoretically have different
>> settings for different Solr instances. Frankly I consider this more of a
>> bug than a feature.
>> >
>> > I know there have been some talk about removing solr.xml entirely, but
>> I’m not sure what the thinking is about what to do instead. Whatever we do
>> needs to accommodate standalone. We could do the same trick we do now, and
>> essentially move all the current options in solr.xml to clusterprops.json
>> (or other ZK node) and read it locally for stand-alone. The API could even
>> be used to change it if it was stored locally.
>> >
>> > That still leaves the chicken-and-egg problem if connecting to ZK in
>> 

Re: [VOTE] Release Lucene/Solr 8.6.2 RC1

2020-08-27 Thread Tomás Fernández Löbbe
+1 (binding)

SUCCESS! [1:03:21.542868]

On Thu, Aug 27, 2020 at 2:35 PM Uwe Schindler  wrote:

> Hi,
>
>
>
> +1 Binding from my side.
>
>
>
> Results:
>
> https://jenkins.thetaphi.de/job/Lucene-Solr-Release-Tester/34/console
>
>
>
> Works with Java 8 and later.
>
> I did not fully check everything in the artifacts, as this is a bugfix
> release only.
>
>
>
> Uwe
>
>
>
> -
>
> Uwe Schindler
>
> Achterdiek 19, D-28357 Bremen
>
> https://www.thetaphi.de
>
> eMail: u...@thetaphi.de
>
>
>
> *From:* Ignacio Vera 
> *Sent:* Wednesday, August 26, 2020 3:42 PM
> *To:* dev@lucene.apache.org
> *Subject:* [VOTE] Release Lucene/Solr 8.6.2 RC1
>
>
>
> Please vote for release candidate 1 for Lucene/Solr 8.6.2
>
>
>
> The artifacts can be downloaded from:
>
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.6.2-RC1-rev016993b65e393b58246d54e8ddda9f56a453eb0e
>
>
>
> You can run the smoke tester directly with this command:
>
>
>
> python3 -u dev-tools/scripts/smokeTestRelease.py \
>
>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.6.2-RC1-rev016993b65e393b58246d54e8ddda9f56a453eb0e
>
>
>
> The vote will be open for at least 72 hours i.e. until 2020-08-29 15:00
> UTC.
>
>
>
> [ ] +1  approve
>
> [ ] +0  no opinion
>
> [ ] -1  disapprove (and reason why)
>
>
>
> Here is my +1
>
>
>
> SUCCESS! [1:14:00.656250]
>


Re: Lucene/Solr 8.6.2 bugfix release

2020-08-25 Thread Tomás Fernández Löbbe
Hi, I'm about to backport SOLR-14751. It's a trivial fix for the ZooKeeper
UI.

On Tue, Aug 25, 2020 at 10:03 AM Michael McCandless <
luc...@mikemccandless.com> wrote:

> +1, thanks Simon.
>
> Mike McCandless
>
> http://blog.mikemccandless.com
>
>
> On Tue, Aug 25, 2020 at 4:14 AM Ignacio Vera  wrote:
>
>> +1 thanks Simon
>>
>> On Tue, Aug 25, 2020 at 10:12 AM Uwe Schindler  wrote:
>>
>>> Hi,
>>>
>>> I re-enabled Jenkins jobs for 8.6.
>>>
>>> Uwe
>>>
>>> -
>>> Uwe Schindler
>>> Achterdiek 19, D-28357 Bremen
>>> https://www.thetaphi.de
>>> eMail: u...@thetaphi.de
>>>
>>> > -Original Message-
>>> > From: Simon Willnauer 
>>> > Sent: Tuesday, August 25, 2020 10:07 AM
>>> > To: Ishan Chattopadhyaya 
>>> > Cc: Lucene Dev 
>>> > Subject: Re: Lucene/Solr 8.6.2 bugfix release
>>> >
>>> > I'd actually like to build the RC earlier than the end of the week.
>>> > Unless somebody objects I'd like to build one tonight or tomorrow.
>>> >
>>> > simon
>>> >
>>> > On Tue, Aug 25, 2020 at 7:52 AM Ishan Chattopadhyaya
>>> >  wrote:
>>> > >
>>> > > Thanks Simon and Ignacio!
>>> > >
>>> > > On Tue, 25 Aug, 2020, 11:21 am Simon Willnauer,
>>> >  wrote:
>>> > >>
>>> > >> +1 thank you! I was about to write the same email. Lets sync on the
>>> RM
>>> > >> I can certainly help... I need to go and find my code signing key
>>> > >> first :)
>>> > >>
>>> > >> simon
>>> > >>
>>> > >> On Tue, Aug 25, 2020 at 7:49 AM Ignacio Vera 
>>> wrote:
>>> > >> >
>>> > >> > Hi,
>>> > >> >
>>> > >> > I propose a 8.6.2 bugfix release and I volunteer as RM. The
>>> motivation for
>>> > this release is LUCENE-9478 where Simon addressed a serious memory
>>> leak in
>>> > DWPTDeleteQueue.
>>> > >> >
>>> > >> > If there are no objections I am planning to build the first RC by
>>> the end of
>>> > this week.
>>> > >> >
>>> > >> > Ignacio
>>> > >> >
>>> > >>
>>> > >>
>>> -
>>> > >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> > >> For additional commands, e-mail: dev-h...@lucene.apache.org
>>> > >>
>>> >
>>> > -
>>> > To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> > For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>


Re: Welcome Atri Sharma to the PMC

2020-08-20 Thread Tomás Fernández Löbbe
Welcome, Atri!

On Thu, Aug 20, 2020 at 2:50 PM Jitendra soni  wrote:

> Welcome Atri!
>
> On Fri, Aug 21, 2020 at 3:18 AM Nhat Nguyen 
> wrote:
>
>> Welcome Atri!
>>
>> On Thu, Aug 20, 2020 at 5:47 PM Gus Heck  wrote:
>>
>>> Welcome! :)
>>>
>>> On Thu, Aug 20, 2020 at 4:44 PM jim ferenczi 
>>> wrote:
>>>
 Welcome Atri!

 Le jeu. 20 août 2020 à 22:00, Jan Høydahl  a
 écrit :

> Welcome Atri!
>
> Jan
>
> 20. aug. 2020 kl. 20:16 skrev Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com>:
>
> 
> I am pleased to announce that Atri Sharma has accepted the PMC's
> invitation to join.
>
> Congratulations and welcome, Atri!
>
>
>
>>>
>>> --
>>> http://www.needhamsoftware.com (work)
>>> http://www.the111shift.com (play)
>>>
>>
>
> --
> Thanks
> Jitendra
>


Re: [VOTE] Release Lucene/Solr 8.6.1 RC2

2020-08-13 Thread Tomás Fernández Löbbe
Late, but +1

SUCCESS! [1:02:21.504688]

On Thu, Aug 13, 2020 at 1:06 PM Houston Putman 
wrote:

> It's been >72h since the vote was initiated and the result is:
>
> +1  8  (5 binding)
>  0  0
> -1  0
>
> This vote has PASSED
>
> On Thu, Aug 13, 2020 at 9:13 AM Anshum Gupta 
> wrote:
>
>> Thanks for doing this, Houston.
>>
>> SUCCESS! [1:21:02.914140]
>>
>> Tested out basic getting started w/ a 10 node cluster across a few local
>> machines and looked at the CHANGELOG. Everything LGTM on that front.
>>
>> +1 (binding)
>>
>>
>> On Mon, Aug 10, 2020 at 12:02 PM Houston Putman 
>> wrote:
>>
>>> Please vote for release candidate 2 for Lucene/Solr 8.6.1
>>>
>>> The artifacts can be downloaded from:
>>>
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.6.1-RC2-rev6e11a1c3f0599f1c918bc69c4f51928d23160e99
>>>
>>> You can run the smoke tester directly with this command:
>>>
>>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>>
>>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.6.1-RC2-rev6e11a1c3f0599f1c918bc69c4f51928d23160e99
>>>
>>> The vote will be open for at least 72 hours i.e. until 2020-08-13 20:00
>>> UTC.
>>>
>>> [ ] +1  approve
>>> [ ] +0  no opinion
>>> [ ] -1  disapprove (and reason why)
>>>
>>> Here is my +1
>>>
>>
>>
>> --
>> Anshum Gupta
>>
>


Re: Badapple report

2020-08-10 Thread Tomás Fernández Löbbe
Hi Erick,
I've introduced and later fixed a bug in TestConfig. It hasn't failed
since, so please don't annotate it.

On Mon, Aug 10, 2020 at 7:47 AM Erick Erickson 
wrote:

> We’re backsliding some. I encourage people to look at:
> http://fucit.org/solr-jenkins-reports/failure-report.html, we have a
> number of ill-behaved tests, particularly TestRequestRateLimiter,
> TestBulkSchemaConcurrent, TestConfig, SchemaApiFailureTest and
> TestIndexingSequenceNumbers…
>
>
> Raw fail count by week totals, most recent week first (corresponds to
> bits):
> Week: 0  had  100 failures
> Week: 1  had  82 failures
> Week: 2  had  94 failures
> Week: 3  had  502 failures
>
>
> Failures in Hoss' reports for the last 4 rollups.
>
> There were 585 unannotated tests that failed in Hoss' rollups. Ordered by
> the date I downloaded the rollup file, newest->oldest. See above for the
> dates the files were collected
> These tests were NOT BadApple'd or AwaitsFix'd
>
> Failures in the last 4 reports..
>Report   Pct runsfails   test
>  0123   4.4 1583 37  BasicDistributedZkTest.test
>  0123   4.3 1727 77  CloudExitableDirectoryReaderTest.test
>  0123   2.5 8598248
> CloudExitableDirectoryReaderTest.testCreepThenBite
>  0123   1.9 1712 36
> CloudExitableDirectoryReaderTest.testWhitebox
>  0123   0.5 1587 11
> DocValuesNotIndexedTest.testGroupingDVOnlySortLast
>  0123   2.2 1679 82  HttpPartitionOnCommitTest.test
>  0123   0.5 1592 16  HttpPartitionTest.test
>  0123   1.0 1578  9  HttpPartitionWithTlogReplicasTest.test
>  0123   1.3 1569 13  LeaderFailoverAfterPartitionTest.test
>  0123   7.4 1643 59  MultiThreadedOCPTest.test
>  0123   0.3 1567  8  ReplaceNodeTest.test
>  0123   0.2 1588  6  ShardSplitTest.testSplitShardWithRule
>  0123 100.0   38 33  SharedFSAutoReplicaFailoverTest.test
>  0123   2.1  818 19
> TestCircuitBreaker.testBuildingMemoryPressure
>  0123   2.6  818 13
> TestCircuitBreaker.testResponseWithCBTiming
>  0123   6.2 1848104  TestContainerPlugin.testApiFromPackage
>  0123   2.5 1662 33  TestDistributedGrouping.test
>  0123   0.4 1448  6  TestDynamicLoading.testDynamicLoading
>  0123   6.4 1614 74  TestExportWriter.testExpr
>  0123   8.6 1356 70  TestHdfsCloudBackupRestore.test
>  0123   9.1 1697136  TestLocalFSCloudBackupRestore.test
>  0123   0.5 1607 26  TestPackages.testPluginLoading
>  0123   0.7 1596 15
> TestQueryingOnDownCollection.testQueryToDownCollectionShouldFailFast
>  0123   1.5 1610 59
> TestReRankQParserPlugin.testMinExactCount
>  0123   0.3 1552  4  TestReplicaProperties.test
>  0123   0.3 1556  5
> TestSolrCloudWithDelegationTokens.testDelegationTokenRenew
>  0123   0.3 1565  9  TestSolrConfigHandlerCloud.test
> 
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


Re: Standardize Leading Test or Trailing Test

2020-08-05 Thread Tomás Fernández Löbbe
+1

On Wed, Aug 5, 2020 at 10:37 PM David Smiley 
wrote:

> +1 to standardize on something.
> This has been brought up before: LUCENE-8626 -- credit to
> Christine who started the work.  I recommend resuming the discussion there.
>
> ~ David
>
>
> On Thu, Aug 6, 2020 at 12:08 AM Anshum Gupta 
> wrote:
>
>> +1
>>
>> Thanks for bringing this up, Marcus. Standardizing this is really great.
>>
>> On Wed, Aug 5, 2020 at 8:01 PM Marcus Eagan 
>> wrote:
>>
>>> Hi community, what do you think about a small effort to standardize on
>>> either leading or trailing with the word "Test" in test class names
>>> across the repo? Most projects do one or the other, and it has an impact
>>> on developer productivity. I'll explain my use case:
>>>
>>> I'm working on a class and I want to modify its test to evaluate my
>>> changes. If the test class is named in a standard way, I can find it
>>> easily. If it is not, there are typically two names to try, which I
>>> consider distracting and sloppy. Distraction is expensive for developers.
>>> I have some more important efforts that I'm working on, but if the
>>> community agrees on this one, I can open a ticket and submit a PR. Let me
>>> know what you think.
>>>
>>> Hoping to make the project more developer friendly.
>>>
>>> --
>>> Marcus Eagan
>>>
>>>
>>
>> --
>> Anshum Gupta
>>
>


Re: Welcome Munendra SN to the PMC

2020-08-04 Thread Tomás Fernández Löbbe
Congrats and Welcome!

On Mon, Aug 3, 2020 at 9:25 AM Steve Rowe  wrote:

> Congrats and welcome, Munendra!
>
> --
> Steve
>
> > On Aug 2, 2020, at 7:19 PM, Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
> >
> > I am pleased to announce that Munendra SN has accepted the PMC's
> invitation to join.
> >
> > Congratulations and welcome, Munendra!
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Welcome Gus Heck to the PMC

2020-08-04 Thread Tomás Fernández Löbbe
Welcome, Gus!!

On Mon, Aug 3, 2020 at 9:27 AM Steve Rowe  wrote:

> Congrats and welcome, Gus!
>
> --
> Steve
>
> > On Aug 2, 2020, at 7:20 PM, Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
> >
> > I am pleased to announce that Gus Heck has accepted the PMC's invitation
> to join.
> >
> > Congratulations and welcome, Gus!
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Welcome Namgyu Kim to the PMC

2020-08-04 Thread Tomás Fernández Löbbe
Welcome!

On Mon, Aug 3, 2020 at 9:50 PM Christian Moen  wrote:

> Congrats, Namgyu!
>
> On Mon, Aug 3, 2020 at 8:19 AM Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>> I am pleased to announce that Namgyu Kim has accepted the PMC's
>> invitation to join.
>>
>> Congratulations and welcome, Namgyu!
>>
>


Re: Deprecate Schemaless Mode?

2020-08-03 Thread Tomás Fernández Löbbe
Agree with Jason. It's useful for prototyping and developing. I remember
seeing some warnings about it (in the logs?), but maybe we need more?

On Mon, Aug 3, 2020 at 10:41 AM Jason Gerlowski 
wrote:

> > Is anyone on this list using schemaless mode in production or have you
> tried to?
>
> Schemaless mode is one of a group of Solr features present for
> convenience but not intended for production usage.  It's in the same
> boat as "bin/post", and SolrCell, and others.  These features do cause
> headaches when users ignore the documented restrictions and use them
> for more than prototyping.  But at the same time they're super
> valuable for these sort of demo-ing or getting-started use cases.  An
> easy getting-started experience is important, and schemaless et al
> serve a mostly positive role in that.
>
> I think we'd better serve our users if we left schemaless
> in/undeprecated, and instead focused on making it harder to
> (unknowingly) use them in ways contrary to community recommendations.
> Add louder warnings in the documentation (where not already present).
> Add warnings to the Solr logs the first time these features are used.
> Disable them by default (where that makes sense).  Taken to the
> extreme, we could even add a section into Solr's response that lists
> non-production features used in serving a given request.
>
> There are lots of ways to address the "feature X is trappy" problem
> without removing X altogether.
>
> On Mon, Aug 3, 2020 at 11:33 AM Marcus Eagan 
> wrote:
> >
> > Community,
> >
> > There are many of us that have had to deal with the pain of managing the
> schemaless mode of operation in Solr. I'm curious to get others' thoughts
> about how well it is working for them and whether they would like to
> continue using it.
> >
> > I for one don't think schemaless works as intended, and I favor
> deprecating it and replacing it with something more usable, but I am sure
> others have thoughts here.
> >
> > Is anyone on this list using schemaless mode in production or have you
> tried to?
> >
> > A preliminary discussion has occurred in this Jira ticket:
> https://issues.apache.org/jira/browse/SOLR-14701
> >
> > Thank you all,
> >
> > Marcus Eagan
> >
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Welcome Mike Drob to the PMC

2020-07-24 Thread Tomás Fernández Löbbe
Welcome Mike!!

On Fri, Jul 24, 2020 at 3:51 PM Martin Gainty  wrote:

> Congratulation Mike!
> martin
>
> --
> *From:* Erick Erickson 
> *Sent:* Friday, July 24, 2020 4:55 PM
> *To:* dev@lucene.apache.org 
> *Subject:* Re: Welcome Mike Drob to the PMC
>
> Welcome Mike!
>
> > On Jul 24, 2020, at 4:12 PM, Ilan Ginzburg  wrote:
> >
> > Congratulations Mike, happy to hear that!
> >
> > Ilan
> >
> > On Fri, Jul 24, 2020 at 9:56 PM Anshum Gupta 
> wrote:
> > I am pleased to announce that Mike Drob has accepted the PMC's
> invitation to join.
> >
> > Congratulations and welcome, Mike!
> >
> > --
> > Anshum Gupta
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: [VOTE] Release Lucene/Solr 8.6.0 RC1

2020-07-09 Thread Tomás Fernández Löbbe
+1

SUCCESS! [1:04:02.550893]

On Thu, Jul 9, 2020 at 12:36 PM Michael Sokolov  wrote:

> +1
>
> SUCCESS! [0:59:20.777306]
>
> (tested on Graviton ARM processor)
>
> On Thu, Jul 9, 2020 at 1:10 PM Anshum Gupta 
> wrote:
> >
> > +1
> >
> > SUCCESS! [1:15:03.975368]
> >
> > On Wed, Jul 8, 2020 at 1:56 AM Bruno Roustant 
> wrote:
> >>
> >> Please vote for release candidate 1 for Lucene/Solr 8.6.0
> >>
> >> The artifacts can be downloaded from:
> >>
> >>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.6.0-RC1-reva9c5fb0da2dfc8c7375622c80dbf1a0cc26f44dc
> >>
> >> You can run the smoke tester directly with this command:
> >>
> >> python3 -u dev-tools/scripts/smokeTestRelease.py \
> >>
> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.6.0-RC1-reva9c5fb0da2dfc8c7375622c80dbf1a0cc26f44dc
> >>
> >> The vote will be open for at least 72 hours i.e. until 2020-07-11 09:00
> UTC.
> >>
> >> [ ] +1  approve
> >> [ ] +0  no opinion
> >> [ ] -1  disapprove (and reason why)
> >>
> >> Here is my +1
> >
> >
> >
> > --
> > Anshum Gupta
>
>
>


Re: 8.6 release

2020-07-06 Thread Tomás Fernández Löbbe
Just resolved SOLR-14590.

On Mon, Jul 6, 2020 at 4:22 AM Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:

> I'll take a look today, Bruno. Thanks.
>
> On Mon, 6 Jul, 2020, 4:32 pm Bruno Roustant, 
> wrote:
>
>> Hi all,
>>
>> 8.6 RC is planned tomorrow but there are still 9 Jira issues unresolved
>> for 8.6 (+ private ones?)
>>
>> Please review and update their status.
>>
>> 3 blockers
>> SOLR-14599 Introduce cluster level plugins through packages
>> SOLR-14593 Package store API to disable file upload over HTTP
>> SOLR-14580 CloudSolrClient cannot be initialized using 'zkHosts' builder
>>
>> Other
>> SOLR-14590 Add support for FeatureField in Solr
>> SOLR-14516 NPE during Realtime GET
>> SOLR-14422 Solr 8.5 Admin UI shows Angular placeholders on first load /
>> refresh
>> SOLR-14398 package store PUT should be idempotent
>> SOLR-14311 Shared schema should not have access to core level classes
>> LUCENE-9356 Add tests for corruptions caused by byte flips
>>
>> Le dim. 5 juil. 2020 à 08:10, David Smiley  a
>> écrit :
>>
>>> Pertaining to the highlighter performance regression:
>>> https://issues.apache.org/jira/browse/SOLR-14628
>>> It's a simple change in a default setting, that is furthermore
>>> consistent with how the behavior was prior to Solr 8.5
>>>
>>> I'm hoping this can make it into the release?  See the PR.
>>>
>>> ~ David
>>>
>>>
>>> On Wed, Jun 24, 2020 at 3:05 PM David Smiley 
>>> wrote:
>>>
>>>> Thanks starting this discussion, Cassandra.
>>>>
>>>> I reviewed the issues I was involved with and I don't quite see
>>>> something worth noting.
>>>>
>>>> I plan to add a note about a change in defaults within
>>>> UnifiedHighlighter that could be a significant perf regression.  This
>>>> wasn't introduced in 8.6 but introduced in 8.5 and it's significant enough
>>>> to bring attention to.  I could add it in 8.5's section but then add a
>>>> short pointer to it in 8.6.
>>>>
>>>> ~ David
>>>>
>>>>
>>>> On Wed, Jun 24, 2020 at 2:52 PM Cassandra Targett <
>>>> casstarg...@gmail.com> wrote:
>>>>
>>>>> I started looking at the Ref Guide for 8.6 to get it ready, and notice
>>>>> there are no Upgrade Notes in `solr-upgrade-notes.adoc` for 8.6. Is it
>>>>> really true that none are needed at all?
>>>>>
>>>>> I’ll add what I usually do about new features/changes that maybe
>>>>> wouldn’t normally make the old Upgrade Notes section, I just find it
>>>>> surprising that there weren’t any devs who thought any of the 100 or so
>>>>> Solr changes warrant any user caveats.
>>>>> On Jun 17, 2020, 12:27 PM -0500, Tomás Fernández Löbbe <
>>>>> tomasflo...@gmail.com>, wrote:
>>>>>
>>>>> +1. Thanks Bruno
>>>>>
>>>>> On Wed, Jun 17, 2020 at 6:22 AM Mike Drob  wrote:
>>>>>
>>>>>> +1
>>>>>>
>>>>>> The release wizard python script should be sufficient for everything.
>>>>>> If you run into any issues with it, let me know, I used it for 8.5.2 and
>>>>>> think I understand it pretty well.
>>>>>>
>>>>>> On Tue, Jun 16, 2020 at 8:31 AM Bruno Roustant <
>>>>>> bruno.roust...@gmail.com> wrote:
>>>>>>
>>>>>>> Hi all,
>>>>>>>
>>>>>>> It’s been a while since we released Lucene/Solr 8.5.
>>>>>>> I’d like to volunteer to be a release manager for an 8.6 release. If
>>>>>>> there's agreement, then I plan to cut the release branch two weeks 
>>>>>>> today,
>>>>>>> on June 30th, and then to build the first RC two days later.
>>>>>>>
>>>>>>> This will be my first time as release manager so I'll probably need
>>>>>>> some guidance. Currently I have two resource links on this subject:
>>>>>>> https://cwiki.apache.org/confluence/display/LUCENE/ReleaseTodo
>>>>>>>
>>>>>>> https://github.com/apache/lucene-solr/tree/master/dev-tools/scripts#releasewizardpy
>>>>>>> If you have more, please share with me.
>>>>>>>
>>>>>>> Bruno
>>>>>>>
>>>>>>


Re: Welcome Tomoko Uchida to the PMC

2020-07-06 Thread Tomás Fernández Löbbe
Welcome Tomoko!

On Mon, Jul 6, 2020 at 9:08 AM Namgyu Kim  wrote:

>   Congratulations, Tomoko! :D
>
> On Mon, Jul 6, 2020 at 10:27 PM Steve Rowe  wrote:
>
>> Welcome and congrats Tomoko!
>>
>> --
>> Steve
>>
>> > On Jul 4, 2020, at 2:26 AM, Adrien Grand  wrote:
>> >
>> > I am pleased to announce that Tomoko Uchida has accepted the PMC's
>> invitation to join.
>> >
>> > Welcome Tomoko!
>> >
>> > --
>> > Adrien
>>
>>
>>
>>


Re: Welcome Michael Sokolov to the PMC

2020-07-06 Thread Tomás Fernández Löbbe
Welcome, Michael!

On Mon, Jul 6, 2020 at 9:08 AM Namgyu Kim  wrote:

> Congratulations, Michael! :D
>
> On Mon, Jul 6, 2020 at 7:22 PM Mayya Sharipova
>  wrote:
>
>> Congratulations, Michael!
>>
>> On Sat, Jul 4, 2020 at 4:30 PM Michael McCandless <
>> luc...@mikemccandless.com> wrote:
>>
>>> Welcome to another Mike!
>>>
>>> Mike McCandless
>>>
>>> http://blog.mikemccandless.com
>>>
>>>
>>> On Fri, Jul 3, 2020 at 7:57 AM Adrien Grand  wrote:
>>>
 I am pleased to announce that Michael Sokolov has accepted the PMC's
 invitation to join.

 Welcome Michael!

 --
 Adrien

>>>


Re: 8.6 release

2020-06-26 Thread Tomás Fernández Löbbe
I tagged SOLR-14590 for 8.6. The PR is ready for review and I plan to merge
it soon

On Fri, Jun 26, 2020 at 12:54 PM Andrzej Białecki  wrote:

> Jan,
>
> I just removed SOLR-14182 from 8.6, this needs proper back-compat shims
> and testing, and I don’t have enough time to get it done properly for 8.6.
>
> On 26 Jun 2020, at 13:37, Jan Høydahl  wrote:
>
> Unresolved Solr issues tagged with 8.6:
>
> https://issues.apache.org/jira/issues/?jql=project%20%3D%20SOLR%20AND%20resolution%20%3D%20Unresolved%20AND%20fixVersion%20%3D%208.6
> <https://issues.apache.org/jira/issues/?jql=project%20=%20SOLR%20AND%20resolution%20=%20Unresolved%20AND%20fixVersion%20=%208.6>
>
>
> SOLR-14593   Package store API to disable file upload over HTTP
>Blocker
> SOLR-14580   CloudSolrClient cannot be initialized using 'zkHosts' builder
>   Blocker
> SOLR-14516   NPE during Realtime GET
>   Major
> SOLR-14502   increase bin/solr's post kill sleep
>   Minor
> SOLR-14398   package store PUT should be idempotent
>Trivial
> SOLR-14311   Shared schema should not have access to core level classes
>Major
> SOLR-14182   Move metric reporters config from solr.xml to ZK cluster
> properties Major
> SOLR-14066   Deprecate DIH
>   Blocker
> SOLR-14022   Deprecate CDCR from Solr in 8.x
>   Blocker
>
> Plus two private JIRA issues.
>
> Jan
>
> 26. jun. 2020 kl. 12:06 skrev Bruno Roustant :
>
> So the plan is to cut the release branch on next Tuesday June 30th. If you
> anticipate a problem with the date, please reply.
>
> Is there any JIRA issue that must be committed before the release is made
> and that has not already the appropriate "Fix Version"?
>
> Currently there are 3 unresolved issues flagged as Fix Version = 8.6:
> Add tests for corruptions caused by byte flips LUCENE-9356
> <https://issues.apache.org/jira/browse/LUCENE-9356>
> Fix linefiledocs compression or replace in tests LUCENE-9191
> <https://issues.apache.org/jira/browse/LUCENE-9191>
> Can we merge small segments during refresh, for faster searching?
> LUCENE-8962 <https://issues.apache.org/jira/browse/LUCENE-8962>
>
>
> Le mer. 24 juin 2020 à 21:05, David Smiley  a
> écrit :
>
>> Thanks starting this discussion, Cassandra.
>>
>> I reviewed the issues I was involved with and I don't quite see something
>> worth noting.
>>
>> I plan to add a note about a change in defaults within UnifiedHighlighter
>> that could be a significant perf regression.  This wasn't introduced in 8.6
>> but introduced in 8.5 and it's significant enough to bring attention to.  I
>> could add it in 8.5's section but then add a short pointer to it in 8.6.
>>
>> ~ David
>>
>>
>> On Wed, Jun 24, 2020 at 2:52 PM Cassandra Targett 
>> wrote:
>>
>>> I started looking at the Ref Guide for 8.6 to get it ready, and notice
>>> there are no Upgrade Notes in `solr-upgrade-notes.adoc` for 8.6. Is it
>>> really true that none are needed at all?
>>>
>>> I’ll add what I usually do about new features/changes that maybe
>>> wouldn’t normally make the old Upgrade Notes section, I just find it
>>> surprising that there weren’t any devs who thought any of the 100 or so
>>> Solr changes warrant any user caveats.
>>> On Jun 17, 2020, 12:27 PM -0500, Tomás Fernández Löbbe <
>>> tomasflo...@gmail.com>, wrote:
>>>
>>> +1. Thanks Bruno
>>>
>>> On Wed, Jun 17, 2020 at 6:22 AM Mike Drob  wrote:
>>>
>>>> +1
>>>>
>>>> The release wizard python script should be sufficient for everything.
>>>> If you run into any issues with it, let me know, I used it for 8.5.2 and
>>>> think I understand it pretty well.
>>>>
>>>> On Tue, Jun 16, 2020 at 8:31 AM Bruno Roustant <
>>>> bruno.roust...@gmail.com> wrote:
>>>>
>>>>> Hi all,
>>>>>
>>>>> It’s been a while since we released Lucene/Solr 8.5.
>>>>> I’d like to volunteer to be a release manager for an 8.6 release. If
>>>>> there's agreement, then I plan to cut the release branch two weeks today,
>>>>> on June 30th, and then to build the first RC two days later.
>>>>>
>>>>> This will be my first time as release manager so I'll probably need
>>>>> some guidance. Currently I have two resource links on this subject:
>>>>> https://cwiki.apache.org/confluence/display/LUCENE/ReleaseTodo
>>>>>
>>>>> https://github.com/apache/lucene-solr/tree/master/dev-tools/scripts#releasewizardpy
>>>>> If you have more, please share with me.
>>>>>
>>>>> Bruno
>>>>>
>>>>
>
>


Re: Welcome Ilan Ginzburg as Lucene/Solr committer

2020-06-22 Thread Tomás Fernández Löbbe
Welcome Ilan!

On Mon, Jun 22, 2020 at 12:26 PM Anshum Gupta 
wrote:

> Congratulations and welcome, Ilan!
>
> On Sun, Jun 21, 2020 at 2:44 AM Noble Paul  wrote:
>
>> Hi all,
>>
>> Please join me in welcoming Ilan Ginzburg as the latest Lucene/Solr
>> committer.
>> Ilan, it's tradition for you to introduce yourself with a brief bio.
>>
>> Congratulations and Welcome!
>> Noble
>>
>
>
> --
> Anshum Gupta
>


Re: 8.6 release

2020-06-17 Thread Tomás Fernández Löbbe
+1. Thanks Bruno

On Wed, Jun 17, 2020 at 6:22 AM Mike Drob  wrote:

> +1
>
> The release wizard python script should be sufficient for everything. If
> you run into any issues with it, let me know, I used it for 8.5.2 and think
> I understand it pretty well.
>
> On Tue, Jun 16, 2020 at 8:31 AM Bruno Roustant 
> wrote:
>
>> Hi all,
>>
>> It’s been a while since we released Lucene/Solr 8.5.
>> I’d like to volunteer to be a release manager for an 8.6 release. If
>> there's agreement, then I plan to cut the release branch two weeks today,
>> on June 30th, and then to build the first RC two days later.
>>
>> This will be my first time as release manager so I'll probably need some
>> guidance. Currently I have two resource links on this subject:
>> https://cwiki.apache.org/confluence/display/LUCENE/ReleaseTodo
>>
>> https://github.com/apache/lucene-solr/tree/master/dev-tools/scripts#releasewizardpy
>> If you have more, please share with me.
>>
>> Bruno
>>
>


Re: Welcome Mayya Sharipova as Lucene/Solr committer

2020-06-08 Thread Tomás Fernández Löbbe
Welcome Mayya!

On Mon, Jun 8, 2020 at 10:53 AM Paul Sanwald
 wrote:

> Congratulations Mayya
>
> --paul
>
> On Mon, Jun 8, 2020 at 1:52 PM Ishan Chattopadhyaya <
> ichattopadhy...@gmail.com> wrote:
>
>> Congrats Mayya!
>>
>> On Mon, 8 Jun, 2020, 11:20 pm Alan Woodward, 
>> wrote:
>>
>>> Congratulations and welcome Mayya!
>>>
>>> On 8 Jun 2020, at 17:58, jim ferenczi  wrote:
>>>
>>> Hi all,
>>>
>>> Please join me in welcoming Mayya Sharipova as the latest Lucene/Solr
>>> committer.
>>> Mayya, it's tradition for you to introduce yourself with a brief bio.
>>>
>>> Congratulations and Welcome!
>>>
>>> Jim
>>>
>>>
>>>
>
> --
> --paul
>


Re: BadApple report

2020-06-08 Thread Tomás Fernández Löbbe
Thanks for keeping an eye on this, Erick. I took a quick look at the
"TestIndexSearcher" failures and I think they're related to SOLR-14525. They
should be fixed after this commit[1] by Noble.

[1] https://gitbox.apache.org/repos/asf?p=lucene-solr.git;h=5827ddf

On Mon, Jun 8, 2020 at 7:52 AM Erick Erickson 
wrote:

> If people don’t know about:
> http://fucit.org/solr-jenkins-reports/suspicious-failure-report.html, I
> strongly recommend you periodically check it. It reports tests that have
> changed their failure rates lately. There are three currently:
>
> "org.apache.solr.search.TestIndexSearcher","testSearcherListeners"
>
> "org.apache.solr.update.processor.DocExpirationUpdateProcessorFactoryTest","testAutomaticDeletes"
> "org.apache.solr.cloud.PackageManagerCLITest","testPackageManager
>
> Short form:
>
> Raw fail count by week totals, most recent week first (corresponds to
> bits):
> Week: 0  had  128 failures
> Week: 1  had  68 failures
> Week: 2  had  113 failures
> Week: 3  had  103 failures
>
>
> Failures in Hoss' reports for the last 4 rollups.
>
> There were 298 unannotated tests that failed in Hoss' rollups. Ordered by
> the date I downloaded the rollup file, newest->oldest. See above for the
> dates the files were collected
> These tests were NOT BadApple'd or AwaitsFix'd
>
> Failures in the last 4 reports..
>Report   Pct runsfails   test
>  0123   0.4 1461  5
> DeleteReplicaTest.deleteReplicaAndVerifyDirectoryCleanup
>  0123   0.7 1464  9
> MetricTriggerIntegrationTest.testMetricTrigger
>  0123   1.6 1377 29  MultiThreadedOCPTest.test
>  0123   0.7 1455  5
> NodeMarkersRegistrationTest.testNodeMarkersRegistration
>  0123   2.1 1481 17  RollingRestartTest.test
>  0123   0.4 1537 55
> ScheduledTriggerIntegrationTest.testScheduledTrigger
>  0123   7.7   98  6
> ShardSplitTest.testSplitWithChaosMonkey
>  0123   0.4 1455  9
> SystemCollectionCompatTest.testBackCompat
>  0123   0.7 1456 14  TestPackages.testPluginLoading
>  0123   1.1 1460  9
> TestQueryingOnDownCollection.testQueryToDownCollectionShouldFailFast
>  0123   0.7 1498 13  TestSimScenario.testSuggestions
> 
> I took the SuppressWarnings count section out, it’s ridiculously big.
>
>


Re: [VOTE] Release Lucene/Solr 8.5.2 RC1

2020-05-26 Thread Tomás Fernández Löbbe
+1

SUCCESS! [1:05:23.582468]

On Mon, May 25, 2020 at 5:51 AM Andrzej Białecki  wrote:

> +1
>
> SUCCESS! [1:59:25.055394]
>
> (this slow smoke test time comes from a combination of running this on a
> relatively wimpy replacement MacBook and a slow network).
>
> On 23 May 2020, at 09:39, Shalin Shekhar Mangar 
> wrote:
>
> +1
>
> SUCCESS! [0:47:23.934909]
>
> On Wed, May 20, 2020 at 11:28 PM Mike Drob  wrote:
>
>> Devs,
>>
>> Please vote for release candidate 1 for Lucene/Solr 8.5.2
>>
>> The artifacts can be downloaded from:
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.5.2-RC1-rev384dadd9141cec3f848d8c416315dc2384749814
>>
>> You can run the smoke tester directly with this command:
>>
>> python3 -u dev-tools/scripts/smokeTestRelease.py \
>>
>> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.5.2-RC1-rev384dadd9141cec3f848d8c416315dc2384749814
>>
>> The vote will be open until 2020-05-26 18:00 UTC (extended deadline due
>> to multiple holidays in the next 72 hours)
>>
>> [ ] +1  approve
>> [ ] +0  no opinion
>> [ ] -1  disapprove (and reason why)
>>
>> Here is my +1 (non-binding)
>>
>> Mike
>>
>>
>>
>
> --
> Regards,
> Shalin Shekhar Mangar.
>
>
>


Re: [DISCUSS] 8.5.2 Release?

2020-05-16 Thread Tomás Fernández Löbbe
Done.

On Sat, May 16, 2020 at 1:49 PM Tomás Fernández Löbbe 
wrote:

> Mike, I'm about to merge https://issues.apache.org/jira/browse/SOLR-14471
>
> On Fri, May 15, 2020 at 11:51 AM Mike Drob  wrote:
>
>> Thanks for pointing those commits out to me, Noble. I cherry-picked the
>> doap and backcompat index changes to branch_8 and master.
>>
>> As for the releases...
>>
>> mdrob-imp:/tmp/lucene $ svn log | head
>> ------------------------------------------------------------------------
>> r39590 | noble | 2020-05-13 21:02:20 -0500 (Wed, 13 May 2020) | 1 line
>>
>> Stop mirroring 7.7.2 releases
>> ------------------------------------------------------------------------
>> r39589 | noble | 2020-05-13 21:02:02 -0500 (Wed, 13 May 2020) | 1 line
>>
>> Stop mirroring 7.7.3 releases
>>
>>
>> I suspect this shouldn't have happened, and I was going to revert (svn
>> reverse merge...) commit r39589 here to fix that, but there is waaay too
>> much stuff in there that shouldn't be there, like a bunch of maven
>> artifacts. Can you take a look at this and clean it up? As of right now,
>> 7.7.3 is missing completely from
>> https://projects.apache.org/json/foundation/releases.json (which
>> continues to block the releaseWizard script).
>>
>> Thanks,
>> Mike
>>
>> On Fri, May 15, 2020 at 2:56 AM Jan Høydahl 
>> wrote:
>>
>>> ./poll-mirrors.py -version 7.7.3
>>>
>>>    1 ↵  11084  09:51:08
>>>
>>> 2020-05-15 09:52:38
>>> Polling 204 Apache Mirrors and Maven Central...
>>>
>>> .X.XX..XXX.X
>>>
>>> 7.7.3 is downloadable from Maven Central
>>> 7.7.3 is downloadable from 5/204 Apache Mirrors (2.45%)
>>> Sleeping for 262 seconds...
>>>
>>> The RM job is not done until it is done…
>>>
>>> Note that you need to apply
>>> https://issues.apache.org/jira/browse/LUCENE-9288 to the poll-mirrors
>>> script for it to work. Do you have any idea why only 5 mirrors are updated
>>> Noble?
>>>
>>> Jan
>>>
>>> 14. mai 2020 kl. 19:41 skrev Mike Drob :
>>>
>>> Noble,
>>>
>>> We're still missing a few:
>>>
>>>
>>> https://cwiki.apache.org/confluence/display/LUCENE/ReleaseTodo#ReleaseTodo-UpdatetheprojectDOAPfiles
>>>
>>> https://cwiki.apache.org/confluence/display/LUCENE/ReleaseTodo#ReleaseTodo-GenerateBackcompatIndexes
>>>
>>> Also, the release is not on the mirrors -
>>> https://lucene.apache.org/core/downloads.html
>>> The link to
>>> https://www.apache.org/dyn/closer.lua/lucene/java/7.7.3/lucene-7.7.3-src.tgz
>>> doesn't resolve to anything...
>>>
>>> Mike
>>>
>>> On Wed, May 13, 2020 at 9:18 PM Noble Paul  wrote:
>>>
>>>> I just finished all the steps given in
>>>> https://cwiki.apache.org/confluence/display/LUCENE/ReleaseTodo
>>>>
>>>> Please let me know if anything is missing.
>>>> I still don't see the solr release details in
>>>> https://lucene.apache.org/
>>>>
>>>> How do i do it?
>>>>
>>>>
>>>> On Thu, May 14, 2020 at 10:47 AM Noble Paul 
>>>> wrote:
>>>> >
>>>> > I'll fix them today
>>>> >
>>>> > On Thu, May 14, 2020, 9:19 AM Mike Drob  wrote:
>>>> >>
>>>> >> Thanks. I’ve found one small change for the release script so far, I
>>>> plan to batch all my notes together and commit them at the end of the
>>>> process.
>>>> >>
>>>> >> Right now I think I’m waiting on a few final steps from 7.7.3 to
>>>> complete and then we’ll be ready to roll with 8.5.2
>>>> >>
>>>> >> On Wed, May 13, 2020 at 4:59 PM Jan Høydahl 
>>>> wrote:
>>>> >>>
>>>> >>> Mike, I merged latest changes to releaseWizard to branch_8_5 as
>>>> well. So you’ll be first to test the new instructions for updating the web
>>>> site. So please be prepared for discovering quirks in that part of the
>>>> releaseWizard.
>>>> >>>
>>>> >>>

Re: [DISCUSS] 8.5.2 Release?

2020-05-16 Thread Tomás Fernández Löbbe
Mike, I'm about to merge https://issues.apache.org/jira/browse/SOLR-14471

On Fri, May 15, 2020 at 11:51 AM Mike Drob  wrote:

> Thanks for pointing those commits out to me, Noble. I cherry-picked the
> doap and backcompat index changes to branch_8 and master.
>
> As for the releases...
>
> mdrob-imp:/tmp/lucene $ svn log | head
> ------------------------------------------------------------------------
> r39590 | noble | 2020-05-13 21:02:20 -0500 (Wed, 13 May 2020) | 1 line
>
> Stop mirroring 7.7.2 releases
> ------------------------------------------------------------------------
> r39589 | noble | 2020-05-13 21:02:02 -0500 (Wed, 13 May 2020) | 1 line
>
> Stop mirroring 7.7.3 releases
>
>
> I suspect this shouldn't have happened, and I was going to revert (svn
> reverse merge...) commit r39589 here to fix that, but there is waaay too
> much stuff in there that shouldn't be there, like a bunch of maven
> artifacts. Can you take a look at this and clean it up? As of right now,
> 7.7.3 is missing completely from
> https://projects.apache.org/json/foundation/releases.json (which
> continues to block the releaseWizard script).
>
> Thanks,
> Mike
>
> On Fri, May 15, 2020 at 2:56 AM Jan Høydahl  wrote:
>
>> ./poll-mirrors.py -version 7.7.3
>>
>>    1 ↵  11084  09:51:08
>>
>> 2020-05-15 09:52:38
>> Polling 204 Apache Mirrors and Maven Central...
>>
>> .X.XX..XXX.X
>>
>> 7.7.3 is downloadable from Maven Central
>> 7.7.3 is downloadable from 5/204 Apache Mirrors (2.45%)
>> Sleeping for 262 seconds...
>>
>> The RM job is not done until it is done…
>>
>> Note that you need to apply
>> https://issues.apache.org/jira/browse/LUCENE-9288 to the poll-mirrors
>> script for it to work. Do you have any idea why only 5 mirrors are updated
>> Noble?
>>
>> Jan
>>
>> 14. mai 2020 kl. 19:41 skrev Mike Drob :
>>
>> Noble,
>>
>> We're still missing a few:
>>
>>
>> https://cwiki.apache.org/confluence/display/LUCENE/ReleaseTodo#ReleaseTodo-UpdatetheprojectDOAPfiles
>>
>> https://cwiki.apache.org/confluence/display/LUCENE/ReleaseTodo#ReleaseTodo-GenerateBackcompatIndexes
>>
>> Also, the release is not on the mirrors -
>> https://lucene.apache.org/core/downloads.html
>> The link to
>> https://www.apache.org/dyn/closer.lua/lucene/java/7.7.3/lucene-7.7.3-src.tgz
>> doesn't resolve to anything...
>>
>> Mike
>>
>> On Wed, May 13, 2020 at 9:18 PM Noble Paul  wrote:
>>
>>> I just finished all the steps given in
>>> https://cwiki.apache.org/confluence/display/LUCENE/ReleaseTodo
>>>
>>> Please let me know if anything is missing.
>>> I still don't see the solr release details in https://lucene.apache.org/
>>>
>>> How do i do it?
>>>
>>>
>>> On Thu, May 14, 2020 at 10:47 AM Noble Paul 
>>> wrote:
>>> >
>>> > I'll fix them today
>>> >
>>> > On Thu, May 14, 2020, 9:19 AM Mike Drob  wrote:
>>> >>
>>> >> Thanks. I’ve found one small change for the release script so far, I
>>> plan to batch all my notes together and commit them at the end of the
>>> process.
>>> >>
>>> >> Right now I think I’m waiting on a few final steps from 7.7.3 to
>>> complete and then we’ll be ready to roll with 8.5.2
>>> >>
>>> >> On Wed, May 13, 2020 at 4:59 PM Jan Høydahl 
>>> wrote:
>>> >>>
>>> >>> Mike, I merged latest changes to releaseWizard to branch_8_5 as
>>> well. So you’ll be first to test the new instructions for updating the web
>>> site. So please be prepared for discovering quirks in that part of the
>>> releaseWizard.
>>> >>>
>>> >>> Jan
>>> >>>
>>> >>> > 7. mai 2020 kl. 19:13 skrev Mike Drob :
>>> >>> >
>>> >>> > Devs,
>>> >>> >
>>> >>> > I know that we had 8.5.1 only a few weeks ago, but with the fix
>>> for LUCENE-9350 I think we should consider another bug-fix. I know that
>>> without it I will be explicitly recommending several users to stay off of
>>> 8.5.x on their upgrade plans. There are some pretty scary looking charts on
>>> SOLR-14428 that describe the impact of the bug in more detail.
>>> >>> >
>>> >>> > I'd be happy to volunteer as RM for this, would probably be
>>> looking at trying to get it a vote started sometime next week.
>>> >>> >
>>> >>> > Thanks,
>>> >>> > Mike
>>> >>> >
>>> >>> >
>>> >>> > https://issues.apache.org/jira/browse/SOLR-14428
>>> >>> > https://issues.apache.org/jira/browse/LUCENE-9350
>>> >>> >
>>> >>> >
>>> >>>
>>> >>>
>>> >>>
>>>
>>>
>>> --
>>> 

Re: [VOTE] Solr to become a top-level Apache project (TLP)

2020-05-12 Thread Tomás Fernández Löbbe
-1 (binding)


Re: [DISCUSS] 8.5.2 Release?

2020-05-07 Thread Tomás Fernández Löbbe
+1. Thanks Mike

On Thu, May 7, 2020 at 2:55 PM Adrien Grand  wrote:

> +1
>
> Le jeu. 7 mai 2020 à 19:13, Mike Drob  a écrit :
>
>> Devs,
>>
>> I know that we had 8.5.1 only a few weeks ago, but with the fix for
>> LUCENE-9350 I think we should consider another bug-fix. I know that without
>> it I will be explicitly recommending several users to stay off of 8.5.x on
>> their upgrade plans. There are some pretty scary looking charts on
>> SOLR-14428 that describe the impact of the bug in more detail.
>>
>> I'd be happy to volunteer as RM for this, would probably be looking at
>> trying to get it a vote started sometime next week.
>>
>> Thanks,
>> Mike
>>
>>
>> https://issues.apache.org/jira/browse/SOLR-14428
>> https://issues.apache.org/jira/browse/LUCENE-9350
>>
>>
>>


Re: [DISCUSS] Lucene-Solr split (Solr promoted to TLP)

2020-05-05 Thread Tomás Fernández Löbbe
On Tue, May 5, 2020 at 12:37 PM Dawid Weiss  wrote:

> > I read “promotion to TLP” as if this was some achievement that needs to
> be celebrated now.
>
> I honestly believe it is an achievement for a project to receive
> top-level status. It's a sign of having a community of users,
> committers and processes mature enough to empower its further
> development.
>

My point is that this is not something new. Solr is a mature product and
has had the community and process in place for a long time.


>
> > It’s technically true that Solr is a subproject of Lucene, but so is
> Lucene Core, and I don’t see Lucene Core being promoted to TLP
>
> I don't think these are same magnitude components, sorry. I can name
> at least a few projects that depend on Lucene alone (core + extras)
> and I can name companies using Solr as a product but I can't name a
> single project that would depend on lucene-core alone (without any
> other lucene-* dependency). Maybe there is something like this but
> it's definitely an outlier example of a typical use case.
>

If you go to lucene.apache.org, you'll see three things: Lucene Core
(Lucene with all its modules), Solr and PyLucene. That's what I mean.


>
> Dawid
>
>
>


Re: [DISCUSS] Lucene-Solr split (Solr promoted to TLP)

2020-05-05 Thread Tomás Fernández Löbbe
I don’t agree with the argument “Solr outgrew being a subproject of
Lucene”. I read “promotion to TLP” as if this was some achievement that
needs to be celebrated now. Solr didn’t become a TLP years ago because the
decision then was to merge with Lucene development, thinking they would
progress better together than separated. It’s technically true that Solr is
a subproject of Lucene, but so is Lucene Core, and I don’t see Lucene Core
being promoted to TLP. They are both part of the same Apache project, which
for historical reasons is called Lucene.

> I would be curious if people can make the argument for keeping them
together...

I think the same arguments that were used 10 years ago to merge the
projects are as valid now, some of them presented in Dawid’s email. Faster
development, better coverage, code in the right places.[1]

IMO, if we need to say “we can’t release X because it breaks Y”, or “we
need to release X to be able to release Y”, the projects are not really
independent, and “the PMCs will overlap” won’t take us very far.

> The big question is this: “Is this the right time to split Solr and
Lucene into two independent projects?”.
This is not the question we should be asking ourselves right now. It
assumes the split is happening, and that’s what we are trying to discuss
here. The question in my mind is “Is splitting Lucene and Solr into
different projects beneficial for them? Is this going to make them both
better?"

> As it is today, deveopers have had to do necessary Solr changes at the
same time when doing changes in Lucene. This is not really fair to the
(mainly) Lucene developers. It is not fair to Solr either, as such work
might be done in a hasty fashion and/or in a sub optimal way due to lack of
familiarity with Solr code base; like we unfortunately have seen a couple
of times in the past (not trying to blame anyone).

This, I agree, is a pain point for keeping them together. That said, most
(though not all) currently active committers joined the project while this
was already the arrangement; it’s not something that was imposed later on
the majority of us.

> With Lucene as a dependency, Solr can choose to stay on same Lucene
version for a couple of releases while taking the time to work out the
proper way to adapt to changed Lucene APIs or to sort out performance
issues.

I agree with this and I believe it’s a point in favor of keeping them
together (and was in part discussed 10 years ago when the projects merged).
Keeping them on the same repo forces Solr to use the latest Lucene, helping
find issues/bugs early, hopefully before they are released.


[1]
https://mail-archives.apache.org/mod_mbox/lucene-general/201002.mbox/%3c9ac0c6aa1002240832x1a8e3309k6799d75b8d19d...@mail.gmail.com%3e

On Tue, May 5, 2020 at 8:56 AM Michael McCandless 
wrote:

> On Tue, May 5, 2020 at 11:41 AM Jan Høydahl  wrote:
>
> As it is today, deveopers have had to do necessary Solr changes at the
>> same time when doing changes in Lucene. This is not really fair to the
>> (mainly) Lucene developers. It is not fair to Solr either, as such work
>> might be done in a hasty fashion and/or in a sub optimal way due to lack of
>> familiarity with Solr code base; like we unfortunately have seen a couple
>> of times in the past (not trying to blame anyone). With Lucene as a
>> dependency, Solr can choose to stay on same Lucene version for a couple of
>> releases while taking the time to work out the proper way to adapt to
>> changed Lucene APIs or to sort out performance issues.
>>
>
> +1, that is a great point, Jan.
>
> This will mean that the (any) necessary Solr source code changes that go
> along with a Lucene change will (sometimes) be done with higher quality,
> more thought, better expertise, etc., which I agree will be good for
> ongoing Solr development, help prevent accidental performance regressions,
> etc.  Net/net that's a big positive for Solr, in addition to having a
> stronger independent identity (https://solr.apache.org).
>
>
>> Question: When Lucene no longer has the Solr test suite to help catch
>> bugs, how long time would it take from a Lucene commit, before Solr/ES
>> Jenkins instances would have had time to produce a build and run tests?
>> Would it be possible to setup a trigger in Solr Jenkins?
>>
>
> That's a great question!
>
> Maybe Elasticsearch developers could chime in, since this already happened
> for them many times by now :)  I would think there are technical solutions
> to let the Solr CI build pull the latest Lucene snapshot build, to keep the
> latency lowish, but I do not know the details.
>
> Mike
>


Re: Welcome Eric Pugh as a Lucene/Solr committer

2020-04-07 Thread Tomás Fernández Löbbe
Welcome Eric!

On Tue, Apr 7, 2020 at 8:12 AM Houston Putman 
wrote:

> Congrats Eric!
>
> On Tue, Apr 7, 2020 at 9:57 AM Eric Pugh 
> wrote:
>
>> Thank you everyone!  I’ll keep it short, otherwise this will be a very
>> long email… ;-).
>>
>> I was first introduced to Solr and Lucene by Erik Hatcher, and today I
>> wonder what my life would be like if he hadn’t taken the time to show me
>> some cool code he was working on and explained to me the way to change the
>> world was through open source contributions!
>>
>> I co-founded OpenSource Connections (http://o19s.com) along with Scott
>> Stults and Jason Hull in 2005.  We found our niche in Solr consulting after
>> I went to the first LuceneRevolution and got inspired (complete with Jerry
>> Maguire style manifesto shared with the company). Through consulting, I get
>> to help onboard organizations into the Solr community - a thriving, healthy
>> ASF is very near & dear to my heart.
>>
>> I’ve been around this community for a long time, with my first JIRA being
>> three digits: SOLR-284.  Today, I’m still contributing to Apache Tika. I’ve
>> gotten to meet and spend some significant time with Tim Allison from that
>> project and learned a LOT about text!
>>
>> I was in the right place at the right time and was able to join David
>> Smiley as co-author on the first Solr book, we went on and did a total of
>> three editions of that book.  Phew!
>>
>> Once I got to sit on stage as a judge for Stump the Chump, it was Erick,
>> Erik, and Eric ;-)
>>
>> After doing Solr for a good while, I got lucky and met Doug Turnbull on
>> the sidewalk one day because he had on a t-shirt that said “My code doesn’t
>> have bugs, it has unexpected features”.   Couple of years later he and
>> fellow colleague John Berryman published Relevant Search and today I’m
>> working in the fascinating intersection of people, Search, and Data Science
>> helping build smarter search experiences as a Relevance Strategist. I'm
>> excited about bringing relevance use cases 'down to earth'. I also steward
>> OSC's contributions to the open source tool Quepid to help fulfill that
>> goal.
>>
>> Oh, and I’ve got a stack of LuceneRevolution and related conference
>> t-shirts that my mother turned into a fantastic quilt ;-).
>>
>> Eric
>>
>>
>>
>> On Apr 6, 2020, at 9:39 PM, Shalin Shekhar Mangar 
>> wrote:
>>
>> Congratulations and welcome Eric!
>>
>> On Mon, Apr 6, 2020 at 5:51 PM Jan Høydahl  wrote:
>>
>>> Hi all,
>>>
>>> Please join me in welcoming Eric Pugh as the latest Lucene/Solr
>>> committer!
>>>
>>> Eric has been part of the Solr community for over a decade, as a code
>>> contributor, book author, company founder, blogger and mailing list
>>> contributor! We look forward to his future contributions!
>>>
>>> Congratulations and welcome! It is a tradition to introduce yourself
>>> with a brief bio, Eric.
>>>
>>> Jan Høydahl
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>>>
>>
>> --
>> Regards,
>> Shalin Shekhar Mangar.
>>
>>
>> ___
>> *Eric Pugh **| *Founder & CEO | OpenSource Connections, LLC | 434.466.1467
>> | http://www.opensourceconnections.com | My Free/Busy
>> 
>> Co-Author: Apache Solr Enterprise Search Server, 3rd Ed
>> 
>> This e-mail and all contents, including attachments, is considered to be
>> Company Confidential unless explicitly stated otherwise, regardless
>> of whether attachments are marked as such.
>>
>>


Re: Welcome Alessandro Benedetti as a Lucene/Solr committer

2020-03-18 Thread Tomás Fernández Löbbe
Congrats and welcome Alessandro!

On Wed, Mar 18, 2020 at 9:34 AM Erick Erickson 
wrote:

> Welcome Alessandro!
>
> On Wed, Mar 18, 2020, 10:32 Houston Putman 
> wrote:
>
>> Congrats Alessandro!
>>
>> On Wed, Mar 18, 2020 at 10:07 AM Tommaso Teofili <
>> tommaso.teof...@gmail.com> wrote:
>>
>>> welcome on board Alessandro, well deserved!
>>> I still remember when we were sitting together in the same room having
>>> fun with Lucene/Solr a few years ago, keep up the good job !
>>>
>>> Regards,
>>> Tommaso
>>>
>>> On Wed, 18 Mar 2020 at 14:01, David Smiley 
>>> wrote:
>>>
 Hi all,

 Please join me in welcoming Alessandro Benedetti as the latest
 Lucene/Solr committer!

 Alessandro has been contributing to Lucene and Solr in areas such as
 More Like This, Synonym boosting, and Suggesters, and other areas for
 years.  Furthermore he's been a help to many users on the solr-user mailing
 list and has helped others through his blog posts and presentations about
 search.  We look forward to his future contributions.

 Congratulations and welcome!  It is a tradition to introduce yourself
 with a brief bio, Alessandro.

 ~ David Smiley
 Apache Lucene/Solr Search Developer
 http://www.linkedin.com/in/davidwsmiley

>>>


Re: [VOTE] Release Lucene/Solr 8.5.0 RC1

2020-03-16 Thread Tomás Fernández Löbbe
+1

SUCCESS! [1:20:34.327672]

On Mon, Mar 16, 2020 at 12:59 PM Kevin Risden  wrote:

> +1
>
> SUCCESS! [1:24:43.574849]
>
> Kevin Risden
>
> On Mon, Mar 16, 2020 at 3:40 PM Nhat Nguyen
>  wrote:
> >
> > +1
> >
> > SUCCESS! [0:52:39.991003]
> >
> > On Mon, Mar 16, 2020 at 11:14 AM Cassandra Targett <
> casstarg...@gmail.com> wrote:
> >>
> >> I pushed the Solr Ref Guide DRAFT up this morning (thought I did it on
> Friday, sorry): https://lucene.apache.org/solr/guide/8_5/.
> >>
> >> Cassandra
> >> On Mar 15, 2020, 6:06 PM -0500, Uwe Schindler , wrote:
> >>
> >> Hi,
> >>
> >> I instructed Policeman Jenkins to automatically test the release for
> me, the result (Java 8 / Java 9 combined Smoketesting):
> >>
> >> SUCCESS! [1:24:47.422173]
> >> (see
> https://jenkins.thetaphi.de/job/Lucene-Solr-Release-Tester/30/console)
> >>
> >> I also downloaded the artifacts and tested manually:
> >> - Solr starts and stops perfectly on Windows with whitespace in path
> name: Java 8, Java 11 and Java 14 (coming out soon)
> >> - Javadocs of Lucene look fine
> >> - JAR files look good
> >> - All links to repos in pom.xml and ant use HTTPS
> >>
> >> So I am fine with releasing this.
> >> +1 to RELEASE!
> >>
> >> Uwe
> >>
> >> -
> >> Uwe Schindler
> >> Achterdiek 19, D-28357 Bremen
> >> https://www.thetaphi.de
> >> eMail: u...@thetaphi.de
> >>
> >> -Original Message-
> >> From: Alan Woodward 
> >> Sent: Friday, March 13, 2020 3:27 PM
> >> To: dev@lucene.apache.org
> >> Subject: [VOTE] Release Lucene/Solr 8.5.0 RC1
> >>
> >> Please vote for release candidate 1 for Lucene/Solr 8.5.0
> >>
> >> The artifacts can be downloaded from:
> >> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.5.0-RC1-
> >> rev7ac489bf7b97b61749b19fa2ee0dc46e74b8dc42
> >>
> >> You can run the smoke tester directly with this command:
> >>
> >> python3 -u dev-tools/scripts/smokeTestRelease.py \
> >> https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.5.0-RC1-
> >> rev7ac489bf7b97b61749b19fa2ee0dc46e74b8dc42
> >>
> >> The vote will be open for three working days i.e. until next Tuesday,
> 2020-03-
> >> 18 14:00 UTC.
> >>
> >> [ ] +1 approve
> >> [ ] +0 no opinion
> >> [ ] -1 disapprove (and reason why)
> >>
> >> Here is my +1
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>
> >>
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
> >>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Welcome Nhat Nguyen to the PMC

2020-03-03 Thread Tomás Fernández Löbbe
Welcome Nhat!

On Tue, Mar 3, 2020 at 11:19 AM Anshum Gupta  wrote:

> Congratulations and welcome, Nhat!
>
> -Anshum
>
> On Tue, Mar 3, 2020 at 8:34 AM Adrien Grand  wrote:
>
>> I am pleased to announce that Nhat Nguyen has accepted the PMC's
>> invitation to join.
>>
>> Welcome Nhat!
>>
>> --
>> Adrien
>>
>
>
> --
> Anshum Gupta
>


Re: Custom Solr Collector

2020-02-13 Thread Tomás Fernández Löbbe
Hi Kyle,
For #2, I understand you need this because you want "min-visited-docs",
right? Because, for max you could use EarlyTerminatingSortingCollector? (or
Lucene's "HitsThresholdChecker", but I don't know if Solr has support for
this yet). The "min-visited" would override the "timeAllowed", so even if
the collection should expire based on time, you'd let it continue until
something hits, is that the idea?

On Thu, Feb 13, 2020 at 9:29 AM Kyle Maxwell
 wrote:

> Hi,
> Looking to see if there's any appetite for either:
>
> 1. Allowing custom collectors as Solr Plugins, or
> 2. Taking a patch on TimeLimitedCollector to allow it to be doc-limited as
> well.
>
> Motivation:
> https://medium.com/@kyle.c.maxwell/some-lucene-tuning-t-45d82a9dfd83
>
> TimeLimitedCollector Patch:
> https://github.com/fizx/lucene-solr-1/pull/1/files
>
> Which approach might people prefer?  I'm happy to do the legwork, but
> wanted to check in first.
>
> Thanks,
> Kyle
>
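[Editorial note: the semantics Tomás describes above — a time limit that is only enforced once a minimum number of hits has been collected — can be sketched as a small helper. This is a minimal, self-contained illustration of that logic only; the class and method names are hypothetical and deliberately avoid the real Lucene/Solr collector APIs (`TimeLimitingCollector`, `EarlyTerminatingSortingCollector`), which have different signatures.]

```java
/**
 * Hypothetical helper illustrating "min-visited overrides timeAllowed":
 * collection may terminate early only when the time budget has expired
 * AND at least minDocs hits have been collected. Not a Solr/Lucene API.
 */
public class MinDocsTimeLimit {
    private final long minDocs;
    private final long timeAllowedNanos;
    private final long startNanos;
    private long collected;

    public MinDocsTimeLimit(long minDocs, long timeAllowedMillis, long startNanos) {
        this.minDocs = minDocs;
        this.timeAllowedNanos = timeAllowedMillis * 1_000_000L;
        this.startNanos = startNanos;
    }

    /** Record one collected hit. */
    public void collect() {
        collected++;
    }

    /** True when collection may terminate early at the given clock reading. */
    public boolean shouldTerminate(long nowNanos) {
        boolean timeExpired = nowNanos - startNanos >= timeAllowedNanos;
        // the min-docs floor overrides the time limit: even after the
        // budget expires, keep collecting until something has hit
        return timeExpired && collected >= minDocs;
    }
}
```

In a real collector this check would run inside `collect(int doc)`, throwing the early-termination exception when it returns true; a hard max-docs cap could be added the same way as an OR term.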


Re: Congratulations to the new Lucene/Solr PMC Chair, Anshum Gupta!

2020-01-15 Thread Tomás Fernández Löbbe
Congrats Anshum!!

On Wed, Jan 15, 2020 at 2:25 PM David Smiley 
wrote:

> Congrats Anshum!
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Wed, Jan 15, 2020 at 4:15 PM Cassandra Targett 
> wrote:
>
>> Every year, the Lucene PMC rotates the Lucene PMC chair and Apache Vice
>> President position.
>>
>> This year we have nominated and elected Anshum Gupta as the Chair, a
>> decision that the board approved in its January 2020 meeting.
>>
>> Congratulations, Anshum!
>>
>> Cassandra
>>
>>


Re: [VOTE] Release Lucene/Solr 8.4.0 RC2

2019-12-23 Thread Tomás Fernández Löbbe
+1
SUCCESS! [1:23:00.750081]

On Mon, Dec 23, 2019 at 9:17 AM David Smiley 
wrote:

> +1
>
> SUCCESS! [1:26:31.334270]
>
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Mon, Dec 23, 2019 at 11:00 AM Tommaso Teofili <
> tommaso.teof...@gmail.com> wrote:
>
>> +1
>>
>> SUCCESS! [1:28:25.980994]
>>
>>
>> Regards,
>>
>> Tommaso
>>
>> On Mon, 23 Dec 2019 at 04:34, Nhat Nguyen 
>> wrote:
>>
>>> +1
>>> SUCCESS! [0:57:57.529512]
>>>
>>> On Sat, Dec 21, 2019 at 6:04 PM Michael Sokolov 
>>> wrote:
>>>
 SUCCESS! [1:21:34.167550]

 +1

 On Fri, Dec 20, 2019 at 2:56 PM Anshum Gupta 
 wrote:
 >
 > +1
 >
 > SUCCESS! [1:11:53.626393]
 >
 > The javadocs look good too.
 >
 >
 > On Fri, Dec 20, 2019 at 9:13 AM Tomoko Uchida <
 tomoko.uchida.1...@gmail.com> wrote:
 >>
 >> +1 SUCCESS! [0:57:55.896528]. Luke also works fine.
 >>
 >> 2019年12月20日(金) 23:24 Kevin Risden :
 >> >
 >> > +1 SUCCESS! [1:09:53.172567]
 >> >
 >> >
 >> > Kevin Risden
 >> >
 >> >
 >> > On Fri, Dec 20, 2019 at 9:15 AM Uwe Schindler 
 wrote:
 >> >>
 >> >> Hi,
 >> >>
 >> >>
 >> >>
 >> >> I instructed Policeman Jenkins to do the mechanic tests for me:
 >> >>
 >> >>
 https://jenkins.thetaphi.de/job/Lucene-Solr-Release-Tester/28/console
 >> >>
 >> >>
 >> >>
 >> >> Result:
 >> >>
 >> >> SUCCESS! [2:33:17.003383]
 >> >>
 >> >> Finished: SUCCESS
 >> >>
 >> >>
 >> >>
 >> >> He tested 2 Java versions and it looks like he was happy.
 >> >>
 >> >>
 >> >>
 >> >> I then reviewed Changes and Javadocs: Looks fine. I also started
 Solr with whitespace in my username on Windows and it booted up
 successfully in Java 8, 11 and 14 EA.
 >> >>
 >> >>
 >> >>
 >> >> +1 to release as Lucene/Solr 8.4.0
 >> >>
 >> >>
 >> >>
 >> >> Uwe
 >> >>
 >> >>
 >> >>
 >> >> -
 >> >>
 >> >> Uwe Schindler
 >> >>
 >> >> Achterdiek 19, D-28357 Bremen
 >> >>
 >> >> https://www.thetaphi.de
 >> >>
 >> >> eMail: u...@thetaphi.de
 >> >>
 >> >>
 >> >>
 >> >> From: Adrien Grand 
 >> >> Sent: Friday, December 20, 2019 12:23 AM
 >> >> To: Lucene Dev 
 >> >> Subject: [VOTE] Release Lucene/Solr 8.4.0 RC2
 >> >>
 >> >>
 >> >>
 >> >> Please vote for release candidate 2 for Lucene/Solr 8.4.0
 >> >>
 >> >> The artifacts can be downloaded from:
 >> >>
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.4.0-RC2-revbc02ab906445fcf4e297f4ef00ab4a54fdd72ca2
 >> >>
 >> >> You can run the smoke tester directly with this command:
 >> >>
 >> >> python3 -u dev-tools/scripts/smokeTestRelease.py \
 >> >>
 https://dist.apache.org/repos/dist/dev/lucene/lucene-solr-8.4.0-RC2-revbc02ab906445fcf4e297f4ef00ab4a54fdd72ca2
 >> >>
 >> >> The vote will be open for at least 3 working days, i.e. until
 2019-12-28 00:00 UTC.
 >> >>
 >> >> [ ] +1  approve
 >> >> [ ] +0  no opinion
 >> >> [ ] -1  disapprove (and reason why)
 >> >>
 >> >> Here is my +1
 >> >>
 >> >>
 >> >>
 >> >> --
 >> >>
 >> >> Adrien
 >>
 >> -
 >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 >> For additional commands, e-mail: dev-h...@lucene.apache.org
 >>
 >
 >
 > --
 > Anshum Gupta

 -
 To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
 For additional commands, e-mail: dev-h...@lucene.apache.org




Re: Commit / Code Review Policy

2019-12-02 Thread Tomás Fernández Löbbe
Thank you for putting this up, David. While I see Rob’s points, I agree
with the proposal in general, and as you and Jan said, this is not too far
from what happens today, at least in Lucene-world.

> Then change the document's name to be Recommendation instead of Policy!

Maybe guidelines? The doc itself says so, so maybe the name should reflect
the intention.

> there is no "silence is consensus"

Good point, maybe we should include something about this too. While
hopefully this doesn’t happen much, it doesn’t make sense to stall work for
weeks after putting up a final patch. Again, I think the doc’s intention is
to be a guideline, but if there are exceptions, so be it.

> People are always going to make mistakes. These mistakes will sometimes
slip past reviewers, too. No matter how much process and time you throw at
it.

Not perfect, true, but in my experience a lot is caught in reviews.

> Please don't add unnecessary things to this document like "Linear git
history" which have nothing to do with code reviews. Its already
controversial as its trying to invent its own form of "RTC", you aren't
helping your cause.

Makes sense, maybe we can address that later.

On Mon, Dec 2, 2019 at 8:00 AM Ishan Chattopadhyaya <
ichattopadhy...@gmail.com> wrote:

> > For example, I opened some patches to improve solr's security because
> its currently an RCE-fest. I'm gonna wait a couple days, if nobody says
> anything about these patches I opened for Solr, I'm gonna fucking commit
> them: someone needs to address this stuff. Why should I wait weeks/months
> for some explicit review? There is repeated RCE happening, how the hell
> could I make anything worse?
>
> +1 Robert, totally agree. RCE etc should be absolutely top priority.
> Thanks a ton for tackling this. Breaking functionality (not deliberately of
> course) is better than having RCEs in a release. IOW, it can't get worse.
>
> On Mon, 2 Dec, 2019, 3:03 PM Robert Muir,  wrote:
>
>>
>>
>> On Mon, Dec 2, 2019 at 3:49 AM Jan Høydahl  wrote:
>>
>>> I think the distanse is not necessarily as large as it seems. Nobody
>>> wants to get rid of lazy consensus, but rather put down in writing when we
>>> recommend to wait for explicit review.
>>>
>>
>> Then change the document's name to be Recommendation instead of Policy!
>>
>


Re: BadApple

2019-11-18 Thread Tomás Fernández Löbbe
I’ve been working with TestTlogReplica, please don’t change that one

On Mon, Nov 18, 2019 at 8:28 AM Erick Erickson 
wrote:

> Short form:
>
> Holding reasonably steady, except:
>
> PackageManagerCLITest.testPackageManager is failing over 50% of the time.
>
> MoveReplicaHDFSTest.test failing over 40% of the time.
>
> TestFstDirectAddressing.testDeDupeTails is failing 18% of the time.
>
> Full output attached:
>
>
> There were 154 unannotated tests that failed in Hoss' rollups. Ordered by
> the date I downloaded the rollup file, newest->oldest. See above for the
> dates the files were collected
> These tests were NOT BadApple'd or AwaitsFix'd
> All tests that failed 4 weeks running will be BadApple'd unless there are
> objections
>
> Failures in the last 4 reports..
>Report   Pct runsfails   test
>  0123   3.3  917 45  BasicAuthIntegrationTest.testBasicAuth
>  0123   0.5  943  7
> DimensionalRoutedAliasUpdateProcessorTest.testCatTime
>  0123   2.7  943 18
> DimensionalRoutedAliasUpdateProcessorTest.testTimeCat
>  0123   8.3  961 91
> LegacyCloudClusterPropTest.testCreateCollectionSwitchLegacyCloud
>  0123  43.8  202 85  MoveReplicaHDFSTest.test
>  0123   0.5  903  5
> ReindexCollectionTest.testSameTargetReindexing
>  0123   4.8  104  5
> ShardSplitTest.testSplitWithChaosMonkey
>  0123   0.5  918  9
> SystemCollectionCompatTest.testBackCompat
>  0123   4.8  941 52
> TestCloudSearcherWarming.testRepFactor1LeaderStartup
>  0123   2.7  946 77
> TestModelManagerPersistence.testFilePersistence
>  0123   2.2  940 70
> TestModelManagerPersistence.testWrapperModelPersistence
>  0123   1.6  921 19
> TestSkipOverseerOperations.testSkipLeaderOperations
>  0123   0.6  905 11  TestStressLiveNodes.testStress
>  0123   1.1  651  9  TestTlogReplica.testKillTlogReplica
>  0123   4.2  112 11
> TestXYMultiPolygonShapeQueries.testRandomBig
>  Will BadApple all tests above this line except ones listed at
> the top**
>
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


Re: Welcome Houston Putman as Lucene/Solr committer

2019-11-14 Thread Tomás Fernández Löbbe
Welcome Houston!

On Thu, Nov 14, 2019 at 9:09 AM Kevin Risden  wrote:

> Congrats and welcome!
>
> Kevin Risden
>
> On Thu, Nov 14, 2019, 12:05 Jason Gerlowski  wrote:
>
>> Congratulations!
>>
>> On Thu, Nov 14, 2019 at 11:58 AM Gus Heck  wrote:
>> >
>> > Congratulations and welcome :)
>> >
>> > On Thu, Nov 14, 2019 at 11:52 AM Namgyu Kim  wrote:
>> >>
>> >> Congratulations and welcome, Houston! :D
>> >>
>> >> On Fri, Nov 15, 2019 at 1:18 AM Ken LaPorte 
>> wrote:
>> >>>
>> >>> Congratulations Houston! Well deserved honor.
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Sent from:
>> https://lucene.472066.n3.nabble.com/Lucene-Java-Developer-f564358.html
>> >>>
>> >>> -
>> >>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> >>> For additional commands, e-mail: dev-h...@lucene.apache.org
>> >>>
>> >
>> >
>> > --
>> > http://www.needhamsoftware.com (work)
>> > http://www.the111shift.com (play)
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>


Re: [JENKINS] Lucene-Solr-BadApples-Tests-8.x - Build # 261 - Still Unstable

2019-11-01 Thread Tomás Fernández Löbbe
I pushed a fix for this

On Fri, Nov 1, 2019 at 5:17 PM Apache Jenkins Server <
jenk...@builds.apache.org> wrote:

> Build: https://builds.apache.org/job/Lucene-Solr-BadApples-Tests-8.x/261/
>
> 1 tests failed.
> FAILED:
> org.apache.solr.util.SolrPluginUtilsTest.testMinShouldMatchBadQueries
>
> Error Message:
> Expected exception SolrException but no exception was thrown
>
> Stack Trace:
> junit.framework.AssertionFailedError: Expected exception SolrException but
> no exception was thrown
> at
> __randomizedtesting.SeedInfo.seed([8204B44D4CD7DDA3:4CD874E732EF52AD]:0)
> at
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2722)
> at
> org.apache.lucene.util.LuceneTestCase.expectThrows(LuceneTestCase.java:2712)
> at
> org.apache.solr.util.SolrPluginUtilsTest.testMinShouldMatchBadQueries(SolrPluginUtilsTest.java:344)
> at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> at
> sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
> at
> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
> at java.lang.reflect.Method.invoke(Method.java:498)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.invoke(RandomizedRunner.java:1750)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$8.evaluate(RandomizedRunner.java:938)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$9.evaluate(RandomizedRunner.java:974)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$10.evaluate(RandomizedRunner.java:988)
> at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at
> org.apache.lucene.util.TestRuleSetupTeardownChained$1.evaluate(TestRuleSetupTeardownChained.java:49)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at
> org.apache.lucene.util.TestRuleThreadAndTestName$1.evaluate(TestRuleThreadAndTestName.java:48)
> at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$StatementRunner.run(ThreadLeakControl.java:368)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl.forkTimeoutingTask(ThreadLeakControl.java:817)
> at
> com.carrotsearch.randomizedtesting.ThreadLeakControl$3.evaluate(ThreadLeakControl.java:468)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner.runSingleTest(RandomizedRunner.java:947)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$5.evaluate(RandomizedRunner.java:832)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$6.evaluate(RandomizedRunner.java:883)
> at
> com.carrotsearch.randomizedtesting.RandomizedRunner$7.evaluate(RandomizedRunner.java:894)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.rules.SystemPropertiesRestoreRule$1.evaluate(SystemPropertiesRestoreRule.java:57)
> at
> org.apache.lucene.util.AbstractBeforeAfterRule$1.evaluate(AbstractBeforeAfterRule.java:45)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> org.apache.lucene.util.TestRuleStoreClassName$1.evaluate(TestRuleStoreClassName.java:41)
> at
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at
> com.carrotsearch.randomizedtesting.rules.NoShadowingOrOverridesOnMethodsRule$1.evaluate(NoShadowingOrOverridesOnMethodsRule.java:40)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> org.apache.lucene.util.TestRuleAssertionsRequired$1.evaluate(TestRuleAssertionsRequired.java:53)
> at
> org.apache.lucene.util.TestRuleMarkFailure$1.evaluate(TestRuleMarkFailure.java:47)
> at
> org.apache.lucene.util.TestRuleIgnoreAfterMaxFailures$1.evaluate(TestRuleIgnoreAfterMaxFailures.java:64)
> at
> org.apache.lucene.util.TestRuleIgnoreTestSuites$1.evaluate(TestRuleIgnoreTestSuites.java:54)
> at
> com.carrotsearch.randomizedtesting.rules.StatementAdapter.evaluate(StatementAdapter.java:36)
> at
> 

Re: [lucene-solr] branch master updated: SOLR-13783: fix failing tests due to NamedList.toString() change

2019-10-29 Thread Tomás Fernández Löbbe
Thanks Munendra,
sorry for the noise.

On Tue, Oct 29, 2019 at 1:06 AM  wrote:

> This is an automated email from the ASF dual-hosted git repository.
>
> munendrasn pushed a commit to branch master
> in repository https://gitbox.apache.org/repos/asf/lucene-solr.git
>
>
> The following commit(s) were added to refs/heads/master by this push:
>  new b82b772  SOLR-13783: fix failing tests due to
> NamedList.toString() change
> b82b772 is described below
>
> commit b82b7725e110cd482c7a4372dc3b1a47eee2023a
> Author: Munendra S N 
> AuthorDate: Tue Oct 29 13:35:31 2019 +0530
>
> SOLR-13783: fix failing tests due to NamedList.toString() change
> ---
>  solr/core/src/test/org/apache/solr/core/TestBadConfig.java |  2 +-
>  .../update/processor/UpdateRequestProcessorFactoryTest.java| 10
> +-
>  2 files changed, 6 insertions(+), 6 deletions(-)
>
> diff --git a/solr/core/src/test/org/apache/solr/core/TestBadConfig.java
> b/solr/core/src/test/org/apache/solr/core/TestBadConfig.java
> index db04152..1dfad85 100644
> --- a/solr/core/src/test/org/apache/solr/core/TestBadConfig.java
> +++ b/solr/core/src/test/org/apache/solr/core/TestBadConfig.java
> @@ -82,7 +82,7 @@ public class TestBadConfig extends
> AbstractBadConfigTestBase {
>
>public void testSchemaMutableButNotManaged() throws Exception {
>  assertConfigs("bad-solrconfig-schema-mutable-but-not-managed.xml",
> -  "schema-minimal.xml", "Unexpected arg(s):
> {mutable=false,managedSchemaResourceName=schema.xml}");
> +  "schema-minimal.xml", "Unexpected arg(s):
> {mutable=false, managedSchemaResourceName=schema.xml}");
>}
>
>public void testManagedSchemaCannotBeNamedSchemaDotXml() throws
> Exception {
> diff --git
> a/solr/core/src/test/org/apache/solr/update/processor/UpdateRequestProcessorFactoryTest.java
> b/solr/core/src/test/org/apache/solr/update/processor/UpdateRequestProcessorFactoryTest.java
> index 7d53b4f..66d612f 100644
> ---
> a/solr/core/src/test/org/apache/solr/update/processor/UpdateRequestProcessorFactoryTest.java
> +++
> b/solr/core/src/test/org/apache/solr/update/processor/UpdateRequestProcessorFactoryTest.java
> @@ -16,21 +16,21 @@
>   */
>  package org.apache.solr.update.processor;
>
> -import static
> org.apache.solr.update.processor.DistributingUpdateProcessorFactory.DISTRIB_UPDATE_PARAM;
> -
>  import java.lang.invoke.MethodHandles;
> -import java.util.Arrays;
>  import java.util.ArrayList;
> +import java.util.Arrays;
>  import java.util.List;
>
> +import org.apache.solr.SolrTestCaseJ4;
>  import org.apache.solr.common.params.ModifiableSolrParams;
>  import org.apache.solr.core.SolrCore;
>  import org.apache.solr.response.SolrQueryResponse;
> -import org.apache.solr.SolrTestCaseJ4;
>  import org.junit.BeforeClass;
>  import org.slf4j.Logger;
>  import org.slf4j.LoggerFactory;
>
> +import static
> org.apache.solr.update.processor.DistributingUpdateProcessorFactory.DISTRIB_UPDATE_PARAM;
> +
>  /**
>   *
>   */
> @@ -87,7 +87,7 @@ public class UpdateRequestProcessorFactoryTest extends
> SolrTestCaseJ4 {
>  assertEquals( custom, core.getUpdateProcessingChain( "custom" ) );
>
>  // Make sure the NamedListArgs got through ok
> -assertEquals( "{name={n8=88,n9=99}}", link.args.toString() );
> +assertEquals( "{name={n8=88, n9=99}}", link.args.toString() );
>}
>
>public void testUpdateDistribChainSkipping() throws Exception {
>
>


Re: Help cleaning up the Old Moin wiki content

2019-10-25 Thread Tomás Fernández Löbbe
Yes, added.

On Fri, Oct 25, 2019 at 10:00 AM Adam Walz  wrote:

> Hi Tomás,
>
> Could you add page deletion rights as well?
>
> On Fri, Oct 25, 2019 at 9:24 AM Tomás Fernández Löbbe <
> tomasflo...@gmail.com> wrote:
>
>> Thanks! Done.
>>
>> On Fri, Oct 25, 2019 at 9:19 AM Adam Walz  wrote:
>>
>>> Thank you. Yes I’d be happy to work on Solr’s as well.
>>>
>>> On Fri, Oct 25, 2019 at 9:10 AM Tomás Fernández Löbbe <
>>> tomasflo...@gmail.com> wrote:
>>>
>>>> Added you to the Lucene space, do you also want to work on Solr’s?
>>>>
>>>> On Fri, Oct 25, 2019 at 7:45 AM Adam Walz  wrote:
>>>>
>>>>> Bumping this thread.
>>>>> I'd like to help clean up the Lucene wiki site. Can I please get edit
>>>>> rights for username adamwalz?
>>>>>
>>>>> On Tue, Oct 22, 2019 at 2:20 PM Adam Walz  wrote:
>>>>>
>>>>>> Hi Lucene devs,
>>>>>> I'd like to help clean up confluence and the sub pages below Old Moin
>>>>>> wiki. Can you please give edit karma to username adamwalz as per
>>>>>> https://cwiki.apache.org/confluence/display/lucene
>>>>>>
>>>>>> --
>>>>>> Adam Walz
>>>>>>
>>>>>
>>>>>
>>>>> --
>>>>> Adam Walz
>>>>>
>>>> --
>>> Adam Walz
>>>
>>
>
> --
> Adam Walz
>


Re: Help cleaning up the Old Moin wiki content

2019-10-25 Thread Tomás Fernández Löbbe
Thanks! Done.

On Fri, Oct 25, 2019 at 9:19 AM Adam Walz  wrote:

> Thank you. Yes I’d be happy to work on Solr’s as well.
>
> On Fri, Oct 25, 2019 at 9:10 AM Tomás Fernández Löbbe <
> tomasflo...@gmail.com> wrote:
>
>> Added you to the Lucene space, do you also want to work on Solr’s?
>>
>> On Fri, Oct 25, 2019 at 7:45 AM Adam Walz  wrote:
>>
>>> Bumping this thread.
>>> I'd like to help clean up the Lucene wiki site. Can I please get edit
>>> rights for username adamwalz?
>>>
>>> On Tue, Oct 22, 2019 at 2:20 PM Adam Walz  wrote:
>>>
>>>> Hi Lucene devs,
>>>> I'd like to help clean up confluence and the sub pages below Old Moin
>>>> wiki. Can you please give edit karma to username adamwalz as per
>>>> https://cwiki.apache.org/confluence/display/lucene
>>>>
>>>> --
>>>> Adam Walz
>>>>
>>>
>>>
>>> --
>>> Adam Walz
>>>
>> --
> Adam Walz
>


Re: Help cleaning up the Old Moin wiki content

2019-10-25 Thread Tomás Fernández Löbbe
Added you to the Lucene space, do you also want to work on Solr’s?

On Fri, Oct 25, 2019 at 7:45 AM Adam Walz  wrote:

> Bumping this thread.
> I'd like to help clean up the Lucene wiki site. Can I please get edit
> rights for username adamwalz?
>
> On Tue, Oct 22, 2019 at 2:20 PM Adam Walz  wrote:
>
>> Hi Lucene devs,
>> I'd like to help clean up confluence and the sub pages below Old Moin
>> wiki. Can you please give edit karma to username adamwalz as per
>> https://cwiki.apache.org/confluence/display/lucene
>>
>> --
>> Adam Walz
>>
>
>
> --
> Adam Walz
>


  1   2   3   >