Re: Welcome Houston Putman as Lucene/Solr committer

2019-11-22 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Welcome Houston!

Christine

From: dev@lucene.apache.org At: 11/14/19 19:39:29 To: dev@lucene.apache.org
Subject: Re: Welcome Houston Putman as Lucene/Solr committer

 
Thanks everyone!

As requested, a brief history of me:

A native Austinite, I went to The University of Texas at Austin. Back in 2013 I 
lucked into an internship with Bloomberg working on a new Search Infrastructure 
team. There I had my first exposure to Solr and built the first iteration of 
the Analytics Component. Since graduating in 2016, moving up to NYC and 
starting at Bloomberg full time, I have been working on Solr in various ways, 
from rewriting the Analytics Component to adding some features to various parts 
of SolrJ and fixing some weirdness in pivot facets.

Lately I’ve been working (and presenting) on running Solr on Kubernetes. We’ve 
open sourced a Solr Kubernetes operator 
(https://github.com/bloomberg/solr-operator), which is currently being 
developed with help from across the community. Our goal is to make this a 
standard and flexible way of running Solr in a cloud environment, which 
includes making Solr itself run better in the cloud.

I can’t wait to continue working with y’all and making Solr as great as it can 
be!


- Houston Putman
On Thu, Nov 14, 2019 at 2:24 PM Varun Thacker  wrote:

Congratulations and welcome Houston!
On Thu, Nov 14, 2019 at 9:32 AM Tomás Fernández Löbbe  
wrote:

Welcome Houston!

On Thu, Nov 14, 2019 at 9:09 AM Kevin Risden  wrote:

Congrats and welcome!

Kevin Risden
On Thu, Nov 14, 2019, 12:05 Jason Gerlowski  wrote:

Congratulations!

On Thu, Nov 14, 2019 at 11:58 AM Gus Heck  wrote:
>
> Congratulations and welcome :)
>
> On Thu, Nov 14, 2019 at 11:52 AM Namgyu Kim  wrote:
>>
>> Congratulations and welcome, Houston! :D
>>
>> On Fri, Nov 15, 2019 at 1:18 AM Ken LaPorte  wrote:
>>>
>>> Congratulations Houston! Well deserved honor.
>>>
>>>
>>>
>>> --
>>> Sent from: 
>>> https://lucene.472066.n3.nabble.com/Lucene-Java-Developer-f564358.html
>>>
>>> -
>>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>>
>
>
> --
> http://www.needhamsoftware.com (work)
> http://www.the111shift.com (play)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org




Re: [jira] [Commented] (SOLR-13961) Unsetting Nested Documents using Atomic Update leads to SolrException: undefined field

2019-11-22 Thread Thomas Wöckinger
To have a consistent API, unsetting nested documents should work the same way
as unsetting regular fields. For regular fields this is possible and also
documented.
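
For reference, here is roughly what I mean in SolrJ (a sketch only; the
collection and field names are made up, and "children" stands in for the
nested-document field):

import java.util.Collections;
import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class UnsetNestedExample {
  public static void main(String[] args) throws Exception {
    try (SolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      // Unsetting a regular field via atomic update: documented and works.
      SolrInputDocument unsetField = new SolrInputDocument();
      unsetField.addField("id", "parent-1");
      unsetField.addField("title_s", Collections.singletonMap("set", null));
      client.add("mycollection", unsetField);

      // The analogous update on a nested-document field is what currently
      // fails with "SolrException: undefined field ...".
      SolrInputDocument unsetChildren = new SolrInputDocument();
      unsetChildren.addField("id", "parent-1");
      unsetChildren.addField("children", Collections.singletonMap("set", null));
      client.add("mycollection", unsetChildren);

      client.commit("mycollection");
    }
  }
}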

On Fri, Nov 22, 2019 at 7:54 PM Bar Rotstein (Jira)  wrote:

>
> [
> https://issues.apache.org/jira/browse/SOLR-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980409#comment-16980409
> ]
>
> Bar Rotstein commented on SOLR-13961:
> -
>
> Does solr support setting a value as null at the moment?
>
>
>
> I wonder whether this is the way to go, since you can delete the child
> document using the nested atomic update delete operation.
>
> > Unsetting Nested Documents using Atomic Update leads to SolrException:
> undefined field
> >
> --
> >
> > Key: SOLR-13961
> > URL: https://issues.apache.org/jira/browse/SOLR-13961
> > Project: Solr
> >  Issue Type: Bug
> >  Security Level: Public(Default Security Level. Issues are Public)
> >  Components: Tests, UpdateRequestProcessors
> >Affects Versions: master (9.0), 8.3, 8.4
> >Reporter: Thomas Wöckinger
> >Priority: Critical
> >  Labels: easyfix
> >  Time Spent: 10m
> >  Remaining Estimate: 0h
> >
> > Using null or empty collection to unset nested documents (as suggested
> by documentation) leads to SolrException: undefined field ... .
>
>
>
> --
> This message was sent by Atlassian Jira
> (v8.3.4#803005)
>
> -
> To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
> For additional commands, e-mail: issues-h...@lucene.apache.org
>
>


Re: Lucene/Solr 8.4

2019-11-22 Thread Jan Høydahl
+1

--
Jan Høydahl, search solution architect
Cominvent AS - www.cominvent.com




Re: [jira] [Commented] (SOLR-13961) Unsetting Nested Documents using Atomic Update leads to SolrException: undefined field

2019-11-22 Thread Thomas Wöckinger
I am not sure who is responsible for Atomic Updates and nested documents, but
it would be really great to get
https://github.com/apache/lucene-solr/pull/1030 into 8.4.

On Fri, Nov 22, 2019 at 7:17 PM Thomas Wöckinger (Jira) 
wrote:

>
> [
> https://issues.apache.org/jira/browse/SOLR-13961?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel=16980384#comment-16980384
> ]
>
> Thomas Wöckinger commented on SOLR-13961:
> -
>
> [~dsmiley] or [~gerlowskija] not sure who is responsible for Atomic
> Update. Maybe you can forward this issue. Thx a lot
>
> > Unsetting Nested Documents using Atomic Update leads to SolrException:
> undefined field
> >
> --
> >
> > Key: SOLR-13961
> > URL: https://issues.apache.org/jira/browse/SOLR-13961
> > Project: Solr
> >  Issue Type: Bug
> >  Security Level: Public(Default Security Level. Issues are Public)
> >  Components: Tests, UpdateRequestProcessors
> >Affects Versions: master (9.0), 8.3, 8.4
> >Reporter: Thomas Wöckinger
> >Priority: Critical
> >  Labels: easyfix
> >  Time Spent: 10m
> >  Remaining Estimate: 0h
> >
> > Using null or empty collection to unset nested documents (as suggested
> by documentation) leads to SolrException: undefined field ... .
>
>
>
> --
> This message was sent by Atlassian Jira
> (v8.3.4#803005)
>
> -
> To unsubscribe, e-mail: issues-unsubscr...@lucene.apache.org
> For additional commands, e-mail: issues-h...@lucene.apache.org
>
>


Re: Solr Prometheus Exporter failing with "Connection pool shut down" on large cluster

2019-11-22 Thread Richard Goodman
Sorry, I meant to say "Hey Alex".

It's because this time around I did reply-all, and Serj's name was
highlighted in my email!


Re: Solr Prometheus Exporter failing with "Connection pool shut down" on large cluster

2019-11-22 Thread Richard Goodman
Hey Serj,

So I've applied your patch, deployed the latest build of the
solr-prometheus exporter, and left it running for a little bit. It's had
some interesting results.

As I mentioned, I split up my config from one monolithic config into separate
configs, and have an instance running for each one *(they are split up by the
groups defined in the /solr/admin/metrics endpoint)*. I have split them up as
follows:

   - Core
   - Node
   - JVM
   - Jetty

And I was getting the connection pool shut down, for every single type.
However, this is not the case any more.

*Core*
Whilst I am no longer getting the Connection Pool Shutdown error, I am now
getting the likes of:
ERROR - 2019-11-22 16:44:17.522;
org.apache.solr.prometheus.scraper.SolrScraper; failed to request:
/admin/metrics Timeout occured while waiting response from server at:
http://127.0.0.1:8085/solr

Because of this, I am now no longer collecting any core metrics at all. I've
yet to figure out where in the code this error is reported, and whether
anything can be done about it.

*Node*
I did get an NPE that I've never seen before; however, this appeared to be a
one-off, and this group is now also reporting zero connection pool shut down
issues, which I was getting previously. I'll attach a dump of the Node NPE
below as an "appendix".

*JVM*
Completely seems to resolve the issue, have no longer been getting it.

*Jetty*
Unfortunately here, I am still getting connection pool shut down errors,
again I will upload a dump of the error in the "appendix"

Whilst this hasn't appeared to completely fix the issue, it's definitely a
step in the right direction. As I said, for Node and JVM I am no longer
getting this, and have seen an improvement in the amount of metrics being
collected. But it looks like there is still some work we can look at and do.


*Appendix*

*Node NPE Error*
ERROR - 2019-11-22 16:46:38.594;
org.apache.solr.prometheus.scraper.SolrScraper; failed to request:
/admin/metrics Timeout occured while waiting response from server at:
http://127.0.0.1:8083/solr
WARN  - 2019-11-22 16:46:59.105; org.apache.solr.prometheus.scraper.Async;
Error occurred during metrics collection
java.util.concurrent.ExecutionException: java.lang.NullPointerException
at
java.util.concurrent.CompletableFuture.reportGet(CompletableFuture.java:357)
~[?:1.8.0_141]
at
java.util.concurrent.CompletableFuture.get(CompletableFuture.java:1895)
~[?:1.8.0_141]
at
org.apache.solr.prometheus.scraper.Async.lambda$null$1(Async.java:45)
~[solr-prometheus-exporter-7.7.2-SNAPSHOT.jar:7.7.2-SNAPSHOT
e5d04ab6a061a02e47f9e6df62a3cfa69632987b - jenkins - 2019-11-22 16:23:03]
at
java.util.stream.ForEachOps$ForEachOp$OfRef.accept(ForEachOps.java:184)
~[?:1.8.0_141]
at
java.util.stream.ReferencePipeline$2$1.accept(ReferencePipeline.java:175)
~[?:1.8.0_141]
at
java.util.ArrayList$ArrayListSpliterator.forEachRemaining(ArrayList.java:1374)
~[?:1.8.0_141]
at
java.util.stream.AbstractPipeline.copyInto(AbstractPipeline.java:481)
~[?:1.8.0_141]
at
java.util.stream.AbstractPipeline.wrapAndCopyInto(AbstractPipeline.java:471)
~[?:1.8.0_141]
at
java.util.stream.ForEachOps$ForEachOp.evaluateSequential(ForEachOps.java:151)
~[?:1.8.0_141]
at
java.util.stream.ForEachOps$ForEachOp$OfRef.evaluateSequential(ForEachOps.java:174)
~[?:1.8.0_141]
at
java.util.stream.AbstractPipeline.evaluate(AbstractPipeline.java:234)
~[?:1.8.0_141]
at
java.util.stream.ReferencePipeline.forEach(ReferencePipeline.java:418)
~[?:1.8.0_141]
at
org.apache.solr.prometheus.scraper.Async.lambda$waitForAllSuccessfulResponses$3(Async.java:43)
~[solr-prometheus-exporter-7.7.2-SNAPSHOT.jar:7.7.2-SNAPSHOT
e5d04ab6a061a02e47f9e6df62a3cfa69632987b - jenkins - 2019-11-22 16:23:03]
at
java.util.concurrent.CompletableFuture.uniExceptionally(CompletableFuture.java:870)
~[?:1.8.0_141]
at
java.util.concurrent.CompletableFuture$UniExceptionally.tryFire(CompletableFuture.java:852)
~[?:1.8.0_141]
at
java.util.concurrent.CompletableFuture.postComplete(CompletableFuture.java:474)
~[?:1.8.0_141]
at
java.util.concurrent.CompletableFuture$AsyncSupply.run(CompletableFuture.java:1595)
~[?:1.8.0_141]
at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.lambda$execute$0(ExecutorUtil.java:209)
~[solr-solrj-7.7.2-SNAPSHOT.jar:7.7.2-SNAPSHOT
e5d04ab6a061a02e47f9e6df62a3cfa69632987b - jenkins - 2019-11-22 16:23:11]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
[?:1.8.0_141]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
[?:1.8.0_141]
at java.lang.Thread.run(Thread.java:748) [?:1.8.0_141]
Caused by: java.lang.NullPointerException
at
org.apache.solr.prometheus.scraper.SolrScraper.request(SolrScraper.java:112)
~[solr-prometheus-exporter-7.7.2-SNAPSHOT.jar:7.7.2-SNAPSHOT

Re: Solr Prometheus Exporter failing with "Connection pool shut down" on large cluster

2019-11-22 Thread Alex Jablonski
Hey Richard,

I'd definitely love to hear whether this improves things for you. According
to Guava's documentation, the cache can start evicting items when it gets
close to the limit (
https://github.com/google/guava/wiki/CachesExplained#size-based-eviction),
not just when it reaches it, so if this does end up helping you out, that
could be the reason. I haven't dug into the implementation of "close to the
maximum" that Guava's cache uses, or whether that would happen in the
course of building up the maps to ping all of the nodes (which is where the
issue for us seemed to be), but it's at least a possible avenue to explore
further.
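
For reference, a minimal sketch of the sort of cache change we have in mind
(illustrative only; the timeout, names, and error handling are placeholders
rather than the exact code in the PR):

import java.io.IOException;
import java.util.concurrent.TimeUnit;

import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.RemovalListener;

import org.apache.solr.client.solrj.impl.HttpSolrClient;

public class HostClientCacheSketch {

  // Illustrative value; the real exporter would derive this from its configuration.
  private static final long SCRAPE_INTERVAL_SECONDS = 60;

  // Instead of maximumSize(100), expire entries that have not been touched for
  // a couple of scrape intervals, and close a client only when its entry is
  // actually removed from the cache.
  private final Cache<String, HttpSolrClient> hostClientCache =
      CacheBuilder.newBuilder()
          .expireAfterAccess(2 * SCRAPE_INTERVAL_SECONDS, TimeUnit.SECONDS)
          .removalListener((RemovalListener<String, HttpSolrClient>) notification -> {
            try {
              notification.getValue().close();
            } catch (IOException e) {
              // We are discarding this client anyway; ignore close failures.
            }
          })
          .build();

  public HttpSolrClient clientFor(String baseUrl) throws Exception {
    // On a miss, create a new client; a client that has been evicted and
    // closed is never handed back to callers, unlike with size-based eviction.
    return hostClientCache.get(baseUrl,
        () -> new HttpSolrClient.Builder(baseUrl).build());
  }
}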

In any case, thanks for trying this out!


Re: Solr Prometheus Exporter failing with "Connection pool shut down" on large cluster

2019-11-22 Thread Richard Goodman
Hi Alex,

It makes me really happy to see an email about this. I've been working for a
little while on setting up the Prometheus exporter for our clusters. I spent a
good amount of time setting up the config, and started getting some really
decent graphs in Grafana for metrics we've never been able to collect before.

For our stage environment this worked like a charm, so I shortly rolled it out
to our live environment. This is when I started to run into trouble.

I too was getting the exact problem you were facing. I then decided to split
out all of my config so I had one config dedicated to JVM metric collection,
one dedicated to node-level metrics, and so on, but I was still getting loads
of errors coming through, which confused me.

Our clusters are typically 96 nodes, so based on your report I'm not sure how
I would be hitting this issue. One theory I had was that the timeouts happen
on the core admin API *(our indexes range between 5GB and 20GB in size each)*,
and our clusters will typically be around tens of TB in size. Because of this,
when we have any replica state change, we notice significant delays in
/solr/admin/cores, sometimes taking a few minutes to return.

Because of this, I think there is a strong indication that the core admin API
is the problem here. The reason is that we have one unique cluster that
typically stores 30 days' worth of data within its collections: when a new day
comes along, we create a collection for that day, and any collections older
than 30 days get dropped. Documents within this cluster typically don't change
either, so there's never really any state change, which makes the cluster
significantly more reliable for us, whereas our other main group of clusters
goes through a significant amount of change each day.

I'm currently applying your patch to our build, and will deploy it and keep
you updated on whether it helps. At the moment, I'm also looking at whether
there is a way to default indexInfo=false on the core admin API, which could
help us here *(using that makes the response time insanely fast as usual;
however, it does remove some statistics)*.
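
To make the indexInfo idea concrete, this is the sort of hand-rolled SolrJ
call I have in mind *(a sketch only; the host is illustrative, and this is not
something the exporter itself does today)*:

import org.apache.solr.client.solrj.SolrRequest;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.request.GenericSolrRequest;
import org.apache.solr.common.params.ModifiableSolrParams;

public class CoreStatusWithoutIndexInfo {
  public static void main(String[] args) throws Exception {
    try (HttpSolrClient client =
        new HttpSolrClient.Builder("http://localhost:8983/solr").build()) {
      ModifiableSolrParams params = new ModifiableSolrParams();
      params.set("action", "STATUS");
      params.set("indexInfo", "false"); // skip the per-core index statistics
      GenericSolrRequest status =
          new GenericSolrRequest(SolrRequest.METHOD.GET, "/admin/cores", params);
      System.out.println(status.process(client).getResponse());
    }
  }
}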

With that though, it's very experimental, and I'm not sure if it's the best
approach, but you have to start somewhere, right?

I'd be keen to look into this issue with you, as it's been a problem for us
also.

I'll reply again with any results I find from applying your patch.

Cheers,

On Wed, 20 Nov 2019 at 20:34, Alex Jablonski 
wrote:

> Pull request is here: https://github.com/apache/lucene-solr/pull/1022/
>
> Thanks!
> Alex Jablonski
>
> On Wed, Nov 20, 2019 at 1:36 PM Alex Jablonski <
> ajablon...@thoughtworks.com> wrote:
>
>> Hi there!
>>
>> My colleague and I have run into an issue that seems to appear when
>> running the Solr Prometheus exporter in SolrCloud mode against a large (>
>> 100 node) cluster. The symptoms we're observing are "connection pool shut
>> down" exceptions in the logs and the inability to collect metrics from more
>> than 100 nodes in the cluster.
>>
>> We think we've traced down the issue to
>> lucene-solr/solr/contrib/prometheus-exporter/src/java/org/apache/solr/prometheus/scraper/SolrCloudScraper.java
>> . In that class, hostClientCache exists as a cache of HttpSolrClients
>> (currently having fixed size 100) that, on evicting a client from the
>> cache, closes the client's connection. The hostClientCache is used in
>> createHttpSolrClients to return a map of base URLs to HttpSolrClients.
>>
>> Given, say, 300 base URLs, createHttpSolrClients will happily add those
>> base URLs to the cache, and the "get" method on the cache will happily
>> return the new additions to the cache. But on adding the 101st
>> HttpSolrClient to the cache, the first HttpSolrClient gets evicted and
>> closed. This repeats itself until the only open clients we have are to base
>> URLs 201 through 300; clients for the first 200 base URLs will be returned,
>> but will already have been closed. When we later use the result of
>> createHttpSolrClients to collect metrics, expecting valid and open
>> HttpSolrClients, we fail to connect when using any of those clients that
>> have already been closed, leading to the "Connection pool shut down"
>> exception and not collecting metrics from those nodes.
>>
>> Our idea for a fix was to change the existing cache to, instead of having
>> a fixed maximum size, use `expireAfterAccess` with a timeout that's a
>> multiple of the scrape interval (twice the scrape interval?). We wanted to
>> confirm a few things:
>>
>> 1. Has this issue been reported before, and if so, is there another fix
>> in progress already?
>> 2. Does this approach seem desirable?
>> 3. If so, are there any opinions on what the cache timeout should be
>> besides just double the scrape interval?
>>
>> We'll also open a PR shortly with the changes we're proposing and link
>> here. Please let me know if any of the above is unclear or incorrect.
>>
>> Thanks!
>> Alex Jablonski
>>
>>

-- 

Richard Goodman | Data Infrastructure Engineer

Re: Lucene/Solr 8.4

2019-11-22 Thread Namgyu Kim
+1



Re: Lucene/Solr 8.4

2019-11-22 Thread Adrien Grand
Yes Mike, I'm volunteering to be RM.


-- 
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 8.4

2019-11-22 Thread Joel Bernstein
+1

Joel Bernstein
http://joelsolr.blogspot.com/




Re: Lucene/Solr 8.4

2019-11-22 Thread Michael Sokolov
+1 from me - does this mean you (Adrien) are volunteering to be RM?


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 8.4

2019-11-22 Thread Erick Erickson
+1


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: [lucene-solr] branch master updated: LUCENE-9054: Fix reproduceJenkinsFailures.py to not overwrite junit XML files when retrying

2019-11-22 Thread Alan Woodward
Hey Hoss, this is failing precommit because there’s still a nocommit comment:

>failures[testcase] += 1
>break
> +  # have to play nice with 'ant clean' ... nocommit: test this
> +  newDirPath = os.path.join('repro-reports', newSubDir, dir)
> +  os.makedirs(newDirPath, exist_ok=True)



> On 21 Nov 2019, at 22:28, hoss...@apache.org wrote:
> 
> This is an automated email from the ASF dual-hosted git repository.
> 
> hossman pushed a commit to branch master
> in repository https://gitbox.apache.org/repos/asf/lucene-solr.git
> 
> 
> The following commit(s) were added to refs/heads/master by this push:
>new 2d1e67c  LUCENE-9054: Fix reproduceJenkinsFailures.py to not overwrite 
> junit XML files when retrying
> 2d1e67c is described below
> 
> commit 2d1e67c8b4adff3e1cb092fa69552529459d34d3
> Author: Chris Hostetter 
> AuthorDate: Thu Nov 21 15:28:46 2019 -0700
> 
>   LUCENE-9054: Fix reproduceJenkinsFailures.py to not overwrite junit XML 
> files when retrying
> ---
> dev-tools/scripts/reproduceJenkinsFailures.py | 25 -
> lucene/CHANGES.txt|  2 ++
> 2 files changed, 22 insertions(+), 5 deletions(-)
> 
> diff --git a/dev-tools/scripts/reproduceJenkinsFailures.py 
> b/dev-tools/scripts/reproduceJenkinsFailures.py
> index 037c62a..2cb86dd 100644
> --- a/dev-tools/scripts/reproduceJenkinsFailures.py
> +++ b/dev-tools/scripts/reproduceJenkinsFailures.py
> @@ -17,6 +17,7 @@ import argparse
> import http.client
> import os
> import re
> +import shutil
> import subprocess
> import sys
> import time
> @@ -211,7 +212,7 @@ def runTests(testIters, modules, tests):
>finally:
>  os.chdir(cwd)
> 
> -def printReport(testIters, location):
> +def printAndMoveReports(testIters, newSubDir, location):
>  failures = {}
>  for start in ('lucene/build', 'solr/build'):
>for (dir, _, files) in os.walk(start):
> @@ -221,12 +222,17 @@ def printReport(testIters, location):
>  testcase = testOutputFileMatch.group(1)
>  if testcase not in failures:
>failures[testcase] = 0
> -  with open(os.path.join(dir, file), encoding='UTF-8') as 
> testOutputFile:
> +  filePath = os.path.join(dir, file)
> +  with open(filePath, encoding='UTF-8') as testOutputFile:
>for line in testOutputFile:
>  errorFailureMatch = reErrorFailure.search(line)
>  if errorFailureMatch is not None:
>failures[testcase] += 1
>break
> +  # have to play nice with 'ant clean' ... nocommit: test this
> +  newDirPath = os.path.join('repro-reports', newSubDir, dir)
> +  os.makedirs(newDirPath, exist_ok=True)
> +  os.rename(filePath, os.path.join(newDirPath, file))
>  print("[repro] Failures%s:" % location)
>  for testcase in sorted(failures, key=lambda t: (failures[t],t)): # sort by 
> failure count, then by testcase 
>print("[repro]   %d/%d failed: %s" % (failures[testcase], testIters, 
> testcase))
> @@ -246,10 +252,17 @@ def main():
>localGitBranch = getLocalGitBranch()
> 
>  try:
> +# have to play nice with ant clean, so printAndMoveReports will move all 
> the junit XML files here...
> +print('[repro] JUnit rest result XML files will be moved to: 
> ./repro-reports')
> +if os.path.isdir('repro-reports'):
> +  print('[repro]   Deleting old ./repro-reports');
> +  shutil.rmtree('repro-reports')
>prepareWorkspace(config.useGit, revisionFromLog)
>modules = groupTestsByModule(tests)
>runTests(config.testIters, modules, tests)
> -failures = printReport(config.testIters, '')
> +failures = printAndMoveReports(config.testIters, 'orig',
> +   ' w/original seeds' + (' at %s' % 
> revisionFromLog if config.useGit else ''))
> +  
> 
>if config.useGit:
>  # Retest 100% failures at the tip of the branch
> @@ -264,7 +277,8 @@ def main():
>prepareWorkspace(True, branchFromLog)
>modules = groupTestsByModule(tests)
>runTests(config.testIters, modules, tests)
> -failures = printReport(config.testIters, ' at the tip of %s' % 
> branchFromLog)
> +failures = printAndMoveReports(config.testIters, 'branch-tip',
> +   ' original seeds at the tip of %s' % 
> branchFromLog)
> 
># Retest 100% tip-of-branch failures without a seed
>oldTests = tests
> @@ -278,7 +292,8 @@ def main():
>  prepareWorkspace(False, branchFromLog)
>  modules = groupTestsByModule(tests)
>  runTests(config.testIters, modules, tests)
> -  printReport(config.testIters, ' at the tip of %s without a seed' % 
> branchFromLog)
> +  printAndMoveReports(config.testIters, 'branch-tip-no-seed',
> +  ' at the tip of %s without a seed' % 
> branchFromLog)
>  except Exception as e:
>

Re: Lucene/Solr 8.4

2019-11-22 Thread Ignacio Vera
+1



Re: Lucene/Solr 8.4

2019-11-22 Thread jim ferenczi
+1



Re: Lucene/Solr 8.4

2019-11-22 Thread Ishan Chattopadhyaya
+1


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene/Solr 8.4

2019-11-22 Thread Atri Sharma
+1



-- 
Regards,

Atri
Apache Concerted

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Lucene/Solr 8.4

2019-11-22 Thread Adrien Grand
Hello all,

With Thanksgiving and then Christmas coming up, this is going to be a
busy time for most of us. I'd like to get a new release before the end
of the year, so I'm proposing the following schedule for Lucene/Solr
8.4:
 - cutting the branch on December 12th
 - building the first RC on December 14th
and hopefully we'll have a release in the following week.

-- 
Adrien

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org