Re: Lucene (unexpected) fsync on existing segments

2021-03-11 Thread Rahul Goswami
I can create a Jira and assign it to myself if that's ok (?). I think this
can help improve commit performance.
Also, to answer your question, we have indexes sometimes going into
multiple terabytes. Using the replication handler for backup would mean
requiring a disk capacity more than 2x the index size on the machine at all
times, which might not be feasible. So we directly back the index up from
the Solr node to a remote repository.

Thanks,
Rahul

On Thu, Mar 11, 2021 at 4:09 PM Michael Sokolov  wrote:

> Well, it certainly doesn't seem necessary to fsync files that are
> unchanged and have already been fsync'ed. Maybe there's an opportunity
> to improve it? On the other hand, support for external processes
> reading Lucene index files isn't likely to become a feature of Lucene.
> You might want to consider using Solr replication to power your
> backup?
>
> On Thu, Mar 11, 2021 at 2:52 PM Rahul Goswami 
> wrote:
> >
> > Thanks Michael. I thought since this discussion is closer to the code
> than most discussions on the solr-users list, it seemed like a more
> appropriate forum. Will be mindful going forward.
> > On your point about new segments, I attached a debugger and tried to do
> a new commit (just pure Solr commit, no backup process running), and the
> code indeed does fsync on a pre-existing segment file. Hence I was a bit
> baffled since it challenged my fundamental understanding that segment files
> once written are immutable, no matter what (unless picked up for a merge of
> course). Hence I thought of reaching out, in case there are scenarios where
> this might happen which I might be unaware of.
> >
> > Thanks,
> > Rahul
> >
> > On Thu, Mar 11, 2021 at 2:38 PM Michael Sokolov 
> wrote:
> >>
> >> This isn't a support forum; solr-users@ might be more appropriate. On
> >> that list someone might have a better idea about how the replication
> >> handler gets its list of files. This would be a good list to try if
> >> you wanted to propose a fix for the problem you're having. But since
> >> you're here -- it looks to me as if IndexWriter indeed syncs all "new"
> >> files in the current segments being committed; look in
> >> IndexWriter.startCommit and SegmentInfos.files. Caveat: (1) I'm
> >> looking at this code for the first time, and (2) things may have been
> >> different in 7.7.2? Sorry I don't know for sure, but are you sure that
> >> your backup process is not attempting to copy one of the new files?
> >>
> >> On Thu, Mar 11, 2021 at 1:35 PM Rahul Goswami 
> wrote:
> >> >
> >> > Hello,
> >> > Just wanted to follow up one more time to see if this is the right
> forum for my question? Or is this suitable for some other mailing list?
> >> >
> >> > Best,
> >> > Rahul
> >> >
> >> > On Sat, Mar 6, 2021 at 3:57 PM Rahul Goswami 
> wrote:
> >> >>
> >> >> Hello everyone,
> >> >> Following up on my question in case anyone has any idea. Why it's
> important to know this is because I am thinking of allowing the backup
> process to not hold any lock on the index files, which should allow the
> fsync during parallel commits. BUT, in case doing an fsync on existing
> segment files in a saved commit point DOES have an effect, it might render
> the backed up index in a corrupt state.
> >> >>
> >> >> Thanks,
> >> >> Rahul
> >> >>
> >> >> On Fri, Mar 5, 2021 at 3:04 PM Rahul Goswami 
> wrote:
> >> >>>
> >> >>> Hello,
> >> >>> We have a process which backs up the index (Solr 7.7.2) on a
> schedule. The way we do it is we first save a commit point on the index and
> then using Solr's /replication handler, get the list of files in that
> generation. After the backup completes, we release the commit point (Please
> note that this is a separate backup process outside of Solr and not the
> backup command of the /replication handler)
> >> >>> The assumption is that while the commit point is saved, no changes
> happen to the segment files in the saved generation.
> >> >>>
> >> >>> Now the issue... The backup process opens the index files in a
> shared READ mode, preventing writes. This is causing any parallel commits
> to fail as it seems to be complaining about the index files to be locked by
> another process(the backup process). Upon debugging, I see that fsync is
> being called during commit on already existing segment files which is not
> expected. So, my question is, is there any reason for lucene to call fsync
> on already existing segment files?
> >> >>>
> >> >>> The line of code I am referring to is as below:
> >> >>> try (final FileChannel file = FileChannel.open(fileToSync, isDir ?
> StandardOpenOption.READ : StandardOpenOption.WRITE))
> >> >>>
> >> >>> in method fsync(Path fileToSync, boolean isDir) of the class file
> >> >>>
> >> >>> lucene\core\src\java\org\apache\lucene\util\IOUtils.java
> >> >>>
> >> >>> Thanks,
> >> >>> Rahul
> >>
> >> -
> >> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> >> For additional commands, e-mail: dev-h...@lucene.apache.org
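
For reference, the IOUtils.fsync method quoted in this thread amounts to
roughly the following (a minimal sketch, not Lucene's exact code):

import java.io.IOException;
import java.nio.channels.FileChannel;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

final class FsyncSketch {
  // A directory can only be opened for READ; a regular file is opened for
  // WRITE so that force() may flush its pending writes to stable storage.
  static void fsync(Path fileToSync, boolean isDir) throws IOException {
    try (FileChannel file = FileChannel.open(
        fileToSync, isDir ? StandardOpenOption.READ : StandardOpenOption.WRITE)) {
      file.force(true); // true = also sync the file's metadata
    }
  }
}

The WRITE open on regular files is exactly what collides with a backup process
holding a shared READ lock on the same segment files, as described above.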

Re: [lucene] branch main updated: Always include errorprone dependency, even if we're not checking. This ensures consistent use patterns across JVMs.

2021-03-11 Thread Dawid Weiss
> I know it's not needed, but "if" statements around dependencies look
> strange to me!
>

It is just code, it's fine. That's the beauty of it.


> I can try to merge it as a proof of concept.
>

You can have three remotes and cherry pick between repos. ;) This,
amazingly, would work just fine.

Uwe
>
> On March 11, 2021 9:27:41 PM UTC, dwe...@apache.org wrote:
>>
>> This is an automated email from the ASF dual-hosted git repository.
>>
>> dweiss pushed a commit to branch main
>> in repository https://gitbox.apache.org/repos/asf/lucene.git
>>
>>
>> The following commit(s) were added to refs/heads/main by this push:
>>  new 8bbcc39  Always include errorprone dependency, even if we're not 
>> checking. This ensures consistent use patterns across JVMs.
>> 8bbcc39 is described below
>>
>> commit 8bbcc395832ccd109794f4b85a71a59a0af2d4f4
>> Author: Dawid Weiss 
>> AuthorDate: Thu Mar 11 22:27:25 2021 +0100
>>
>> Always include errorprone dependency, even if we're not checking. This 
>> ensures consistent use patterns across JVMs.
>> --
>>  gradle/validation/error-prone.gradle | 248 
>> ++-
>>  1 file changed, 125 insertions(+), 123 deletions(-)
>>
>> diff --git a/gradle/validation/error-prone.gradle 
>> b/gradle/validation/error-prone.gradle
>> index 2cec644..edcbaed 100644
>> --- a/gradle/validation/error-prone.gradle
>> +++ b/gradle/validation/error-prone.gradle
>> @@ -15,10 +15,9 @@
>>   * limitations under the License.
>>   */
>>
>> -// LUCENE-9650: Errorprone on master/gradle no longer works with JDK-16
>> -if (rootProject.runtimeJavaVersion > JavaVersion.VERSION_15) {
>> +def includeErrorProne = rootProject.runtimeJavaVersion <= 
>> JavaVersion.VERSION_15;
>> +if (!includeErrorProne) {
>>logger.warn("WARNING: errorprone disabled (won't work with JDK 
>> ${rootProject.runtimeJavaVersion})")
>> -  return
>>  }
>>
>>  allprojects { prj ->
>> @@ -29,127 +28,130 @@ allprojects { prj ->
>>errorprone("com.google.errorprone:error_prone_core")
>>  }
>>
>> -tasks.withType(JavaCompile) { task ->
>> -  options.errorprone.disableWarningsInGeneratedCode = true
>> -  options.errorprone.errorproneArgs = [
>> -  // test
>> -  '-Xep:ExtendingJUnitAssert:OFF',
>> -  '-Xep:UseCorrectAssertInTests:OFF',
>> -  '-Xep:DefaultPackage:OFF',
>> -  '-Xep:FloatingPointLiteralPrecision:OFF',
>> -  '-Xep:CatchFail:OFF',
>> -  '-Xep:TryFailThrowable:OFF',
>> -  '-Xep:MathAbsoluteRandom:OFF',
>> -  '-Xep:AssertionFailureIgnored:OFF',
>> -  '-Xep:JUnit4TestNotRun:OFF',
>> -  '-Xep:FallThrough:OFF',
>> -  '-Xep:CatchAndPrintStackTrace:OFF',
>> -  '-Xep:ToStringReturnsNull:OFF',
>> -  '-Xep:ArrayAsKeyOfSetOrMap:OFF',
>> -  '-Xep:StaticAssignmentInConstructor:OFF',
>> -  '-Xep:SelfAssignment:OFF',
>> -  '-Xep:InvalidPatternSyntax:OFF',
>> -  '-Xep:MissingFail:OFF',
>> -  '-Xep:LossyPrimitiveCompare:OFF',
>> -  '-Xep:ComparableType:OFF',
>> -  '-Xep:InfiniteRecursion:OFF',
>> -  '-Xep:MisusedDayOfYear:OFF',
>> -  '-Xep:FloatingPointAssertionWithinEpsilon:OFF',
>> +// LUCENE-9650: Errorprone on master/gradle no longer works with JDK-16
>> +if (includeErrorProne) {
>> +  tasks.withType(JavaCompile) { task ->
>> +options.errorprone.disableWarningsInGeneratedCode = true
>> +options.errorprone.errorproneArgs = [
>> +// test
>> +'-Xep:ExtendingJUnitAssert:OFF',
>> +'-Xep:UseCorrectAssertInTests:OFF',
>> +'-Xep:DefaultPackage:OFF',
>> +'-Xep:FloatingPointLiteralPrecision:OFF',
>> +'-Xep:CatchFail:OFF',
>> +'-Xep:TryFailThrowable:OFF',
>> +'-Xep:MathAbsoluteRandom:OFF',
>> +'-Xep:AssertionFailureIgnored:OFF',
>> +'-Xep:JUnit4TestNotRun:OFF',
>> +'-Xep:FallThrough:OFF',
>> +'-Xep:CatchAndPrintStackTrace:OFF',
>> +'-Xep:ToStringReturnsNull:OFF',
>> +'-Xep:ArrayAsKeyOfSetOrMap:OFF',
>> +'-Xep:StaticAssignmentInConstructor:OFF',
>> +'-Xep:SelfAssignment:OFF',
>> +'-Xep:InvalidPatternSyntax:OFF',
>> +'-Xep:MissingFail:OFF',
>> +'-Xep:LossyPrimitiveCompare:OFF',
>> +'-Xep:ComparableType:OFF',
>> +'-Xep:InfiniteRecursion:OFF',
>> +'-Xep:MisusedDayOfYear:OFF',
>> +'-Xep:FloatingPointAssertionWithinEpsilon:OFF',
>>
>> -  '-Xep:ThrowNull:OFF',
>> -  '-Xep:StaticGuardedByInstance:OFF',
>> -  '-Xep:ArrayHashCode:OFF',
>> -  '-Xep:ArrayEquals:OFF',
>> -  '-Xep:IdentityBinaryExpression:OFF',
>> -  '-Xep:ComplexBooleanConstant:OFF',
>> -  '-Xep:ComplexBooleanConstant:OFF',
>> -  '-Xep:StreamResourceLeak:OFF',
>> -  

RE: [lucene] branch main updated: Always include errorprone dependency, even if we're not checking. This ensures consistent use patterns across JVMs.

2021-03-11 Thread Uwe Schindler
Hi Dawid,

I simply tried it out to merge/cherry-pick something to another checkout. I did
it with TortoiseGit, but this just made it simple to set up (GUI, no crazy
cmdline).

I cherry-picked the lucene commit and applied it to solr. Here is how I have my
local setup:

*   I have three checkouts and git repos: lucene-solr.git, lucene.git and 
solr.git. I don’t want to mix them together, so I keep them separate. All are in 
the same top-level folder.
*   On each repo I added the 2 other ones as local remotes (add a remote 
named “local-solr” with URL “../solr”, same for the other ones)
*   I pulled all repos, to be sure to be up to date
*   On the solr repo, I used the “show log” TortoiseGit functionality, 
switched to “remotes/local-lucene/main”, right-clicked your commit and selected 
“cherry-pick this commit”. Voilà, done! Just pushed the repo and all was fine.

 

With the command line it might be more complicated, but I’m happy!
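
A command-line version of the same cherry-pick (a sketch assuming the sibling
checkouts described above and the commit hash from this thread):

cd solr
git remote add local-lucene ../lucene   # sibling checkout as a local remote
git fetch local-lucene                  # make its commits visible here
git cherry-pick 8bbcc395832ccd109794f4b85a71a59a0af2d4f4
git push origin main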

 

Uwe

 

-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
https://www.thetaphi.de
eMail: u...@thetaphi.de

From: Uwe Schindler  
Sent: Thursday, March 11, 2021 10:53 PM
To: dev@lucene.apache.org
Subject: Re: [lucene] branch main updated: Always include errorprone 
dependency, even if we're not checking. This ensures consistent use patterns 
across JVMs.

 

Should we maybe merge this also to Solr?

I know it's not needed, but "if" statements around dependencies look strange to 
me!

I can try to merge it as a proof of concept. 

Uwe

On March 11, 2021 9:27:41 PM UTC, dwe...@apache.org wrote:

This is an automated email from the ASF dual-hosted git repository.

dweiss pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/lucene.git


The following commit(s) were added to refs/heads/main by this push:
 new 8bbcc39  Always include errorprone dependency, even if we're not 
checking. This ensures consistent use patterns across JVMs.
8bbcc39 is described below

commit 8bbcc395832ccd109794f4b85a71a59a0af2d4f4
Author: Dawid Weiss 
AuthorDate: Thu Mar 11 22:27:25 2021 +0100

Always include errorprone dependency, even if we're not checking. This 
ensures consistent use patterns across JVMs.


---

 gradle/validation/error-prone.gradle | 248 ++-
 1 file changed, 125 insertions(+), 123 deletions(-)

diff --git a/gradle/validation/error-prone.gradle 
b/gradle/validation/error-prone.gradle
index 2cec644..edcbaed 100644
--- a/gradle/validation/error-prone.gradle
+++ b/gradle/validation/error-prone.gradle
@@ -15,10 +15,9 @@
  * limitations under the License.
  */
 
-// LUCENE-9650: Errorprone on master/gradle no longer works with JDK-16
-if (rootProject.runtimeJavaVersion > JavaVersion.VERSION_15) {
+def includeErrorProne = rootProject.runtimeJavaVersion <= 
JavaVersion.VERSION_15;
+if (!includeErrorProne) {
   logger.warn("WARNING: errorprone disabled (won't work with JDK 
${rootProject.runtimeJavaVersion})")
-  return
 }
 
 allprojects { prj ->
@@ -29,127 +28,130 @@ allprojects { prj ->
   errorprone("com.google.errorprone:error_prone_core")
 }
 
-tasks.withType(JavaCompile) { task ->
-  options.errorprone.disableWarningsInGeneratedCode = true
-  options.errorprone.errorproneArgs = [
-  // test
-  '-Xep:ExtendingJUnitAssert:OFF',
-  '-Xep:UseCorrectAssertInTests:OFF',
-  '-Xep:DefaultPackage:OFF',
-  '-Xep:FloatingPointLiteralPrecision:OFF',
-  '-Xep:CatchFail:OFF',
-  '-Xep:TryFailThrowable:OFF',
-  '-Xep:MathAbsoluteRandom:OFF',
-  '-Xep:AssertionFailureIgnored:OFF',
-  '-Xep:JUnit4TestNotRun:OFF',
-  '-Xep:FallThrough:OFF',
-  '-Xep:CatchAndPrintStackTrace:OFF',
-  '-Xep:ToStringReturnsNull:OFF',
-  '-Xep:ArrayAsKeyOfSetOrMap:OFF',
-  '-Xep:StaticAssignmentInConstructor:OFF',
-  '-Xep:SelfAssignment:OFF',
-  '-Xep:InvalidPatternSyntax:OFF',
-  '-Xep:MissingFail:OFF',
-  '-Xep:LossyPrimitiveCompare:OFF',
-  '-Xep:ComparableType:OFF',
-  '-Xep:InfiniteRecursion:OFF',
-  '-Xep:MisusedDayOfYear:OFF',
-  '-Xep:FloatingPointAssertionWithinEpsilon:OFF',
+// LUCENE-9650: Errorprone on master/gradle no longer works with JDK-16
+if (includeErrorProne) {
+  tasks.withType(JavaCompile) { task ->
+options.errorprone.disableWarningsInGeneratedCode = true
+options.errorprone.errorproneArgs = [
+// test
+'-Xep:ExtendingJUnitAssert:OFF',
+'-Xep:UseCorrectAssertInTests:OFF',
+'-Xep:DefaultPackage:OFF',
+'-Xep:FloatingPointLiteralPrecision:OFF',
+'-Xep:CatchFail:OFF',
+'-Xep:TryFailThrowable:OFF',
+'-Xep:MathAbsoluteRandom:OFF',
+'-Xep:AssertionFailureIgnored:OFF',
+'-Xep:JUnit4TestNotRun:OFF',
+

Re: [lucene] branch main updated: Always include errorprone dependency, even if we're not checking. This ensures consistent use patterns across JVMs.

2021-03-11 Thread Uwe Schindler
Should we maybe merge this also to Solr?

I know it's not needed, but "if" statements around dependencies look strange to 
me!

I can try to merge it as a proof of concept. 

Uwe

On March 11, 2021 9:27:41 PM UTC, dwe...@apache.org wrote:
>This is an automated email from the ASF dual-hosted git repository.
>
>dweiss pushed a commit to branch main
>in repository https://gitbox.apache.org/repos/asf/lucene.git
>
>
>The following commit(s) were added to refs/heads/main by this push:
>new 8bbcc39  Always include errorprone dependency, even if we're not
>checking. This ensures consistent use patterns across JVMs.
>8bbcc39 is described below
>
>commit 8bbcc395832ccd109794f4b85a71a59a0af2d4f4
>Author: Dawid Weiss 
>AuthorDate: Thu Mar 11 22:27:25 2021 +0100
>
>Always include errorprone dependency, even if we're not checking. This
>ensures consistent use patterns across JVMs.
>---
>gradle/validation/error-prone.gradle | 248
>++-
> 1 file changed, 125 insertions(+), 123 deletions(-)
>
>diff --git a/gradle/validation/error-prone.gradle
>b/gradle/validation/error-prone.gradle
>index 2cec644..edcbaed 100644
>--- a/gradle/validation/error-prone.gradle
>+++ b/gradle/validation/error-prone.gradle
>@@ -15,10 +15,9 @@
>  * limitations under the License.
>  */
> 
>-// LUCENE-9650: Errorprone on master/gradle no longer works with
>JDK-16
>-if (rootProject.runtimeJavaVersion > JavaVersion.VERSION_15) {
>+def includeErrorProne = rootProject.runtimeJavaVersion <=
>JavaVersion.VERSION_15;
>+if (!includeErrorProne) {
>logger.warn("WARNING: errorprone disabled (won't work with JDK
>${rootProject.runtimeJavaVersion})")
>-  return
> }
> 
> allprojects { prj ->
>@@ -29,127 +28,130 @@ allprojects { prj ->
>   errorprone("com.google.errorprone:error_prone_core")
> }
> 
>-tasks.withType(JavaCompile) { task ->
>-  options.errorprone.disableWarningsInGeneratedCode = true
>-  options.errorprone.errorproneArgs = [
>-  // test
>-  '-Xep:ExtendingJUnitAssert:OFF',
>-  '-Xep:UseCorrectAssertInTests:OFF',
>-  '-Xep:DefaultPackage:OFF',
>-  '-Xep:FloatingPointLiteralPrecision:OFF',
>-  '-Xep:CatchFail:OFF',
>-  '-Xep:TryFailThrowable:OFF',
>-  '-Xep:MathAbsoluteRandom:OFF',
>-  '-Xep:AssertionFailureIgnored:OFF',
>-  '-Xep:JUnit4TestNotRun:OFF',
>-  '-Xep:FallThrough:OFF',
>-  '-Xep:CatchAndPrintStackTrace:OFF',
>-  '-Xep:ToStringReturnsNull:OFF',
>-  '-Xep:ArrayAsKeyOfSetOrMap:OFF',
>-  '-Xep:StaticAssignmentInConstructor:OFF',
>-  '-Xep:SelfAssignment:OFF',
>-  '-Xep:InvalidPatternSyntax:OFF',
>-  '-Xep:MissingFail:OFF',
>-  '-Xep:LossyPrimitiveCompare:OFF',
>-  '-Xep:ComparableType:OFF',
>-  '-Xep:InfiniteRecursion:OFF',
>-  '-Xep:MisusedDayOfYear:OFF',
>-  '-Xep:FloatingPointAssertionWithinEpsilon:OFF',
>+// LUCENE-9650: Errorprone on master/gradle no longer works with
>JDK-16
>+if (includeErrorProne) {
>+  tasks.withType(JavaCompile) { task ->
>+options.errorprone.disableWarningsInGeneratedCode = true
>+options.errorprone.errorproneArgs = [
>+// test
>+'-Xep:ExtendingJUnitAssert:OFF',
>+'-Xep:UseCorrectAssertInTests:OFF',
>+'-Xep:DefaultPackage:OFF',
>+'-Xep:FloatingPointLiteralPrecision:OFF',
>+'-Xep:CatchFail:OFF',
>+'-Xep:TryFailThrowable:OFF',
>+'-Xep:MathAbsoluteRandom:OFF',
>+'-Xep:AssertionFailureIgnored:OFF',
>+'-Xep:JUnit4TestNotRun:OFF',
>+'-Xep:FallThrough:OFF',
>+'-Xep:CatchAndPrintStackTrace:OFF',
>+'-Xep:ToStringReturnsNull:OFF',
>+'-Xep:ArrayAsKeyOfSetOrMap:OFF',
>+'-Xep:StaticAssignmentInConstructor:OFF',
>+'-Xep:SelfAssignment:OFF',
>+'-Xep:InvalidPatternSyntax:OFF',
>+'-Xep:MissingFail:OFF',
>+'-Xep:LossyPrimitiveCompare:OFF',
>+'-Xep:ComparableType:OFF',
>+'-Xep:InfiniteRecursion:OFF',
>+'-Xep:MisusedDayOfYear:OFF',
>+'-Xep:FloatingPointAssertionWithinEpsilon:OFF',
> 
>-  '-Xep:ThrowNull:OFF',
>-  '-Xep:StaticGuardedByInstance:OFF',
>-  '-Xep:ArrayHashCode:OFF',
>-  '-Xep:ArrayEquals:OFF',
>-  '-Xep:IdentityBinaryExpression:OFF',
>-  '-Xep:ComplexBooleanConstant:OFF',
>-  '-Xep:ComplexBooleanConstant:OFF',
>-  '-Xep:StreamResourceLeak:OFF',
>-  '-Xep:UnnecessaryLambda:OFF',
>-  '-Xep:ObjectToString:OFF',
>-  '-Xep:URLEqualsHashCode:OFF',
>-  '-Xep:DoubleBraceInitialization:OFF',
>-  '-Xep:ShortCircuitBoolean:OFF',
>-  '-Xep:InputStreamSlowMultibyteRead:OFF',
>-  '-Xep:NonCanonicalType:OFF',
>-  '-Xep:CollectionIncompatibleType:OFF',
>-  

Re: Configurable Postings Block Size?

2021-03-11 Thread Greg Miller
I did end up internally benchmarking a few different FOR block sizes and
wanted to circle back here with the results in case anyone else was
curious. Maybe these results will be useful to others. Or maybe someone
will spot something interesting here that I overlooked. The tl;dr is that
the current block size (128) seems to perform the best for us.

These results were all from an internal benchmarking tool we use against
Amazon's product search engine. For my methodology, I directly modified
ForUtil on my local branch to work on different block sizes (64, 256 and
512 in addition to the default 128). Because ForUtil is common to both
ForDeltaUtil and PForUtil, and there's an interesting interaction between
the PFOR "patched exception" count and the block size (changing only the
block size without changing the exception count skews the ratio), I took the "P"
out of PForUtil for testing (set exceptions to zero to make this just
vanilla FOR). Maybe this methodology is flawed, but it was a start. I'm
highlighting the impacts to index size, red-line queries/sec, GC time and
avg. latency since those were the most interesting. I hope this table is
moderately readable...

| baseline | candidate | index size impact | red-line qps impact | gc time impact | avg. latency impact |
|----------|-----------|-------------------|---------------------|----------------|---------------------|
| 128      | 64        | 0%                | -1.5%               | -4.88%         | +0.96%              |
| 128      | 256       | +0.42%            | -0.73%              | +4.53%         | +0.49%              |
| 256      | 512       | +0.58%            | -1.17%              | +4.67%         | +1.53%              |

It makes sense that increasing the block size would cause the index size to
increase. Especially without any exceptions, the number of bits-per-value
needed in each block would be expected to increase. I'm also not surprised
that GC time increases with block size since decoding each block will
create more garbage.
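
To make the size effect concrete, here is a small self-contained demo (plain
Java, deliberately not Lucene's ForUtil): in pure FOR, every value in a block
is stored at the bit width of that block's largest value, so a bigger block is
more likely to be widened by a single outlier.

import java.util.Random;

public class ForBlockSizeDemo {
  // Bits needed to FOR-encode `values` at a given block size: each block
  // stores all of its values at the bit width of the block's maximum.
  static long bitsUsed(int[] values, int blockSize) {
    long total = 0;
    for (int start = 0; start < values.length; start += blockSize) {
      int end = Math.min(start + blockSize, values.length);
      int max = 1;
      for (int i = start; i < end; i++) max = Math.max(max, values[i]);
      int bitsPerValue = 32 - Integer.numberOfLeadingZeros(max);
      total += (long) bitsPerValue * (end - start);
    }
    return total;
  }

  public static void main(String[] args) {
    Random random = new Random(42);
    int[] deltas = new int[1 << 20];
    for (int i = 0; i < deltas.length; i++) {
      // mostly small doc-id deltas, with a rare large outlier ("exception")
      deltas[i] = random.nextInt(100) == 0 ? 1 + random.nextInt(1 << 16)
                                           : 1 + random.nextInt(32);
    }
    for (int blockSize : new int[] {64, 128, 256, 512}) {
      System.out.println(blockSize + " -> " + bitsUsed(deltas, blockSize) + " bits");
    }
  }
}

Those rare outliers dragging whole blocks to a wider bit width are exactly what
PFOR's patched exceptions are designed to absorb.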

I did find it a bit surprising that our red-line qps and avg. latency both
regressed when shrinking the block size to 64. Our searcher does a lot of
conjunctive querying, so I had hypothesized that we'd see better qps and
latency by shrinking the block size given that we should be doing a lot of
skipping. I'm not sure what's happening here. One speculation is that we
might be "skipping" to adjacent blocks more often than I'd guess. When that
happens, we'd essentially decode a block of 64 then turn right around and
decode the next block of 64. In these cases, I expect it's more efficient
to decode all 128 in one shot. I don't have any metrics to support this
though, so just throwing out a guess. If I do end up instrumenting some
metrics to support or refute this, I'll report back for those curious.

I also tested the impact of keeping the block size at 128 but just removing
the three PFOR "exceptions" (so basically testing PFOR vs. FOR for the same
block size). This was also curious. The index size increased by +0.66%
(makes sense) but red-line qps dropped by 0.58%. I would have expected
red-line qps to slightly improve by moving from PFOR to FOR since the
"patching" should be relatively expensive. Maybe the slightly more
compressed data resulting from PFOR has some other advantages? Dunno.
Another curiosity for the time-being.

Cheers,
-Greg

On Thu, Mar 4, 2021 at 6:22 AM Greg Miller  wrote:

> Thanks Robert. I've created
> https://issues.apache.org/jira/browse/LUCENE-9822 and will attach a patch
> shortly.
>
> Cheers,
> -Greg
>
> On Wed, Mar 3, 2021 at 6:21 PM Robert Muir  wrote:
>
>> I think its a good idea, especially if the assert can be in a good place
>> (ideally a not-so-hot place, e.g. encoding, patching code). asserts have
>> some costs for this kind of code even when disabled, bytecode count limits
>> are used for compiler threshold and stuff.
>>
>> On Wed, Mar 3, 2021 at 9:05 PM Greg Miller  wrote:
>>
>>> So, slightly different topic, maybe, but related so tacking onto this
>>> thread...
>>>
>>> While tweaking ForUtil locally to experiment with different block sizes,
>>> I realized that PForUtil encodes the offset for each "patch" using a single
>>> byte, which implies a strict upper limit of 256 on the BLOCK_SIZE defined
>>> in ForUtil. This essentially silently failed on me when I was trying to set
>>> up blocks of 512. The unit tests caught it since the results were incorrect
>>> after encoding/decoding with PForUtil (hooray!), but it would have been
>>> nice to have an assert somewhere guarding for this to make matters a little
>>> more explicit.
>>>
>>> While I realize that the likelihood of changing the block size in ForUtil
>>> may be low for now, it seems like such a small, easy change to toss an
>>> assert in that it seems useful. What do you all think? Worth opening a
>>> minor issue for this and putting in a one-liner?
>>>
>>> Cheers,
>>> -Greg
>>>
>>> On Mon, 
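
The guard discussed above could be as small as a single assert at encode time
(a hypothetical one-liner; BLOCK_SIZE is the ForUtil constant mentioned in the
thread):

// PForUtil encodes each patch ("exception") offset in a single byte, so it
// cannot address positions in a FOR block larger than 256 values.
assert ForUtil.BLOCK_SIZE <= 256
    : "PForUtil patch offsets are single bytes; BLOCK_SIZE must be <= 256, got "
        + ForUtil.BLOCK_SIZE;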

Re: [JENKINS-EA] Lucene-main-Linux (64bit/jdk-16-ea+36) - Build # 29666 - Failure!

2021-03-11 Thread Dawid Weiss
I changed the code to always include the errorprone dependency -
regardless of whether we
actually run the check or not. This is the simplest way to achieve
consistency here. Thanks for tracking this down, Uwe.


Dawid

On Thu, Mar 11, 2021 at 10:21 PM Dawid Weiss  wrote:
>
> > I think we must put that on some white- er, exclusion-list!
>
> I'll fix it.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org
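
Distilled, the change described above follows this shape (a sketch assembled
from the diff quoted elsewhere in this digest, not a verbatim excerpt of
gradle/validation/error-prone.gradle):

def includeErrorProne = rootProject.runtimeJavaVersion <= JavaVersion.VERSION_15
if (!includeErrorProne) {
  logger.warn("WARNING: errorprone disabled (won't work with JDK ${rootProject.runtimeJavaVersion})")
  // note: no early 'return' anymore -- the dependency below is still declared
}

allprojects { prj ->
  // ...
  dependencies {
    // Always declared, on every JDK: the pin in versions.props therefore
    // always counts as "used", which keeps :checkUnusedConstraints happy.
    errorprone("com.google.errorprone:error_prone_core")
  }

  // LUCENE-9650: errorprone does not work with JDK 16, so only the check
  // itself is gated.
  if (includeErrorProne) {
    tasks.withType(JavaCompile) { task ->
      options.errorprone.disableWarningsInGeneratedCode = true
      options.errorprone.errorproneArgs = [ /* ...the -Xep:...:OFF flags... */ ]
    }
  }
}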



Re: [JENKINS-EA] Lucene-main-Linux (64bit/jdk-16-ea+36) - Build # 29666 - Failure!

2021-03-11 Thread Dawid Weiss
> I think we must put that on some white- er, exclusion-list!

I'll fix it.

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene (unexpected) fsync on existing segments

2021-03-11 Thread Michael Sokolov
Well, it certainly doesn't seem necessary to fsync files that are
unchanged and have already been fsync'ed. Maybe there's an opportunity
to improve it? On the other hand, support for external processes
reading Lucene index files isn't likely to become a feature of Lucene.
You might want to consider using Solr replication to power your
backup?

On Thu, Mar 11, 2021 at 2:52 PM Rahul Goswami  wrote:
>
> Thanks Michael. I thought since this discussion is closer to the code than 
> most discussions on the solr-users list, it seemed like a more appropriate 
> forum. Will be mindful going forward.
> On your point about new segments, I attached a debugger and tried to do a new 
> commit (just pure Solr commit, no backup process running), and the code 
> indeed does fsync on a pre-existing segment file. Hence I was a bit baffled 
> since it challenged my fundamental understanding that segment files once 
> written are immutable, no matter what (unless picked up for a merge of 
> course). Hence I thought of reaching out, in case there are scenarios where 
> this might happen which I might be unaware of.
>
> Thanks,
> Rahul
>
> On Thu, Mar 11, 2021 at 2:38 PM Michael Sokolov  wrote:
>>
>> This isn't a support forum; solr-users@ might be more appropriate. On
>> that list someone might have a better idea about how the replication
>> handler gets its list of files. This would be a good list to try if
>> you wanted to propose a fix for the problem you're having. But since
>> you're here -- it looks to me as if IndexWriter indeed syncs all "new"
>> files in the current segments being committed; look in
>> IndexWriter.startCommit and SegmentInfos.files. Caveat: (1) I'm
>> looking at this code for the first time, and (2) things may have been
>> different in 7.7.2? Sorry I don't know for sure, but are you sure that
>> your backup process is not attempting to copy one of the new files?
>>
>> On Thu, Mar 11, 2021 at 1:35 PM Rahul Goswami  wrote:
>> >
>> > Hello,
>> > Just wanted to follow up one more time to see if this is the right forum 
>> > for my question? Or is this suitable for some other mailing list?
>> >
>> > Best,
>> > Rahul
>> >
>> > On Sat, Mar 6, 2021 at 3:57 PM Rahul Goswami  wrote:
>> >>
>> >> Hello everyone,
>> >> Following up on my question in case anyone has any idea. Why it's 
>> >> important to know this is because I am thinking of allowing the backup 
>> >> process to not hold any lock on the index files, which should allow the 
>> >> fsync during parallel commits. BUT, in case doing an fsync on existing 
>> >> segment files in a saved commit point DOES have an effect, it might 
>> >> render the backed up index in a corrupt state.
>> >>
>> >> Thanks,
>> >> Rahul
>> >>
>> >> On Fri, Mar 5, 2021 at 3:04 PM Rahul Goswami  
>> >> wrote:
>> >>>
>> >>> Hello,
>> >>> We have a process which backs up the index (Solr 7.7.2) on a schedule. 
>> >>> The way we do it is we first save a commit point on the index and then 
>> >>> using Solr's /replication handler, get the list of files in that 
>> >>> generation. After the backup completes, we release the commit point 
>> >>> (Please note that this is a separate backup process outside of Solr and 
>> >>> not the backup command of the /replication handler)
>> >>> The assumption is that while the commit point is saved, no changes 
>> >>> happen to the segment files in the saved generation.
>> >>>
>> >>> Now the issue... The backup process opens the index files in a shared 
>> >>> READ mode, preventing writes. This is causing any parallel commits to 
>> >>> fail as it seems to be complaining about the index files to be locked by 
>> >>> another process(the backup process). Upon debugging, I see that fsync is 
>> >>> being called during commit on already existing segment files which is 
>> >>> not expected. So, my question is, is there any reason for lucene to call 
>> >>> fsync on already existing segment files?
>> >>>
>> >>> The line of code I am referring to is as below:
>> >>> try (final FileChannel file = FileChannel.open(fileToSync, isDir ? 
>> >>> StandardOpenOption.READ : StandardOpenOption.WRITE))
>> >>>
>> >>> in method fsync(Path fileToSync, boolean isDir) of the class file
>> >>>
>> >>> lucene\core\src\java\org\apache\lucene\util\IOUtils.java
>> >>>
>> >>> Thanks,
>> >>> Rahul
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS-EA] Lucene-main-Linux (64bit/jdk-16-ea+36) - Build # 29666 - Failure!

2021-03-11 Thread Uwe Schindler
I think I know why this happens:

Before the split of solr and lucene, the solr part used some annotations of 
errorprone in its build. Because of that, errorprone dependencies were always 
used (downloaded), so Palantir was happy.

Lucene does not use any errorprone JARs in its compile classpath. When JDK 16 
then disables errorprone (see the if statement), Palantir sees no usage of the 
dependency anymore.

I think we must put that on some white- er, exclusion-list!

Dawid, does this sound correct?

-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
https://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Uwe Schindler 
> Sent: Thursday, March 11, 2021 9:34 PM
> To: dev@lucene.apache.org
> Cc: Dawid Weiss 
> Subject: RE: [JENKINS-EA] Lucene-main-Linux (64bit/jdk-16-ea+36) - Build #
> 29666 - Failure!
> 
> Hi,
> 
> it looks like due to the split something went wrong, because with JDK 16 the
> build fails - no change, but suddenly fails:
> 
> FAILURE: Build failed with an exception.
> 
> * What went wrong:
> Execution failed for task ':checkUnusedConstraints'.
> > There are unused pins in your versions.props:
>   [com.google.guava:guava, com.google.errorprone:*,
> com.google.protobuf:protobuf-java, com.github.ben-manes.caffeine:caffeine]
> 
>   Rerun with --fix to remove them.
> 
> * Try:
> Run with --stacktrace option to get the stack trace. Run with --info or 
> --debug
> option to get more log output. Run with --scan to get full insights.
> 
> * Get more help at https://help.gradle.org
> 
> Deprecated Gradle features were used in this build, making it incompatible
> with Gradle 7.0.
> Use '--warning-mode all' to show the individual deprecation warnings.
> See
> https://docs.gradle.org/6.6.1/userguide/command_line_interface.html#sec:command_line_warnings
> 
> BUILD FAILED in 16m 2s
> 
> I see no commit that may cause this, so I have the feeling it's Java 16 only.
> Dawid, do you have an idea?
> 
> -
> Uwe Schindler
> Achterdiek 19, D-28357 Bremen
> https://www.thetaphi.de
> eMail: u...@thetaphi.de
> 
> > -Original Message-
> > From: Policeman Jenkins Server 
> > Sent: Thursday, March 11, 2021 8:37 PM
> > To: bui...@lucene.apache.org
> > Subject: [JENKINS-EA] Lucene-main-Linux (64bit/jdk-16-ea+36) - Build #
> 29666 -
> > Failure!
> > Importance: Low
> >
> > Build: https://jenkins.thetaphi.de/job/Lucene-main-Linux/29666/
> > Java: 64bit/jdk-16-ea+36 -XX:+UseCompressedOops -XX:+UseG1GC
> >
> > All tests passed
> 
> 
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



RE: [JENKINS-EA] Lucene-main-Linux (64bit/jdk-16-ea+36) - Build # 29666 - Failure!

2021-03-11 Thread Uwe Schindler
Hi,

it looks like due to the split something went wrong, because with JDK 16 the 
build fails - no change, but suddenly fails:

FAILURE: Build failed with an exception.

* What went wrong:
Execution failed for task ':checkUnusedConstraints'.
> There are unused pins in your versions.props: 
  [com.google.guava:guava, com.google.errorprone:*, 
com.google.protobuf:protobuf-java, com.github.ben-manes.caffeine:caffeine]
  
  Rerun with --fix to remove them.

* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug 
option to get more log output. Run with --scan to get full insights.

* Get more help at https://help.gradle.org

Deprecated Gradle features were used in this build, making it incompatible with 
Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See 
https://docs.gradle.org/6.6.1/userguide/command_line_interface.html#sec:command_line_warnings

BUILD FAILED in 16m 2s

I see no commit that may cause this, so I have the feeling it's Java 16 only. 
Dawid, do you have an idea?

-
Uwe Schindler
Achterdiek 19, D-28357 Bremen
https://www.thetaphi.de
eMail: u...@thetaphi.de

> -Original Message-
> From: Policeman Jenkins Server 
> Sent: Thursday, March 11, 2021 8:37 PM
> To: bui...@lucene.apache.org
> Subject: [JENKINS-EA] Lucene-main-Linux (64bit/jdk-16-ea+36) - Build # 29666 -
> Failure!
> Importance: Low
> 
> Build: https://jenkins.thetaphi.de/job/Lucene-main-Linux/29666/
> Java: 64bit/jdk-16-ea+36 -XX:+UseCompressedOops -XX:+UseG1GC
> 
> All tests passed


-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene (unexpected) fsync on existing segments

2021-03-11 Thread Rahul Goswami
Thanks Michael. I thought since this discussion is closer to the code than
most discussions on the solr-users list, it seemed like a more appropriate
forum. Will be mindful going forward.
On your point about new segments, I attached a debugger and tried to do a
new commit (just pure Solr commit, no backup process running), and the code
indeed does fsync on a pre-existing segment file. Hence I was a bit baffled
since it challenged my fundamental understanding that segment files once
written are immutable, no matter what (unless picked up for a merge of
course). Hence I thought of reaching out, in case there are scenarios where
this might happen which I might be unaware of.

Thanks,
Rahul

On Thu, Mar 11, 2021 at 2:38 PM Michael Sokolov  wrote:

> This isn't a support forum; solr-users@ might be more appropriate. On
> that list someone might have a better idea about how the replication
> handler gets its list of files. This would be a good list to try if
> you wanted to propose a fix for the problem you're having. But since
> you're here -- it looks to me as if IndexWriter indeed syncs all "new"
> files in the current segments being committed; look in
> IndexWriter.startCommit and SegmentInfos.files. Caveat: (1) I'm
> looking at this code for the first time, and (2) things may have been
> different in 7.7.2? Sorry I don't know for sure, but are you sure that
> your backup process is not attempting to copy one of the new files?
>
> On Thu, Mar 11, 2021 at 1:35 PM Rahul Goswami 
> wrote:
> >
> > Hello,
> > Just wanted to follow up one more time to see if this is the right forum
> for my question? Or is this suitable for some other mailing list?
> >
> > Best,
> > Rahul
> >
> > On Sat, Mar 6, 2021 at 3:57 PM Rahul Goswami 
> wrote:
> >>
> >> Hello everyone,
> >> Following up on my question in case anyone has any idea. Why it's
> important to know this is because I am thinking of allowing the backup
> process to not hold any lock on the index files, which should allow the
> fsync during parallel commits. BUT, in case doing an fsync on existing
> segment files in a saved commit point DOES have an effect, it might render
> the backed up index in a corrupt state.
> >>
> >> Thanks,
> >> Rahul
> >>
> >> On Fri, Mar 5, 2021 at 3:04 PM Rahul Goswami 
> wrote:
> >>>
> >>> Hello,
> >>> We have a process which backs up the index (Solr 7.7.2) on a schedule.
> The way we do it is we first save a commit point on the index and then
> using Solr's /replication handler, get the list of files in that
> generation. After the backup completes, we release the commit point (Please
> note that this is a separate backup process outside of Solr and not the
> backup command of the /replication handler)
> >>> The assumption is that while the commit point is saved, no changes
> happen to the segment files in the saved generation.
> >>>
> >>> Now the issue... The backup process opens the index files in a shared
> READ mode, preventing writes. This is causing any parallel commits to fail
> as it seems to be complaining about the index files to be locked by another
> process(the backup process). Upon debugging, I see that fsync is being
> called during commit on already existing segment files which is not
> expected. So, my question is, is there any reason for lucene to call fsync
> on already existing segment files?
> >>>
> >>> The line of code I am referring to is as below:
> >>> try (final FileChannel file = FileChannel.open(fileToSync, isDir ?
> StandardOpenOption.READ : StandardOpenOption.WRITE))
> >>>
> >>> in method fsync(Path fileToSync, boolean isDir) of the class file
> >>>
> >>> lucene\core\src\java\org\apache\lucene\util\IOUtils.java
> >>>
> >>> Thanks,
> >>> Rahul
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Lucene (unexpected) fsync on existing segments

2021-03-11 Thread Michael Sokolov
This isn't a support forum; solr-users@ might be more appropriate. On
that list someone might have a better idea about how the replication
handler gets its list of files. This would be a good list to try if
you wanted to propose a fix for the problem you're having. But since
you're here -- it looks to me as if IndexWriter indeed syncs all "new"
files in the current segments being committed; look in
IndexWriter.startCommit and SegmentInfos.files. Caveat: (1) I'm
looking at this code for the first time, and (2) things may have been
different in 7.7.2? Sorry I don't know for sure, but are you sure that
your backup process is not attempting to copy one of the new files?

On Thu, Mar 11, 2021 at 1:35 PM Rahul Goswami  wrote:
>
> Hello,
> Just wanted to follow up one more time to see if this is the right form for 
> my question? Or is this suitable for some other mailing list?
>
> Best,
> Rahul
>
> On Sat, Mar 6, 2021 at 3:57 PM Rahul Goswami  wrote:
>>
>> Hello everyone,
>> Following up on my question in case anyone has any idea. Why it's important 
>> to know this is because I am thinking of allowing the backup process to not 
>> hold any lock on the index files, which should allow the fsync during 
>> parallel commits. BUT, in case doing an fsync on existing segment files in a 
>> saved commit point DOES have an effect, it might render the backed up index 
>> in a corrupt state.
>>
>> Thanks,
>> Rahul
>>
>> On Fri, Mar 5, 2021 at 3:04 PM Rahul Goswami  wrote:
>>>
>>> Hello,
>>> We have a process which backs up the index (Solr 7.7.2) on a schedule. The 
>>> way we do it is we first save a commit point on the index and then using 
>>> Solr's /replication handler, get the list of files in that generation. 
>>> After the backup completes, we release the commit point (Please note that 
>>> this is a separate backup process outside of Solr and not the backup 
>>> command of the /replication handler)
>>> The assumption is that while the commit point is saved, no changes happen 
>>> to the segment files in the saved generation.
>>>
>>> Now the issue... The backup process opens the index files in a shared READ 
>>> mode, preventing writes. This is causing any parallel commits to fail as it 
>>> seems to be complaining about the index files to be locked by another 
>>> process(the backup process). Upon debugging, I see that fsync is being 
>>> called during commit on already existing segment files which is not 
>>> expected. So, my question is, is there any reason for lucene to call fsync 
>>> on already existing segment files?
>>>
>>> The line of code I am referring to is as below:
>>> try (final FileChannel file = FileChannel.open(fileToSync, isDir ? 
>>> StandardOpenOption.READ : StandardOpenOption.WRITE))
>>>
>>> in method fsync(Path fileToSync, boolean isDir) of the class file
>>>
>>> lucene\core\src\java\org\apache\lucene\util\IOUtils.java
>>>
>>> Thanks,
>>> Rahul

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Lucene (unexpected) fsync on existing segments

2021-03-11 Thread Rahul Goswami
Hello,
Just wanted to follow up one more time to see if this is the right forum for
my question? Or is this suitable for some other mailing list?

Best,
Rahul

On Sat, Mar 6, 2021 at 3:57 PM Rahul Goswami  wrote:

> Hello everyone,
> Following up on my question in case anyone has any idea. Why it's
> important to know this is because I am thinking of allowing the backup
> process to not hold any lock on the index files, which should allow the
> fsync during parallel commits. BUT, in case doing an fsync on existing
> segment files in a saved commit point DOES have an effect, it might render
> the backed up index in a corrupt state.
>
> Thanks,
> Rahul
>
> On Fri, Mar 5, 2021 at 3:04 PM Rahul Goswami 
> wrote:
>
>> Hello,
>> We have a process which backs up the index (Solr 7.7.2) on a schedule.
>> The way we do it is we first save a commit point on the index and then
>> using Solr's /replication handler, get the list of files in that
>> generation. After the backup completes, we release the commit point (Please
>> note that this is a separate backup process outside of Solr and not
>> the backup command of the /replication handler)
>> The assumption is that while the commit point is saved, no changes happen
>> to the segment files in the saved generation.
>>
>> Now the issue... The backup process opens the index files in a shared
>> READ mode, preventing writes. This is causing any parallel commits to fail
>> as it seems to be complaining about the index files to be locked by another
>> process(the backup process). Upon debugging, I see that fsync is being
>> called during commit on already existing segment files which is not
>> expected. So, my question is, is there any reason for lucene to call fsync
>> on already existing segment files?
>>
>> The line of code I am referring to is as below:
>> try (final FileChannel file = FileChannel.open(fileToSync, isDir ?
>> StandardOpenOption.READ : StandardOpenOption.WRITE))
>>
>> in method fsync(Path fileToSync, boolean isDir) of the class file
>>
>> lucene\core\src\java\org\apache\lucene\util\IOUtils.java
>>
>> Thanks,
>> Rahul
>>
>


Re: Welcome Bruno to the Apache Lucene PMC

2021-03-11 Thread Tomás Fernández Löbbe
Welcome Bruno!

On Thu, Mar 11, 2021 at 7:59 AM Dawid Weiss  wrote:

> Welcome, Bruno!
>
> On Thu, Mar 11, 2021 at 4:29 PM Gus Heck  wrote:
> >
> > Welcome :)
> >
> > On Thu, Mar 11, 2021 at 9:58 AM Houston Putman 
> wrote:
> >>
> >> Congrats and welcome Bruno!
> >>
> >> On Thu, Mar 11, 2021 at 8:32 AM David Smiley 
> wrote:
> >>>
> >>> Welcome Bruno!
> >>>
> >>> ~ David Smiley
> >>> Apache Lucene/Solr Search Developer
> >>> http://www.linkedin.com/in/davidwsmiley
> >>>
> >>>
> >>> On Wed, Mar 10, 2021 at 7:56 PM Mike Drob  wrote:
> 
>  I am pleased to announce that Bruno has accepted an invitation to
> join the Lucene PMC!
> 
>  Congratulations, and welcome aboard!
> 
>  Mike
> >
> >
> >
> > --
> > http://www.needhamsoftware.com (work)
> > http://www.the111shift.com (play)
>
> -
> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
> For additional commands, e-mail: dev-h...@lucene.apache.org
>
>


Re: Welcome Bruno to the Apache Lucene PMC

2021-03-11 Thread Dawid Weiss
Welcome, Bruno!

On Thu, Mar 11, 2021 at 4:29 PM Gus Heck  wrote:
>
> Welcome :)
>
> On Thu, Mar 11, 2021 at 9:58 AM Houston Putman  
> wrote:
>>
>> Congrats and welcome Bruno!
>>
>> On Thu, Mar 11, 2021 at 8:32 AM David Smiley  wrote:
>>>
>>> Welcome Bruno!
>>>
>>> ~ David Smiley
>>> Apache Lucene/Solr Search Developer
>>> http://www.linkedin.com/in/davidwsmiley
>>>
>>>
>>> On Wed, Mar 10, 2021 at 7:56 PM Mike Drob  wrote:

 I am pleased to announce that Bruno has accepted an invitation to join the 
 Lucene PMC!

 Congratulations, and welcome aboard!

 Mike
>
>
>
> --
> http://www.needhamsoftware.com (work)
> http://www.the111shift.com (play)

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org



Re: Welcome Bruno to the Apache Lucene PMC

2021-03-11 Thread Gus Heck
Welcome :)

On Thu, Mar 11, 2021 at 9:58 AM Houston Putman 
wrote:

> Congrats and welcome Bruno!
>
> On Thu, Mar 11, 2021 at 8:32 AM David Smiley  wrote:
>
>> Welcome Bruno!
>>
>> ~ David Smiley
>> Apache Lucene/Solr Search Developer
>> http://www.linkedin.com/in/davidwsmiley
>>
>>
>> On Wed, Mar 10, 2021 at 7:56 PM Mike Drob  wrote:
>>
>>> I am pleased to announce that Bruno has accepted an invitation to join
>>> the Lucene PMC!
>>>
>>> Congratulations, and welcome aboard!
>>>
>>> Mike
>>>
>>

-- 
http://www.needhamsoftware.com (work)
http://www.the111shift.com (play)


Re: Welcome Bruno to the Apache Lucene PMC

2021-03-11 Thread Houston Putman
Congrats and welcome Bruno!

On Thu, Mar 11, 2021 at 8:32 AM David Smiley  wrote:

> Welcome Bruno!
>
> ~ David Smiley
> Apache Lucene/Solr Search Developer
> http://www.linkedin.com/in/davidwsmiley
>
>
> On Wed, Mar 10, 2021 at 7:56 PM Mike Drob  wrote:
>
>> I am pleased to announce that Bruno has accepted an invitation to join
>> the Lucene PMC!
>>
>> Congratulations, and welcome aboard!
>>
>> Mike
>>
>


Re: Welcome Bruno to the Apache Lucene PMC

2021-03-11 Thread David Smiley
Welcome Bruno!

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


On Wed, Mar 10, 2021 at 7:56 PM Mike Drob  wrote:

> I am pleased to announce that Bruno has accepted an invitation to join the
> Lucene PMC!
>
> Congratulations, and welcome aboard!
>
> Mike
>


Re: Welcome Bruno to the Apache Lucene PMC

2021-03-11 Thread Michael McCandless
Welcome Bruno!

Mike McCandless

http://blog.mikemccandless.com


On Wed, Mar 10, 2021 at 7:56 PM Mike Drob  wrote:

> I am pleased to announce that Bruno has accepted an invitation to join the
> Lucene PMC!
>
> Congratulations, and welcome aboard!
>
> Mike
>


Re: Lucene and Solr repositories mirrored, main branch ready

2021-03-11 Thread Bruno Roustant
Thank you Dawid!

On Thu, Mar 11, 2021 at 02:28, Michael Sokolov  wrote:

> Big thank you, Dawid, and Jan and others for taking the bull by the horns!
>
> On Wed, Mar 10, 2021, 3:14 PM Dawid Weiss  wrote:
>
>> > Just tested out the main branch of the new repo, packaged, started,
>> loaded data, searched from the UI. All looks great.
>>
>> Thank you, great to know!
>>
>> Dawid
>>
>> -
>> To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
>> For additional commands, e-mail: dev-h...@lucene.apache.org
>>
>>


Re: Welcome Bruno to the Apache Lucene PMC

2021-03-11 Thread Christine Poerschke (BLOOMBERG/ LONDON)
Welcome Bruno!

From: dev@lucene.apache.org At: 03/11/21 00:56:33 To: dev@lucene.apache.org
Cc: bruno.roust...@gmail.com
Subject: Welcome Bruno to the Apache Lucene PMC

I am pleased to announce that Bruno has accepted an invitation to join the 
Lucene PMC!

Congratulations, and welcome aboard!

Mike



Re: Branch cleaning/archiving

2021-03-11 Thread Dawid Weiss
> I thought a simple git fetch would detect deleted branches?

I don't think fetch purges remote refs... Unless something has changed
from the last time I did such an operation.

I like your e-mail.

> The removal of these branches will happen on Monday March 15th.
> After the removal you will need to run "git remote prune origin"

This is an alternative that runs a fetch, followed by a prune.

git fetch --prune origin

D.
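
As an aside, pruning can also be made the default so that every fetch removes
stale remote refs (a standard git configuration option):

git config --global fetch.prune true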

-
To unsubscribe, e-mail: dev-unsubscr...@lucene.apache.org
For additional commands, e-mail: dev-h...@lucene.apache.org