+1 (binding)
SUCCESS! [1:09:07.601567]
Also, fired up the bin/solr -e cloud example and ran through the Admin UI
Thanks Mayya!
On Tue, Jun 15, 2021 at 11:48 AM Uwe Schindler wrote:
>
> Hi again, short update to my previous mail:
>
> New maven metadata artifacts work fine, I was able to build
+1 (binding)
SUCCESS! [1:06:13.636144]
Also tried basic indexing and querying and it looks good!
On Tue, Jun 15, 2021 at 10:54 AM Uwe Schindler wrote:
> Hi again, short update to my previous mail:
>
> New maven metadata artifacts work fine, I was able to build a Solr plugin
> of a customer
Hi again, short update to my previous mail:
New maven metadata artifacts work fine, I was able to build a Solr plugin of a
customer after adding 2 repos to its POM. It then downloaded the internet and
Solr and the plugin built successfully (including tests using the Solr test
framework, indirectly
+1
SUCCESS! [1:24:00.227990]
Policeman Jenkins tested it for me:
https://jenkins.thetaphi.de/job/Lucene-Solr-Release-Tester/36/console
I did not have much time to do my own checks of the tar.gz artifacts, so I trust
the technical workflow.
I only checked the maven folders and verified that the new POM
+1 SUCCESS! [1:32:12.04043]
On Tue, Jun 15, 2021 at 8:27 PM Adrien Grand wrote:
>
> +1 SUCCESS! [1:36:12.056443]
>
> On Tue, Jun 15, 2021 at 4:26 PM Mayya Sharipova
> wrote:
>>
>> Thanks Robert for such detailed investigations.
>>
>> Lucene-Solr-SmokeRelease-8.9 also had 2 recent failures.
+1 SUCCESS! [1:36:12.056443]
On Tue, Jun 15, 2021 at 4:26 PM Mayya Sharipova
wrote:
> Thanks Robert for such detailed investigations.
>
> Lucene-Solr-SmokeRelease-8.9 also had 2 recent failures. Failures are not
> reproducible on my local machine.
>
> build #13: ant test
Thanks Robert for such detailed investigations.
Lucene-Solr-SmokeRelease-8.9 also had 2 recent failures. Failures are not
reproducible on my local machine.
build #13: ant test -Dtestcase=SolrCloudReportersTest
-Dtests.method=testExplicitConfiguration -Dtests.seed=60FEAB39C2B47705
Well it definitely wouldn't be as useful as changing to a
postings-style approach. That would bring a lot more benefits to
general cases, e.g. use of PFOR and so on.
But it is also easier to implement right now, to accelerate cases
where fields are sorted, without hurting other things.
On Tue,
SegmentWriteState has a reference to SegmentInfos which itself has the
index sort, so I believe that it would be possible.
I wonder how useful it would be in practice. E.g. in the Elasticsearch
case, even though we store lots of time-based data and have been looking
into index sorting for
Glad it helped. :)
On Tue, Jun 15, 2021 at 3:28 PM Greg Miller wrote:
> Thanks for this explanation Adrien! I'd been wondering about this a bit
> myself since seeing that DrillSideways also implements a TAAT approach (in
> addition to a doc-at-a-time approach). This really helps clear that up.
+1 to that idea. Maybe a shorter-term possibility would be to only do
this compression on a field when the user has explicitly configured
index sorting on the field (can we hackishly peek at it and tell?)
On Tue, Jun 15, 2021 at 9:04 AM Adrien Grand wrote:
>
> I believe that this sort of
Thanks for this explanation Adrien! I'd been wondering about this a bit
myself since seeing that DrillSideways also implements a TAAT approach (in
addition to a doc-at-a-time approach). This really helps clear that up.
Appreciate you taking the time to explain!
Cheers,
-Greg
On Mon, Jun 14, 2021
I believe that this sort of optimization would be more effective and robust
if we made doc values look more like postings, with relatively small blocks
of values that would get compressed independently and decompressed in bulk.
This way, we wouldn't require data to be sorted across entire segments
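The block idea can be illustrated with a small self-contained sketch (a hypothetical `BlockDeltaCodec` class, not the actual Lucene80 codec; a real implementation would bit-pack or PFOR-compress the per-block deltas rather than store raw longs): values are split into small fixed-size blocks, and each block is encoded independently as a base value plus deltas, so a reader can decompress one block in bulk without touching the rest of the segment.

```java
import java.util.ArrayList;
import java.util.List;

public class BlockDeltaCodec {
    static final int BLOCK_SIZE = 128; // postings-style small blocks

    /** One independently decodable block: a base value plus deltas from it. */
    static final class Block {
        final long base;
        final long[] deltas;
        Block(long base, long[] deltas) { this.base = base; this.deltas = deltas; }
    }

    /** Encode values into independent blocks; no block depends on any other. */
    static List<Block> encode(long[] values) {
        List<Block> blocks = new ArrayList<>();
        for (int start = 0; start < values.length; start += BLOCK_SIZE) {
            int len = Math.min(BLOCK_SIZE, values.length - start);
            long min = Long.MAX_VALUE;
            for (int i = 0; i < len; i++) min = Math.min(min, values[start + i]);
            long[] deltas = new long[len];
            for (int i = 0; i < len; i++) deltas[i] = values[start + i] - min;
            // Deltas are small whenever the block is locally close to sorted,
            // so sortedness only needs to hold within a block, not segment-wide.
            blocks.add(new Block(min, deltas));
        }
        return blocks;
    }

    /** Decode a single block in bulk, without reading any other block. */
    static long[] decode(Block b) {
        long[] out = new long[b.deltas.length];
        for (int i = 0; i < out.length; i++) out[i] = b.base + b.deltas[i];
        return out;
    }
}
```

This is why the approach helps general cases too: locally clustered values compress well even when the segment as a whole is unsorted.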
We did this monotonic detection/compression in the past, but had to remove it
because it caused too many slowdowns.
I think it easily causes too much type pollution: for example, for a
typical large index with an unsorted docvalues field, big segments
won't be sorted, tiny segments
Hi,
In class Lucene80DocValuesConsumer#writeValues(FieldInfo field,
DocValuesProducer valuesProducer), all numeric doc values are visited to
compute the GCD. In the same pass, we could check whether all values are
sorted; if so, maybe we could use DirectMonotonicWriter to store them.
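The suggestion can be sketched in plain Java (a hedged illustration, not Lucene's actual Lucene80DocValuesConsumer; the `Stats` holder and `scan` method are hypothetical names): the pass that already computes the GCD can track monotonicity essentially for free, and a sorted result would let the consumer hand the values to a monotonic encoder such as DirectMonotonicWriter instead of the general one.

```java
import java.math.BigInteger;

public class SortedDetection {
    /** Result of a single pass over the values: GCD and a sorted flag. */
    static final class Stats {
        final long gcd;
        final boolean sorted;
        Stats(long gcd, boolean sorted) { this.gcd = gcd; this.sorted = sorted; }
    }

    /** One pass computes both the GCD and whether values are non-decreasing. */
    static Stats scan(long[] values) {
        long gcd = 0;
        boolean sorted = true;
        long prev = Long.MIN_VALUE;
        for (long v : values) {
            gcd = BigInteger.valueOf(gcd).gcd(BigInteger.valueOf(v)).longValueExact();
            if (v < prev) sorted = false; // one comparison per value, no extra pass
            prev = v;
        }
        // If sorted, a monotonic writer (DirectMonotonicWriter in Lucene) could
        // store slope + small residuals instead of the general encoding.
        return new Stats(gcd, sorted);
    }
}
```

In the real consumer the values arrive through a DocValuesProducer iterator rather than an array, but the single-pass bookkeeping is the same.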