Total Collection Size in Solr 7

2018-06-25 Thread Aroop Ganguly
Hi Team

I am not sure how to ascertain the total size of a collection via the Solr UI 
on a Solr 7+ installation.
The collection is sharded and replicated heavily, so it's tedious to have to look 
at each core and add up the sizes to work out the size of the entire collection.

Is there an API or UI section where this info can be obtained?

On the flip side, it would be great to have a consolidated view of the 
collection size in GB along with the individual shard sizes. (Should this be a 
Jira? :)) 
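
For now, the closest thing I can think of is scripting it against the Metrics
API. A rough sketch, assuming a 7.x build that exposes the INDEX.sizeInBytes
core metric (run it against each node and add the per-node results together):

# per-node total of core index sizes, in bytes
# (the jq expression assumes the map-style JSON output; adjust if your
#  version returns the metrics as a flat list)
curl -s "http://localhost:8983/solr/admin/metrics?group=core&prefix=INDEX.sizeInBytes&wt=json" \
  | jq '[.metrics[] | .["INDEX.sizeInBytes"]] | add'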

Thanks
Aroop

Re: Solr Suggest Component and OOM

2018-06-25 Thread Ratnadeep Rakshit
The site_address field holds all the addresses in the United States. The idea
is to build something similar to Google Places autosuggest.

Here's an example query:
curl "http://localhost/solr/addressbook/suggest?suggest.q=1054%20club&wt=json"

Response:

{
  "responseHeader": {
    "status": 0,
    "QTime": 3125,
    "params": {
      "suggest.q": "1054 club",
      "wt": "json"
    }
  },
  "suggest": {
    "mySuggester2": {
      "1054 club": {
        "numFound": 3,
        "suggestions": [{
          "term": "1054 null N COUNTRY CLUB null BLVD null STOCKTON CA 95204 5008",
          "weight": 0,
          "payload": "0023865882|06077|37.970769,-121.310433"
        }, {
          "term": "1054 null E HERITAGE CLUB null CIR null DELRAY BEACH FL 33483 3482",
          "weight": 0,
          "payload": "0117190535|12099|26.445485,-80.069336"
        }, {
          "term": "1054 null null CORAL CLUB null DR 1054 CORAL SPRINGS FL 33071 5657",
          "weight": 0,
          "payload": "0111342342|12011|26.243918,-80.267577"
        }]
      }
    },
    "mySuggester1": {
      "1054 club": {
        "numFound": 0,
        "suggestions": []
      }
    }
  }
}

Now, when I build the suggesters with 25M address records in the addressbook core,
the process runs smoothly. Heap utilization peaks at about 56% of the 20GB
allotted to Solr.
I am not very experienced in measuring Solr performance, but it looks like
when I increase the record count beyond 25M in the core, the build process
fails. Querying the suggester still works.
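
For reference, the build is triggered roughly like this (a sketch; handler
path and dictionary names as in the response above, host assumed to be the
same as in the earlier query):

curl "http://localhost/solr/addressbook/suggest?suggest=true&suggest.dictionary=mySuggester1&suggest.dictionary=mySuggester2&suggest.build=true"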

Did that answer your questions correctly?

On Tue, Jun 12, 2018 at 3:17 PM, Alessandro Benedetti 
wrote:

> Hi,
> first of all, the two suggesters you are using are based on different
> data structures (with different memory utilisation):
>
> - FuzzyLookupFactory -> FST (in memory and stored binary on disk)
> - AnalyzingInfixLookupFactory -> Auxiliary Lucene Index
>
> Both data structures should be very memory efficient (both in building
> and in storage).
> What is the cardinality of the fields you are building suggestions from
> (site_address and site_address_other)?
> What is the memory situation in Solr when you start the suggester build?
> You are allocating much more memory to the Solr JVM process than you are
> leaving to the OS (so in your situation the OS cache cannot hold the
> entire index, which would be the ideal scenario).
>
> I would recommend putting some monitoring in place (there are plenty of
> open-source tools for that).
>
> Regards
>
>
>
> -
> ---
> Alessandro Benedetti
> Search Consultant, R&D Software Engineer, Director
> Sease Ltd. - www.sease.io
> --
> Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html
>


Re: Trouble using the MIGRATE command in the collections API on solr 7.3.1

2018-06-25 Thread Matthew Faw
Basically, we have an environment with a large number of solr nodes (~100) 
and an environment with fewer solr nodes (~10).  In the “big” environment, we 
have lots of smaller cores (around 3 GB), and in the smaller environment, we 
have fewer, bigger cores (around 30 GB).  We transfer data between these two 
environments around once per month or so.  We’ve traditionally followed the 
model of 1 core per solr node, so we typically reindex solr when we move 
between environments, which typically takes 2 days, whereas solr’s BACKUP 
and RESTORE APIs each typically take a few minutes to run.  I’m planning to 
investigate performance differences between having several small cores on a 
single solr node vs having one big solr core on each node.  In the meantime, 
however, I was interested to see if it would be possible, at least in the short 
term, to replace our current procedure with the following (example calls 
sketched after the list):
1) BACKUP solr collection in the big environment
2) RESTORE the collection in the small environment
3) MIGRATE the collection in the small environment to another collection in the 
same environment with 1 shard per solr node.
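
A sketch of the calls I have in mind for steps 1 and 2 (placeholder node,
collection and backup names; the location must be a path visible to all nodes):

# 1) in the big environment
curl "http://bignode:8983/solr/admin/collections?action=BACKUP&name=mycoll_bk&collection=mycoll&location=/mnt/solr-backups"

# 2) in the small environment
curl "http://smallnode:8983/solr/admin/collections?action=RESTORE&name=mycoll_bk&collection=mycoll&location=/mnt/solr-backups"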

I’ve also heard mention of an API to combine shards 
(https://github.com/bloomreach/solrcloud-rebalance-api and 
https://issues.apache.org/jira/browse/SOLR-9241).  There doesn’t seem to have 
been any development on integrating this work into the official solr 
distribution, but it also looks like it would probably meet my requirements.

Let me know if anything is still unclear.

Thanks,
Matthew

On 6/25/18, 1:38 PM, "Shawn Heisey"  wrote:

On 6/22/2018 12:14 PM, Matthew Faw wrote:
> So I’ve tried running MIGRATE on solr 7.3.1 using the following 
parameters:
> 1) “split.key=”
> 2) “split.key=!”
> 3) “split.key=DERP_”
> 4) “split.key=DERP/0!”
>
> For 1-3, I am seeing the same ERRORs you see.  For 4, I do not see any 
ERRORs.
>
> Interestingly, I’m seeing this WARN message for all 4 scenarios:
>
> org.apache.solr.common.SolrException: SolrCore not 
found:split_shard2_temp_shard2_shard1_replica_n3 in [derp_shard1_replica_n1, 
derp_shard2_replica_n6, herp_shard1_replica_n1, herp_shard2_replica_n6]

I saw something similar as well.  I think the way that MIGRATE works
internally is to copy data from the source collection to a temporary
index, and then from there to the final target.

I think I've figured out why split.key is required.  The entire reason
the MIGRATE api was created was for people who use route keys to split
one of those route keys into a separate collection.  It does not appear
to have been intended for handling everything in a collection, but only
for splitting indexes where such keys are in use.


https://issues.apache.org/jira/browse/SOLR-5308

With id values like DERP_3e5bc047f13f6c562f985f00 you're not using
routing prefix keys, so I think you probably aren't able to use the
migrate API at all.

So let's back up a couple of steps so we can find you a workable
solution.  Is this a one-time migration that you're trying to do, or are
you expecting to do this frequently?  What requirement are you trying to
satisfy by copying data from one collection to another, and what are the
details of the requirement?

Thanks,
Shawn





Script for requesting recovery on down replicas

2018-06-25 Thread Walter Underwood
We have a high update rate collection with a lot of replicas. Sometimes after a 
config reload, some of the replicas go down (brown in the cloud graph). I got 
really tired of fixing them by hand in a 40-node cluster.

I wrote a script to mine those out of clusterstatus and send a request recovery 
command for each one. You’ll need “jq” to run this. I’m putting it in the body 
because attachments are stripped on this list. I named it 
“request-recovery.sh”. The hardest part of this was dealing with arrays in bash.

=
#!/bin/bash

cluster=$1

if [ -z "$cluster" ]
then
echo "Must provide a hostname from Solr Cloud cluster as the first argument"
echo "usage: ./request-recovery.sh solr-cloud.mydomain.com collection_name"
exit 1
fi

collection=$2

if [ -z "$collection" ]
then
echo "Must provide a Solr collection name as the second argument"
echo "usage: ./request-recovery.sh solr-cloud.mydomain.com collection_name"
exit 1
fi

# Fetch the hostnames (node_names) and core names of the cores in $collection
# which are in the "down" state. Store those in two parallel arrays.
# We create arrays by wrapping the curl calls in ().

down_node_names=(`curl -s "http://${cluster}:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json" | jq -r ".cluster.collections.$collection.shards[].replicas[] | select(.state==\"down\") | .node_name"`)
down_cores=(`curl -s "http://${cluster}:8983/solr/admin/collections?action=CLUSTERSTATUS&wt=json" | jq -r ".cluster.collections.$collection.shards[].replicas[] | select(.state==\"down\") | .core"`)

echo "${#down_node_names[@]} cores are down in collection $collection"

# ${!array[@]} is the list of all the indexes set in the array
for i in ${!down_node_names[@]}
do
echo "Requesting recovery for core ${down_cores[i]} on node 
${down_node_names[i]}"
url_frag=`echo "${down_node_names[i]}" | tr _ /` 
curl 
"http://$url_frag/admin/cores?action=REQUESTRECOVERY=${down_cores[i]}=json;
done
=
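
A typical invocation, matching the usage message in the script (hostname and
collection name are placeholders):

chmod +x request-recovery.sh
./request-recovery.sh solr-cloud.mydomain.com my_collection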

wunder
Walter Underwood
wun...@wunderwood.org
http://observer.wunderwood.org/  (my blog)



Re: Suggestions for debugging performance issue

2018-06-25 Thread Chris Troullis
FYI to all, just as an update, we rebuilt the index in question from
scratch for a second time this weekend and the problem went away on 1 node,
but we were still seeing it on the other node. After restarting the
problematic node, the problem went away. Still makes me a little uneasy as
we weren't able to determine the cause, but at least we are back to normal
query times now.

Chris

On Fri, Jun 15, 2018 at 8:06 AM, Chris Troullis 
wrote:

> Thanks Shawn,
>
> As mentioned previously, we are hard committing every 60 seconds, which we
> have been doing for years, and have had no issues until enabling CDCR. We
> have never seen large tlog sizes before, and even manually issuing a hard
> commit to the collection does not reduce the size of the tlogs. I believe
> this is because when using the CDCRUpdateLog the tlogs are not purged until
> the docs have been replicated over. Anyway, since we manually purged the
> tlogs they seem to now be staying at an acceptable size, so I don't think
> that is the cause. The documents are not abnormally large, maybe ~20
> string/numeric fields with simple whitespace tokenization.
>
> To answer your questions:
>
> -Solr version: 7.2.1
> -What OS vendor and version Solr is running on: CentOS 6
> -Total document count on the server (counting all index cores): 13
> collections totaling ~60 million docs
> -Total index size on the server (counting all cores): ~60GB
> -What the total of all Solr heaps on the server is - 16GB heap (we had to
> increase for CDCR because it was using a lot more heap).
> -Whether there is software other than Solr on the server - No
> -How much total memory the server has installed - 64 GB
>
> All of this has been consistent for multiple years across multiple Solr
> versions and we have only started seeing this issue once we started using
> the CDCRUpdateLog and CDCR, hence why that is the only real thing we can
> point to. And again, the issue is only affecting 1 of the 13 collections on
> the server, so if it was hardware/heap/GC related then I would think we
> would be seeing it for every collection, not just one, as they all share
> the same resources.
>
> I will take a look at the GC logs, but I don't think that is the cause.
> The consistent nature of the slow performance doesn't really point to GC
> issues, and we have profiling set up in New Relic and it does not show any
> long/frequent GC pauses.
>
> We are going to try and rebuild the collection from scratch again this
> weekend as that has solved the issue in some lower environments, although
> it's not really consistent. At this point it's all we can think of to do.
>
> Thanks,
>
> Chris
>
>
> On Thu, Jun 14, 2018 at 6:23 PM, Shawn Heisey  wrote:
>
>> On 6/12/2018 12:06 PM, Chris Troullis wrote:
>> > The issue we are seeing is with 1 collection in particular, after we
>> set up
>> > CDCR, we are getting extremely slow response times when retrieving
>> > documents. Debugging the query shows QTime is almost nothing, but the
>> > overall responseTime is like 5x what it should be. The problem is
>> > exacerbated by larger result sizes. IE retrieving 25 results is almost
>> > normal, but 200 results is way slower than normal. I can run the exact
>> same
>> > query multiple times in a row (so everything should be cached), and I
>> still
>> > see response times way higher than another environment that is not using
>> > CDCR. It doesn't seem to matter if CDCR is enabled or disabled, just
>> that
>> > we are using the CDCRUpdateLog. The problem started happening even
>> before
>> > we enabled CDCR.
>> >
>> > In a lower environment we noticed that the transaction logs were huge
>> > (multiple gigs), so we tried stopping solr and deleting the tlogs then
>> > restarting, and that seemed to fix the performance issue. We tried the
>> same
>> > thing in production the other day but it had no effect, so now I don't
>> know
>> > if it was a coincidence or not.
>>
>> There is one other cause besides CDCR buffering that I know of for huge
>> transaction logs, and it has nothing to do with CDCR:  A lack of hard
>> commits.  It is strongly recommended to have autoCommit set to a
>> reasonably short interval (about a minute in my opinion, but 15 seconds
>> is VERY common).  Most of the time openSearcher should be set to false
>> in the autoCommit config, and other mechanisms (which might include
>> autoSoftCommit) should be used for change visibility.  The example
>> autoCommit settings might seem superfluous because they don't affect
>> what's searchable, but it is actually a very important configuration to
>> keep.
>>
>> Are the docs in this collection really big, by chance?
>>
>> As I went through previous threads you've started on the mailing list, I
>> have noticed that none of your messages provided some details that would
>> be useful for looking into performance problems:
>>
>>  * What OS vendor and version Solr is running on.
>>  * Total document count on the server (counting all index cores).
>>  * 

Re: Trouble using the MIGRATE command in the collections API on solr 7.3.1

2018-06-25 Thread Shawn Heisey
On 6/22/2018 12:14 PM, Matthew Faw wrote:
> So I’ve tried running MIGRATE on solr 7.3.1 using the following parameters:
> 1) “split.key=”
> 2) “split.key=!”
> 3) “split.key=DERP_”
> 4) “split.key=DERP/0!”
>
> For 1-3, I am seeing the same ERRORs you see.  For 4, I do not see any ERRORs.
>
> Interestingly, I’m seeing this WARN message for all 4 scenarios:
>
> org.apache.solr.common.SolrException: SolrCore not 
> found:split_shard2_temp_shard2_shard1_replica_n3 in [derp_shard1_replica_n1, 
> derp_shard2_replica_n6, herp_shard1_replica_n1, herp_shard2_replica_n6]

I saw something similar as well.  I think the way that MIGRATE works
internally is to copy data from the source collection to a temporary
index, and then from there to the final target.

I think I've figured out why split.key is required.  The entire reason
the MIGRATE api was created was for people who use route keys to split
one of those route keys into a separate collection.  It does not appear
to have been intended for handling everything in a collection, but only
for splitting indexes where such keys are in use.

https://issues.apache.org/jira/browse/SOLR-5308

With id values like DERP_3e5bc047f13f6c562f985f00 you're not using
routing prefix keys, so I think you probably aren't able to use the
migrate API at all.
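
To make that concrete: with the compositeId router you would index ids like
DERP!3e5bc047f13f6c562f985f00 (routing prefix, then '!', then the unique part),
and only then would a call along these lines make sense. A sketch with
placeholder collection names (the target collection must already exist):

curl "http://localhost:8983/solr/admin/collections?action=MIGRATE&collection=derp&target.collection=derp_only&split.key=DERP!"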

So let's back up a couple of steps so we can find you a workable
solution.  Is this a one-time migration that you're trying to do, or are
you expecting to do this frequently?  What requirement are you trying to
satisfy by copying data from one collection to another, and what are the
details of the requirement?

Thanks,
Shawn



Re: Solr Default query parser

2018-06-25 Thread Kamal Kishore Aggarwal
Hi Shawn,

Thanks for the reply.

If "lucene" is the default query parser, then how can we specify Standard
Query Parser(QP) in the query.

Dismax QP can be specified by defType=dismax and Extended Dismax Qp by
defType=edismax, how about for declaration of Standard QP.
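
For example, would it just be something like this (a guess, with a placeholder
collection and field)?

curl "http://localhost:8983/solr/mycollection/select?q=title:solr&defType=lucene"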

Regards
Kamal

On Wed, Jun 6, 2018 at 9:41 PM, Shawn Heisey  wrote:

> On 6/6/2018 9:52 AM, Kamal Kishore Aggarwal wrote:
> >> What is the default query parser (QP) for solr.
> >>
> >> While I was reading about this, I came across two links which looks
> >> ambiguous to me. It's not clear to me whether Standard is the default
> QP or
> >> Lucene is the default QP or they are same. Below is the screenshot and
> >> links which are confusing me.
>
> The default query parser in Solr has the name "lucene".  This query
> parser, which is part of Solr, deals with Lucene query syntax.
>
> The most recent documentation states this clearly right after the table
> of contents:
>
> https://lucene.apache.org/solr/guide/7_3/the-standard-query-parser.html
>
> It is highly unlikely that the 6.6 documentation will receive any
> changes, unless serious errors are found in it.  The omission of this
> piece of information will not be seen as a serious error.
>
> Thanks,
> Shawn
>
>


Re: SolrCloud Large Cluster Performance Issues

2018-06-25 Thread Shawn Heisey

On 6/24/2018 7:38 PM, 苗海泉 wrote:

Hello, everyone, we encountered two solr problems and hoped to get help.
Our data volume is very large, 24.5TB a day, and the number of records is
110 billion. We originally used 49 solr nodes. Because of insufficient
storage, we expanded to 100. For a solr cluster composed of multiple
machines, we found that the performance of 60 solrclouds and the overall
performance of 49 solr clusters are the same. How do we optimize it? Now
the cluster speed is 1.5 million on average per second. Why is that?


I can't really tell what your question is.  You've asked how to optimize 
something, but it's not clear exactly what you want to optimize.  You 
also asked about a cluster speed of 1.5 million per second, but you 
haven't indicated what is happening at that rate.   1.5 million *what* 
per second?  If you're talking about queries per second or documents 
indexed per second, you're already getting better performance than I 
would have expected.


We'll need a lot more detail about exactly what kind of problems you've 
encountered and what you think *should* be happening that isn't happening.



The second problem solrhome can only specify a solrhome, but now the disk
is divided into two directories, another solr can be stored using hdfs, but
the overall indexing performance is not up to standard, how to do, thank
you for your attention.


I would use symlinks to point some of the index cores to the second 
directory. It is possible to reduce this to one symlink rather than one 
for each core.  Moving things to the second location will likely be a 
manual process.
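
A rough sketch of that approach on Linux, with made-up paths and a single core
(stop Solr first, however you normally stop it, and adapt to your actual core
directories):

# move one core's data to the second disk and leave a symlink behind
mv /var/solr/data/mycore_shard1_replica_n1/data /disk2/solrdata/mycore_shard1_replica_n1
ln -s /disk2/solrdata/mycore_shard1_replica_n1 /var/solr/data/mycore_shard1_replica_n1/data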


If you're on Windows, things are a little bit different, but NTFS does 
have a feature that offers very similar functionality to symlinks:


https://en.wikipedia.org/wiki/NTFS_junction_point

Thanks,
Shawn



Re: Using LUWAK in SOLR

2018-06-25 Thread SOLR4189
OK. In case somebody needs it, I found a solution:

https://github.com/flaxsearch/luwak/issues/173



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Re: Drive Change for existing Solr Setup

2018-06-25 Thread Shawn Heisey

On 6/25/2018 1:41 AM, Srinivas Muppu (US) wrote:

Is there any possible solution/steps for the moving solr installation setup
from 'E' drive to 'D'-Drive (New Drive) without any impact to the existing
application(it should not create re indexing again)


You started a previous thread on this topic five days ago. I replied to 
the list at that time.  This time I'm going to include you as a BCC - 
normally I do not do this, because list membership is usually required 
to post to the list.  Here is the previous reply:


-

Exactly what needs to be done will be highly dependent on how you 
installed Solr on your system.  The project doesn't have any specific 
installation steps for Windows, so we have absolutely no idea what you 
have done.  Whoever set up your Solr install is going to know a LOT more 
about it than we ever can.


At a high level, without any information specific to your setup, here's 
the steps you need:


 * Stop Solr
 * Move or copy files to the new location
 * Change the solr home and possibly other config
 * Start Solr.

-

It's very important that you understand that we have absolutely no idea 
how you've set *anything* up, so we cannot tell you what files need to 
be changed or where they will be.  If you can get in touch with the 
person who set your Solr up, they should be able to help you make the 
change that you want to make.


Thanks,
Shawn



Re: tlogs not deleting

2018-06-25 Thread Amrit Sarkar
Brian,

If you are still facing the issue after disabling the buffer, kindly shut down
all the nodes at the source and then start them again; the stale tlogs will
start purging themselves.
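
For reference, checking and disabling the buffer looks like this (placeholder
host and collection name; run it against the collection on both the source and
target clusters):

curl "http://localhost:8983/solr/mycollection/cdcr?action=STATUS"
curl "http://localhost:8983/solr/mycollection/cdcr?action=DISABLEBUFFER"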

Amrit Sarkar
Search Engineer
Lucidworks, Inc.
415-589-9269
www.lucidworks.com
Twitter http://twitter.com/lucidworks
LinkedIn: https://www.linkedin.com/in/sarkaramrit2
Medium: https://medium.com/@sarkaramrit2

On Wed, Jun 20, 2018 at 8:15 PM, Susheel Kumar 
wrote:

> Not to my knowledge.  Please double-check or wait for some time, but after
> DISABLEBUFFER on the source your tlogs should start rolling; it's the exact
> same issue I faced with 6.6, which was resolved by DISABLEBUFFER.
>
> On Tue, Jun 19, 2018 at 1:39 PM, Brian Yee  wrote:
>
> > Does anyone have any additional possible causes for this issue? I checked
> > the buffer status using "/cdcr?action=STATUS" and it says buffer disabled
> > at both target and source.
> >
> > -Original Message-
> > From: Erick Erickson [mailto:erickerick...@gmail.com]
> > Sent: Tuesday, June 19, 2018 11:55 AM
> > To: solr-user 
> > Subject: Re: tlogs not deleting
> >
> > bq. Do you recommend disabling the buffer on the source SolrCloud as
> well?
> >
> > Disable them all on both source and target IMO.
> >
> > On Tue, Jun 19, 2018 at 8:50 AM, Brian Yee  wrote:
> > > Thank you Erick. I am running Solr 6.6. From the documentation:
> > > "Replicas do not need to buffer updates, and it is recommended to
> > disable buffer on the target SolrCloud."
> > >
> > > Do you recommend disabling the buffer on the source SolrCloud as well?
> > It looks like I already have the buffer disabled at target locations but
> > not the source location. Would it even make sense at the source location?
> > >
> > > This is what I have at the target locations:
> > > 
> > >   
> > >   100
> > >   
> > >   
> > > disabled
> > >   
> > > 
> > >
> > >
> > > -Original Message-
> > > From: Erick Erickson [mailto:erickerick...@gmail.com]
> > > Sent: Tuesday, June 19, 2018 11:00 AM
> > > To: solr-user 
> > > Subject: Re: tlogs not deleting
> > >
> > > Take a look at the CDCR section of your reference guide, be sure you
> get
> > the version which you can download from here:
> > > https://archive.apache.org/dist/lucene/solr/ref-guide/
> > >
> > > There's the CDCR API call you can use for in-flight disabling, and
> > depending on the version of Solr you can set it in solrconfig.
> > >
> > > Basically, buffering was there in the original CDCR to allow a larger
> > maintenance window, you could enable buffering and all updates were saved
> > until you disabled it, during which period you could do whatever you
> needed
> > with your target cluster and not lose any updates.
> > >
> > > Later versions can do the full sync of the index and buffering is being
> > removed.
> > >
> > > Best,
> > > Erick
> > >
> > > On Tue, Jun 19, 2018 at 7:31 AM, Brian Yee  wrote:
> > >> Thanks for the suggestion. Can you please elaborate a little bit about
> > what DISABLEBUFFER does? The documentation is not very detailed. Is this
> > something that needs to be done manually whenever this problem happens or
> > is it something that we can do to fix it so it won't happen again?
> > >>
> > >> -Original Message-
> > >> From: Susheel Kumar [mailto:susheel2...@gmail.com]
> > >> Sent: Monday, June 18, 2018 9:12 PM
> > >> To: solr-user@lucene.apache.org
> > >> Subject: Re: tlogs not deleting
> > >>
> > >> You may have to DISABLEBUFFER in source to get rid of tlogs.
> > >>
> > >> On Mon, Jun 18, 2018 at 6:13 PM, Brian Yee  wrote:
> > >>
> > >>> So I've read a bunch of stuff on hard/soft commits and tlogs. As I
> > >>> understand, after a hard commit, solr is supposed to delete old
> > >>> tlogs depending on the numRecordsToKeep and maxNumLogsToKeep values
> > >>> in the autocommit settings in solrconfig.xml. I am occasionally
> > >>> seeing solr fail to do this and the tlogs just build up over time
> > >>> and eventually we run out of disk space on the VM and this causes
> > problems for us.
> > >>> This does not happen all the time, only sometimes. I currently have
> > >>> a tlog directory that has 123G worth of tlogs. The last hard commit
> > >>> on this node was 10 minutes ago but these tlogs date back to 3 days
> > ago.
> > >>>
> > >>> We have sometimes found that restarting solr on the node will get it
> > >>> to clean up the old tlogs, but we really want to find the root cause
> > >>> and fix it if possible so we don't keep getting disk space alerts
> > >>> and have to adhoc restart nodes. Has anyone seen an issue like this
> > before?
> > >>>
> > >>> My update handler settings look like this:
> > >>>   
> > >>>
> > >>>   
> > >>>
> > >>>   ${solr.ulog.dir:}
> > >>>   ${solr.ulog.numVersionBuckets:
> > >>> 65536}
> > >>> 
> > >>> 
> > >>> 60
> > >>> 25
> > >>> false
> > >>> 
> > >>> 
> > >>> 12
> > >>> 
> > >>>
> > >>>   
> > >>> 100
> > >>>   
> 

Re: CDCR traffic

2018-06-25 Thread Amrit Sarkar
Hi Rajeswari,

No, it is not. The source forwards the updates to the target in the classic manner.

Amrit Sarkar
Search Engineer
Lucidworks, Inc.
415-589-9269
www.lucidworks.com
Twitter http://twitter.com/lucidworks
LinkedIn: https://www.linkedin.com/in/sarkaramrit2
Medium: https://medium.com/@sarkaramrit2

On Fri, Jun 22, 2018 at 11:38 PM, Natarajan, Rajeswari <
rajeswari.natara...@sap.com> wrote:

> Hi,
>
> Would like to know , if the CDCR traffic is encrypted.
>
> Thanks
> Ra
>


Re: Mailing List Subscription

2018-06-25 Thread Steve Rowe
Hi Rainman,

See http://lucene.apache.org/solr/community.html#mailing-lists-irc for 
subscription information.

--
Steve
www.lucidworks.com

> On Jun 25, 2018, at 12:07 AM, Rainman Sián  wrote:
> 
> Hello
> 
> I'm Rainman, I have worked with Solr in a couple of projects in the past
> and about to start a new one.
> 
> I want to be part of this list and collaborate to the project,
> 
> Best regards,
> 
> --
> Rainman Sián



Re: WELCOME to solr-user@lucene.apache.org

2018-06-25 Thread Erick Erickson
First, understand that this list is maintained by volunteers, so
answers aren't guaranteed.

If you require dedicated support, there are various organizations that
provide it, but you'll have to contact them.

That said, the community is quite responsive; just post questions to
solr-user, like this one.

Best,
Erick

On Sun, Jun 24, 2018 at 11:35 PM, Srinivas Muppu (US)
 wrote:
> Hi Solr Team,
>
> We are facing Solr System Configuration issues which needs help. Please let
> us know whom to post our Questions/Queries.
>
> Thanks,
> Srinivas
>
> On Mon, Jun 25, 2018 at 2:22 AM,  wrote:
>
>> Hi! This is the ezmlm program. I'm managing the
>> solr-user@lucene.apache.org mailing list.
>>
>> I'm working for my owner, who can be reached
>> at solr-user-ow...@lucene.apache.org.
>>
>> Acknowledgment: I have added the address
>>
>>srinivas.mu...@pwc.com
>>
>> to the solr-user mailing list.
>>
>> Welcome to solr-user@lucene.apache.org!
>>
>> Please save this message so that you know the address you are
>> subscribed under, in case you later want to unsubscribe or change your
>> subscription address.
>>
>>
>> --- Administrative commands for the solr-user list ---
>>
>> I can handle administrative requests automatically. Please
>> do not send them to the list address! Instead, send
>> your message to the correct command address:
>>
>> To subscribe to the list, send a message to:
>>
>>
>> To remove your address from the list, send a message to:
>>
>>
>> Send mail to the following for info and FAQ for this list:
>>
>>
>>
>> Similar addresses exist for the digest list:
>>
>>
>>
>> To get messages 123 through 145 (a maximum of 100 per request), mail:
>>
>>
>> To get an index with subject and author for messages 123-456 , mail:
>>
>>
>> They are always returned as sets of 100, max 2000 per request,
>> so you'll actually get 100-499.
>>
>> To receive all messages with the same subject as message 12345,
>> send a short message to:
>>
>>
>> The messages should contain one line or word of text to avoid being
>> treated as sp@m, but I will ignore their content.
>> Only the ADDRESS you send to is important.
>>
>> You can start a subscription for an alternate address,
>> for example "john@host.domain", just add a hyphen and your
>> address (with '=' instead of '@') after the command word:
>> 
>>
>> To stop subscription for this address, mail:
>> 
>>
>> In both cases, I'll send a confirmation message to that address. When
>> you receive it, simply reply to it to complete your subscription.
>>
>> If despite following these instructions, you do not get the
>> desired results, please contact my owner at
>> solr-user-ow...@lucene.apache.org. Please be patient, my owner is a
>> lot slower than I am ;-)
>>

Re: Solr objects consuming more GC (Garbage collector) on our application

2018-06-25 Thread Erick Erickson
Well, this is a user's list, not a paid support channel. People here
volunteer their time/expertise.

First of all, Solr 4.2 is very old. From what you're showing, you've simply
grown too big for the server and are running into memory issues. Your
choices are:
1> get a bigger machine and allocate more space to the JVM
2> use SolrCloud and split the index into 2 shards.
3> use docValues for all fields that are used for sorting, grouping or
faceting (although I frankly don't remember if docValues were available in
4.2).

Second, your stack traces reference SolrNET, which is a separate project.
Not many people on this list can help with that.

Best,
Erick

On Mon, Jun 25, 2018 at 3:49 AM, Jagdeeshwar S 
wrote:

> Can you please update on this?
>
>
>
> *From:* Jagdeeshwar S [mailto:jagdeeshw...@revalsys.com]
> *Sent:* 22 June 2018 10:41
> *To:* 'solr-user@lucene.apache.org' 
> *Cc:* 'Raj Samala' 
> *Subject:* Solr objects consuming more GC (Garbage collector) on our
> application
>
>
>
> Hi Support,
>
>
>
> We are using Solr version 4.2.0 for one of our ecommerce applications, where
> we store our entire catalogue of products.
>
>
>
> Application developed in ASP.Net 4.5.2 and hosted in IIS 8.5
>
>
>
> Ecommerce application flow is
>
>
>
>1. Home page
>2. Product Listing page
>3. Product Detail page
>4. Search products listing page
>5. Cart flow (Add to cart, login and payment)
>
>
>
> In the above flow, we use Solr in the first 4 steps, where users come and
> browse for products, and we call the DB from step 5.
>
>
>
> We have been using Solr for the last 4 years, but recently we have
> encountered high CPU on the servers.
>
>
>
> We captured logs at the same time and found that Solr objects are
> consuming a lot of memory (triggering GC).
>
>
>
> Below are the log details. Can you please help us identify the issue?
>
>
>
> From the below analysis it's pretty clear that the high CPU issue is due to
> garbage collection being triggered very frequently because of the HIGH
> allocation rate from within your application.
>
>
>
> The objects whose allocations are on the higher side are mentioned below.
> They are all rooted to the *SolrNet* component highlighted in GREEN above.
>
>
>
>
>
>
>
> 0:105> lmvm SolrNet
>
> Browse full module list
>
> start end module name
>
> 004c`57d5 004c`57db   SolrNet(deferred)
>
> Image path: SolrNet.dll
>
> Image name: SolrNet.dll
>
> Browse all global symbols  functions  data
>
> Using CLR debugging support for all symbols
>
> Has CLR image header, track-debug-data flag not set
>
> Timestamp:Tue Apr 16 03:52:53 2013 (516C7DBD)
>
> CheckSum: 
>
> ImageSize:0006
>
> File version: 0.4.0.2002
>
> Product version:  0.4.0.2002
>
> File flags:   0 (Mask 3F)
>
> File OS:  4 Unknown Win32
>
> File type:2.0 Dll
>
> File date:.
>
> Translations: .04b0
>
> ProductName:  SolrNet
>
> InternalName: SolrNet.dll
>
> OriginalFilename: SolrNet.dll
>
> ProductVersion:   0.4.0.2002
>
> FileVersion:  0.4.0.2002
>
> FileDescription:  SolrNet
>
> LegalCopyright:   Copyright Mauricio Scheffer 2007-2013
>
> Comments: SolrNet
>
>
>
> CPU utilization: 100%
>
> Worker Thread: Total: 53 Running: 18 Idle: 35 MaxLimit: 800 MinLimit: 8
>
> Work Request in Queue: 0
>
> --
>
> Number of Timers: 2
>
> --
>
> Completion Port Thread:Total: 4 Free: 4 MaxFree: 16 CurrentLimit: 4
> MaxLimit: 800 MinLimit: 200
>
>
>
>
>
> Top 10 threads which are consuming HIGH CPU cycles are below:
>
>
>
> Showing top 10 threads
>
> Thread ID   User Time
>
> ==
>
>58 | 0 days 0:00:26.812
>
>64 | 0 days 0:00:23.750
>
>55 | 0 days 0:00:23.718
>
>75 | 0 days 0:00:22.546
>
>47 | 0 days 0:00:21.875
>
>46 | 0 days 0:00:21.625
>
>63 | 0 days 0:00:18.953
>
>22 | 0 days 0:00:18.921
>
>24 | 0 days 0:00:18.453
>
>28 | 0 days 0:00:18.359
>
> ==
>
> Thread ID   User Time
>
>
>
>
>
> Taking one of the random thread from above, I could see the below
> callstack:
>
>
>
> 0:064> kL
>
> # Child-SP  RetAddr   Call Site
>
> 00 004c`5ea1ab38 7ffa`057e1118 ntdll!ZwWaitForSingleObject+0xa
>
> 01 004c`5ea1ab40 7ff9`fdc07a1f KERNELBASE!
> WaitForSingleObjectEx+0x94
>
> 02 004c`5ea1abe0 7ff9`fdc079d7 clr!CLREventWaitHelper2+0x3c
>
> 03 004c`5ea1ac20 7ff9`fdc07958 clr!CLREventWaitHelper+0x1f
>
> 04 004c`5ea1ac80 7ff9`fdc14c2d clr!CLREventBase::WaitEx+0x7c
>
> 05 (Inline Function) ` clr!CLREventBase::Wait+
> 0x`fffa63f1
>
> 06 004c`5ea1ad10 7ff9`fdc14ef4 

RE: Solr objects consuming more GC (Garbage collector) on our application

2018-06-25 Thread Jagdeeshwar S
Can you please update on this?

 

From: Jagdeeshwar S [mailto:jagdeeshw...@revalsys.com] 
Sent: 22 June 2018 10:41
To: 'solr-user@lucene.apache.org' 
Cc: 'Raj Samala' 
Subject: Solr objects consuming more GC (Garbage collector) on our
application

 

Hi Support,

 

We are using Solr version 4.2.0 for one of our ecommerce applications, where
we store our entire catalogue of products.

 

Application developed in ASP.Net 4.5.2 and hosted in IIS 8.5

 

Ecommerce application flow is

 

1.  Home page
2.  Product Listing page
3.  Product Detail page
4.  Search products listing page
5.  Cart flow (Add to cart, login and payment)

 

In the above flow, we use Solr in the first 4 steps, where users come and
browse for products, and we call the DB from step 5.

 

We have been using Solr for the last 4 years, but recently we have
encountered high CPU on the servers.

 

We captured logs at the same time and found that Solr objects are
consuming a lot of memory (triggering GC).

 

Below are the log details. Can you please help us identify the issue?

 

From the below analysis it's pretty clear that the high CPU issue is due to
garbage collection being triggered very frequently because of the HIGH
allocation rate from within your application.

 

The objects whose allocations are on the higher side are mentioned below.
They are all rooted to the SolrNet component highlighted in GREEN above.

 

 

 

0:105> lmvm SolrNet

Browse full module list

start end module name

004c`57d5 004c`57db   SolrNet(deferred) 

Image path: SolrNet.dll

Image name: SolrNet.dll

Browse all global symbols  functions  data

Using CLR debugging support for all symbols

Has CLR image header, track-debug-data flag not set

Timestamp:Tue Apr 16 03:52:53 2013 (516C7DBD)

CheckSum: 

ImageSize:0006

File version: 0.4.0.2002

Product version:  0.4.0.2002

File flags:   0 (Mask 3F)

File OS:  4 Unknown Win32

File type:2.0 Dll

File date:.

Translations: .04b0

ProductName:  SolrNet

InternalName: SolrNet.dll

OriginalFilename: SolrNet.dll

ProductVersion:   0.4.0.2002

FileVersion:  0.4.0.2002

FileDescription:  SolrNet

LegalCopyright:   Copyright Mauricio Scheffer 2007-2013

Comments: SolrNet

 

CPU utilization: 100%

Worker Thread: Total: 53 Running: 18 Idle: 35 MaxLimit: 800 MinLimit: 8

Work Request in Queue: 0

--

Number of Timers: 2

--

Completion Port Thread:Total: 4 Free: 4 MaxFree: 16 CurrentLimit: 4
MaxLimit: 800 MinLimit: 200

 

 

Top 10 threads which are consuming HIGH CPU cycles are below:

 

Showing top 10 threads

Thread ID   User Time

==

   58 | 0 days 0:00:26.812

   64 | 0 days 0:00:23.750

   55 | 0 days 0:00:23.718

   75 | 0 days 0:00:22.546

   47 | 0 days 0:00:21.875

   46 | 0 days 0:00:21.625

   63 | 0 days 0:00:18.953

   22 | 0 days 0:00:18.921

   24 | 0 days 0:00:18.453

   28 | 0 days 0:00:18.359

==

Thread ID   User Time

 

 

Taking one of the random thread from above, I could see the below callstack:

 

0:064> kL

# Child-SP  RetAddr   Call Site

00 004c`5ea1ab38 7ffa`057e1118 ntdll!ZwWaitForSingleObject+0xa

01 004c`5ea1ab40 7ff9`fdc07a1f KERNELBASE!WaitForSingleObjectEx+0x94

02 004c`5ea1abe0 7ff9`fdc079d7 clr!CLREventWaitHelper2+0x3c

03 004c`5ea1ac20 7ff9`fdc07958 clr!CLREventWaitHelper+0x1f

04 004c`5ea1ac80 7ff9`fdc14c2d clr!CLREventBase::WaitEx+0x7c

05 (Inline Function) `
clr!CLREventBase::Wait+0x`fffa63f1

06 004c`5ea1ad10 7ff9`fdc14ef4
clr!SVR::gc_heap::wait_for_gc_done+0x66

07 004c`5ea1ad40 7ff9`fdc06709
clr!SVR::GCHeap::GarbageCollectGeneration+0x108

08 (Inline Function) `
clr!SVR::gc_heap::try_allocate_more_space+0x535

09 (Inline Function) `
clr!SVR::gc_heap::allocate_more_space+0x54a

0a (Inline Function) ` clr!SVR::gc_heap::allocate+0x5a1

0b (Inline Function) ` clr!SVR::GCHeap::Alloc+0x601

0c (Inline Function) ` clr!Alloc+0x961

0d (Inline Function) ` clr!AllocateObject+0x9e3

0e 004c`5ea1ada0 7ff9`a0190d0a clr!JIT_New+0xac9

0f 004c`5ea1b1e0 7ff9`a018fb43
SolrNet!SolrNet.Impl.FieldParsers.AggregateFieldParser.CanHandleType(System.
Type)+0x3a

10 004c`5ea1b220 7ff9`a018f9c6
SolrNet!SolrNet.Impl.DocumentPropertyVisitors.RegularDocumentVisitor.Visit(S
ystem.Object, System.String, System.Xml.Linq.XElement)+0xe3

11 004c`5ea1b290 7ff9`a018f7d1

Mailing List Subscription

2018-06-25 Thread Rainman Sián
Hello

I'm Rainman, I have worked with Solr in a couple of projects in the past
and about to start a new one.

I want to be part of this list and collaborate to the project,

Best regards,

--
Rainman Sián


Re: SolrCloud Large Cluster Performance Issues

2018-06-25 Thread Emir Arnautović
Hi,
With such a big cluster a lot of things can go wrong, and it is hard to give any 
answer without looking into it more and understanding your model. I assume that 
you are monitoring your system (both Solr/ZK and the components that index/query), 
so that should be the first place to look for bottlenecks. If you doubled the 
number of nodes and don’t see an increase in indexing throughput, it is likely 
that the bottleneck is the indexing component, or that you did not spread the 
load across your entire cluster. With more nodes there is more pressure on ZK, 
so check that as well. 
You will have to dive in and search for the bottleneck, or find a Solr consultant 
and let them do it for you.

Thanks,
Emir
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/



> On 25 Jun 2018, at 03:38, 苗海泉  wrote:
> 
> Hello, everyone, we encountered two solr problems and hoped to get help.
> Our data volume is very large, 24.5TB a day, and the number of records is
> 110 billion. We originally used 49 solr nodes. Because of insufficient
> storage, we expanded to 100. For a solr cluster composed of multiple
> machines, we found that the performance of 60 solrclouds and the overall
> performance of 49 solr clusters are the same. How do we optimize it? Now
> the cluster speed is 1.5 million on average per second. Why is that?
> 
> The second problem solrhome can only specify a solrhome, but now the disk
> is divided into two directories, another solr can be stored using hdfs, but
> the overall indexing performance is not up to standard, how to do, thank
> you for your attention.



Drive Change for existing Solr Setup

2018-06-25 Thread Srinivas Muppu (US)
Hi Solr Team,

After subscribing to solr-user@lucene.apache.org, I am sending the issue
details below to the Solr mailing list again. Please help us at your earliest
convenience.

As part of the Solr project installation, the setup and instances (including
clustered Solr, ZK services, and indexing job scheduler services) are on the
Windows 'E:\' drive in the production environment. As the business needs to
remove the E:\ drive, the D:\ drive will be used and operational going forward.

Is there any possible solution/steps for moving the Solr installation setup
from the 'E' drive to the 'D' drive (new drive) without any impact to the
existing application (it should not require reindexing again)?

Please let us know your suggestions; if required, we will create a JIRA ticket
for this.

Your earliest response will be appreciated!!

Thanks,
Srinivas



Re: Error in solr Plugin

2018-06-25 Thread Zahra Aminolroaya
Thanks Andrea and Erick



--
Sent from: http://lucene.472066.n3.nabble.com/Solr-User-f472068.html


Reg Solr System configuration

2018-06-25 Thread Srinivas Muppu (US)
 Hi Solr Team,

We are facing Solr system configuration issues (an existing Solr installation
needs to move to a new Windows drive) and need help. Please let us know where
to post our questions/queries.

Thanks,
Srinivas



Re: WELCOME to solr-user@lucene.apache.org

2018-06-25 Thread Srinivas Muppu (US)
Hi Solr Team,

We are facing Solr system configuration issues and need help. Please let us
know where to post our questions/queries.

Thanks,
Srinivas

On Mon, Jun 25, 2018 at 2:22 AM,  wrote:

> Hi! This is the ezmlm program. I'm managing the
> solr-user@lucene.apache.org mailing list.
>
> I'm working for my owner, who can be reached
> at solr-user-ow...@lucene.apache.org.
>
> Acknowledgment: I have added the address
>
>srinivas.mu...@pwc.com
>
> to the solr-user mailing list.
>
> Welcome to solr-user@lucene.apache.org!
>
> Please save this message so that you know the address you are
> subscribed under, in case you later want to unsubscribe or change your
> subscription address.
>
>
> --- Administrative commands for the solr-user list ---
>
> I can handle administrative requests automatically. Please
> do not send them to the list address! Instead, send
> your message to the correct command address:
>
> To subscribe to the list, send a message to:
>
>
> To remove your address from the list, send a message to:
>
>
> Send mail to the following for info and FAQ for this list:
>
>
>
> Similar addresses exist for the digest list:
>
>
>
> To get messages 123 through 145 (a maximum of 100 per request), mail:
>
>
> To get an index with subject and author for messages 123-456 , mail:
>
>
> They are always returned as sets of 100, max 2000 per request,
> so you'll actually get 100-499.
>
> To receive all messages with the same subject as message 12345,
> send a short message to:
>
>
> The messages should contain one line or word of text to avoid being
> treated as sp@m, but I will ignore their content.
> Only the ADDRESS you send to is important.
>
> You can start a subscription for an alternate address,
> for example "john@host.domain", just add a hyphen and your
> address (with '=' instead of '@') after the command word:
> 
>
> To stop subscription for this address, mail:
> 
>
> In both cases, I'll send a confirmation message to that address. When
> you receive it, simply reply to it to complete your subscription.
>
> If despite following these instructions, you do not get the
> desired results, please contact my owner at
> solr-user-ow...@lucene.apache.org. Please be patient, my owner is a
> lot slower than I am ;-)
>
> --- Enclosed is a copy of the request I received.
>
> Return-Path: 
> Received: (qmail 84164 invoked by uid 99); 25 Jun 2018 06:22:12 -
> Received: from pnap-us-west-generic-nat.apache.org (HELO
> spamd1-us-west.apache.org) (209.188.14.142)
> by apache.org (qpsmtpd/0.29) with ESMTP; Mon, 25 Jun 2018 06:22:12
> +
> Received: from localhost (localhost [127.0.0.1])
> by spamd1-us-west.apache.org (ASF Mail Server at
> spamd1-us-west.apache.org) with ESMTP id 63CB9CA4A5
> for  pwc@lucene.apache.org>; Mon, 25 Jun 2018 06:22:12 + (UTC)
> X-Virus-Scanned: Debian amavisd-new at spamd1-us-west.apache.org
> X-Spam-Flag: NO
> X-Spam-Score: -1
> X-Spam-Level:
> X-Spam-Status: No, score=-1 tagged_above=-999 required=6.31
> tests=[HTML_MESSAGE=2, KAM_BADIPHTTP=2, KAM_SHORT=0.001,
> NORMAL_HTTP_TO_IP=0.001, RCVD_IN_DNSWL_HI=-5, SPF_HELO_PASS=-0.001,
> SPF_PASS=-0.001] autolearn=disabled
> Received: from mx1-lw-us.apache.org ([10.40.0.8])
> by localhost (spamd1-us-west.apache.org [10.40.0.7])
> (amavisd-new, port 10024)
> with ESMTP id NuBVNjDIIyqW
> for  pwc@lucene.apache.org>;
> Mon, 25 Jun 2018 06:22:10 + (UTC)
> Received: from lxsmpr20.pwc.com (lxsmpr20.pwc.com [155.201.248.112])
> by mx1-lw-us.apache.org (ASF Mail Server at mx1-lw-us.apache.org)
> with ESMTPS id 500895F1B4
> for  pwc@lucene.apache.org>; Mon, 25 Jun 2018 06:22:10 + (UTC)
> Received: from mail-vk0-f71.google.com (mail-vk0-f71.google.com
> [209.85.213.71])
> by lxsmpr20.nam.pwcinternal.com (8.16.0.21/8.16.0.21) with ESMTPS
> id w5P6M3MF054491
> (version=TLSv1.2 cipher=ECDHE-RSA-AES128-GCM-SHA256 bits=128
> verify=OK)
> for  pwc@lucene.apache.org>; Mon, 25 Jun 2018 02:22:03 -0400
> Received: by mail-vk0-f71.google.com with SMTP id j123-v6so5886670vkc.4
> for  pwc@lucene.apache.org>; Sun, 24 Jun 2018 23:22:03 -0700 (PDT)
> X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
> d=1e100.net; s=20161025;
> h=x-gm-message-state:mime-version:in-reply-to:references:from:date
>  :message-id:subject:to;
> bh=+MKXiCktrcuycddIpUqd9ljQ2oLqYBsgU3qPgb6oZ2M=;
> b=q4Vku4HdqSxx2NyQ1G2GtPG7ahk5icEeT8jaTkyyVNW+
> yq9o1oxQoQnsDVxLJF5n7j
>  kXE+R3STA6F1XfvdCznfy5qCY2BbHBqfex3UO+njnp+
> tiwfl5FDpzR9ZA9Hy2WYe4F9y
>  6GrupTi+IXDLY62n0/Zz8YEDlPUc0SBT/xOAuU12vB7jvGzgAJX+
> lYep328dPosKWz19
>  XWFX2+AlhKPCGGIIDI6Feg9PJAWMa7SDmAANdhgllYE+4e3zmHqaF+
> WpQNUnY3IilWFD
>  1HzYLoQHjIadBI5NSEaUFYVSFnQqXM8HgHA7XNnOdIkCvQ31bN/
> lxGpNSXGf7oF+dnjl
>  iK0w==
> X-Gm-Message-State: