Re: unable to find valid certification path to requested target

2019-04-01 Thread Branham, Jeremy (Experis)
Hi Joseph –
I don’t think this is a Solr issue. It sounds like your http crawling process 
doesn’t trust the cert that Solr is using.

Looks like you’re on the right track here – [I stumbled onto your post at 
Github]
https://github.com/Norconex/collector-http/issues/581
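If the crawler runs on a JVM, the usual fix is to import the certificate Solr serves into the truststore that the crawler's JVM uses. A rough sketch, reusing the alias/keystore from your message below; paths and passwords are placeholders and the cacerts location varies by JDK:

    # Export the cert from the Solr keystore
    keytool -exportcert -alias aliasname -keystore /solr-ssl.keystore.pfx -storetype PKCS12 \
      -storepass password -rfc -file solr-cert.pem

    # Import it into the truststore used by the crawling process
    keytool -importcert -alias solr-dev -file solr-cert.pem \
      -keystore $JAVA_HOME/jre/lib/security/cacerts -storepass changeit -noprompt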

 
Jeremy Branham
jb...@allstate.com

On 3/31/19, 9:26 PM, "JTytler"  wrote:

I have created a keystore file and have enabled SSL on my solr server using
the following  procedures:
 
1) Created pkcs#12 file using the command:
keytool -genkey -alias aliasname -keystore /solr-ssl.keystore.pfx -storetype
PKCS12 -keyalg RSA -storepass password -ext
SAN=dns:localhost,dns:solr-devapp01.devt1.restOfDomain -validity 730
-keysize 2048
 
2) Imported the pkcs keystore file into Trusted Root Certification Authority
 
3) Copied the pkcs file solr-ssl.keystore.pfx to the solr /server/etc folder
 
4) Modified solr.in.cmd file with the following:
 
set SOLR_SSL_ENABLED=true
set SOLR_SSL_KEY_STORE=etc/solr-ssl.keystore.pfx
set SOLR_SSL_KEY_STORE_PASSWORD=secret
set SOLR_SSL_TRUST_STORE=etc/solr-ssl.keystore.pfx
set SOLR_SSL_TRUST_STORE_PASSWORD=secret
 
set SOLR_SSL_NEED_CLIENT_AUTH=false
set SOLR_SSL_WANT_CLIENT_AUTH=false
set SOLR_SSL_KEY_STORE_TYPE=PKCS12
set SOLR_SSL_TRUST_STORE_TYPE=PKCS12
 
 
I can access the Solr admin at https://localhost:8983/solr and can also
crawl websites using Norconex httpcrawler. However, after the documents
are crawled, I am unable to commit the crawled documents into the Solr
index. I get the error "unable to find valid certification path to
requested target".

I would appreciate it if someone could help me with this, as this is the first
time I am trying to set up SSL/TLS.




--
Sent from: 
http://lucene.472066.n3.nabble.com/Solr-User-f472068.html




Re: Stopwords param of edismax parser not working

2019-03-29 Thread Branham, Jeremy (Experis)
Hi Ashish –
Are you using v7.3?
If so, I think this is the spot in code that should be executing:
https://github.com/apache/lucene-solr/blob/releases/lucene-solr/7.3.0/solr/core/src/java/org/apache/solr/search/ExtendedDismaxQParser.java#L310

 Haven’t dug into the logic, but I tested on my server [v7.7.0], and the debug 
output doesn’t show whether or not the stopword filter was removed.
I don’t know your use-case, but maybe you could use the field analysis tool in 
Solr Admin to get more insight.
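If it helps, the same analysis screen is exposed over HTTP, so it can be scripted; something like this (host, collection, and field name are just examples) should show which tokens survive the query-time analysis chain:

    curl 'http://localhost:8983/solr/yourcollection/analysis/field?analysis.fieldname=content&analysis.query=internet%20of%20things&wt=json'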
 
Jeremy Branham
jb...@allstate.com

On 3/28/19, 4:47 AM, "Ashish Bisht"  wrote:

Hi,

We are trying to remove stopwords from analysis using the edismax parser
parameter. The documentation says

*stopwords
A Boolean parameter indicating if the StopFilterFactory configured in the
query analyzer should be respected when parsing the query. If this is set to
false, then the StopFilterFactory in the query analyzer is ignored.*


https://lucene.apache.org/solr/guide/7_3/the-extended-dismax-query-parser.html


But it seems like it's not working.


http://Box-1:8983/solr/SalesCentralDev_4/select?q=internet of
things&rows=0&defType=edismax&qf=search_field
content&stopwords=false&debugQuery=true


"parsedquery":"+(DisjunctionMaxQuery((content:internet |
search_field:internet)) DisjunctionMaxQuery((content:thing |
search_field:thing)))",
  *  "parsedquery_toString":"+((content:internet | search_field:internet)
(content:thing | search_field:thing))",*


Are we missing something here?



--
Sent from: 
http://lucene.472066.n3.nabble.com/Solr-User-f472068.html




Re: Re: solr _route_ key not working

2019-03-26 Thread Branham, Jeremy (Experis)
Jay –
I’m not familiar with the document ID format you mention [having a “:” in the 
prefix], but it looks similar to the composite ID routing I’m using.
Document Id format: “a/1!id”

Then I can use a _route_ value of “a/1!” when querying.

Example Doc IDs:
a/1!768456
a/1!563575
b/1!456234
b/1!245698

The document ID prefix “x/1!” tells Solr to spread the documents over ½ of the 
available shards. When querying with the same value for _route_ it will 
retrieve documents only from those shards.
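For reference, a query pinned to one of those prefixes looks roughly like this (host and collection name are placeholders; the single quotes keep the shell away from the '!'):

    curl 'http://localhost:8983/solr/mycollection/select?q=*:*&_route_=a/1!'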
 
Jeremy Branham
jb...@allstate.com

On 3/25/19, 9:13 PM, "Zheng Lin Edwin Yeo"  wrote:

Hi,

Sorry, didn't see that you have an exclamation mark in your query as well.
You will need to escape the exclamation mark as well.
So you can try it with the query _route_=“123\:456\!”

You can refer to the message in the link on which special characters
requires escaping.

https://stackoverflow.com/questions/21914956/which-special-characters-need-escaping-in-a-solr-query

By the way, which Solr version are you using?

Regards,
Edwin

On Tue, 26 Mar 2019 at 01:12, Jay Potharaju  wrote:

> That did not work . Any other suggestions
> My id is 123:456!678
> Tried running query as _route_=“123\:456!” But didn’t give expected
> results
> Thanks
> Jay
>
> > On Mar 24, 2019, at 8:30 PM, Zheng Lin Edwin Yeo 
> wrote:
> >
> > Hi,
> >
> > The character ":" is a special character, so it requires escaping during
> > the search.
> > You can try to search with query _route_="a\:b!".
> >
> > Regards,
> > Edwin
> >
> >> On Mon, 25 Mar 2019 at 07:59, Jay Potharaju 
> wrote:
> >>
> >> Hi,
> >> My document id has a format of a:b!c, when I query _route_="a:b!" it
> does
> >> not return any values. Any suggestions?
> >>
> >> Thanks
> >> Jay Potharaju
> >>
>




Re: Re: Re: obfuscated password error

2019-03-20 Thread Branham, Jeremy (Experis)
SOLR_URL_SCHEME=https
>
>   SOLR_SSL_OPTS=" -Dsolr.jetty.keystore=$SOLR_SSL_KEY_STORE \
>
> -Dsolr.jetty.keystore.password=$SOLR_SSL_KEY_STORE_PASSWORD \
>
> -Dsolr.jetty.truststore=$SOLR_SSL_TRUST_STORE \
>
> -Dsolr.jetty.truststore.password=$SOLR_SSL_TRUST_STORE_PASSWORD \
>
> -Dsolr.jetty.ssl.needClientAuth=$SOLR_SSL_NEED_CLIENT_AUTH \
>
> -Dsolr.jetty.ssl.wantClientAuth=$SOLR_SSL_WANT_CLIENT_AUTH"
>
>   if [ -n "$SOLR_SSL_CLIENT_KEY_STORE" ]; then
>
> SOLR_SSL_OPTS+=" -Djavax.net.ssl.keyStore=$SOLR_SSL_CLIENT_KEY_STORE \
>
>   -Djavax.net.ssl.keyStorePassword=$SOLR_SSL_CLIENT_KEY_STORE_PASSWORD
> \
>
>   -Djavax.net.ssl.trustStore=$SOLR_SSL_CLIENT_TRUST_STORE \
>
>
> -Djavax.net.ssl.trustStorePassword=$SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD"
>
>   else
>
> SOLR_SSL_OPTS+="
> 
-Dcom.sun.management.jmxremote.ssl.config.file=/sanfs/mnt/vol01/solr/solr-6.3.0/server/etc/ssl.properties"
>
>   fi
>
> else
>
>   SOLR_JETTY_CONFIG+=("--module=http")
>
> fi
>
>
> *Not working one (basically overriding again and is causing the incorrect
> password):*
>
>
>
> SOLR_SSL_OPTS=""
>
> if [ -n "$SOLR_SSL_KEY_STORE" ]; then
>
>   SOLR_JETTY_CONFIG+=("--module=https")
>
>   SOLR_URL_SCHEME=https
>
>   SOLR_SSL_OPTS=" -Dsolr.jetty.keystore=$SOLR_SSL_KEY_STORE \
>
> -Dsolr.jetty.keystore.password=$SOLR_SSL_KEY_STORE_PASSWORD \
>
> -Dsolr.jetty.truststore=$SOLR_SSL_TRUST_STORE \
>
> -Dsolr.jetty.truststore.password=$SOLR_SSL_TRUST_STORE_PASSWORD \
>
> -Dsolr.jetty.ssl.needClientAuth=$SOLR_SSL_NEED_CLIENT_AUTH \
>
> -Dsolr.jetty.ssl.wantClientAuth=$SOLR_SSL_WANT_CLIENT_AUTH"
>
>   if [ -n "$SOLR_SSL_CLIENT_KEY_STORE" ]; then
>
> SOLR_SSL_OPTS+=" -Djavax.net.ssl.keyStore=$SOLR_SSL_CLIENT_KEY_STORE \
>
>   -Djavax.net.ssl.keyStorePassword=$SOLR_SSL_CLIENT_KEY_STORE_PASSWORD
> \
>
>   -Djavax.net.ssl.trustStore=$SOLR_SSL_CLIENT_TRUST_STORE \
>
>
> -Djavax.net.ssl.trustStorePassword=$SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD"
>
>   else
>
> SOLR_SSL_OPTS+=" -Djavax.net.ssl.keyStore=$SOLR_SSL_KEY_STORE \
>
>   -Djavax.net.ssl.keyStorePassword=$SOLR_SSL_KEY_STORE_PASSWORD \
>
>   -Djavax.net.ssl.trustStore=$SOLR_SSL_TRUST_STORE \
>
>   -Djavax.net.ssl.trustStorePassword=$SOLR_SSL_TRUST_STORE_PASSWORD"
>
>   fi
>
> On Tue, Mar 19, 2019 at 10:10 AM Satya Marivada 

> wrote:
>
>> Hi Jeremy,
    >>
    >> Thanks for the points. Yes, agreed that there is some conflicting
>> property somewhere that is not letting it work. So I basically restored
>> solr-6.3.0 directory from another environment and replace the host name
>> appropriately for this environment. And I used the original keystore that
>> has been generated for this environment and it worked fine. So basically
>> the keystore is good as well except that there is some conflicting 
property
>> which is not letting it do deobfuscation right.
>>
>> Thanks,
>> Satya
>>
>> On Mon, Mar 18, 2019 at 2:32 PM Branham, Jeremy (Experis) <
>> jb...@allstate.com> wrote:
>>
>>> I’m not sure if you are sharing the trust/keystores, so I may be
>>> off-base here…
>>>
>>> Some thoughts –
>>> - Verify your VM arguments, to be sure there aren’t conflicting SSL
>>> properties.
>>> - Verify the environment is targeting the correct version of Java
>>> - Verify the trust/key stores exist where they are expected, and you can
>>> list the contents with the keytool
>>> - Verify the correct CA certs are trusted
>>>
>>>
>>> Jeremy Branham
>>> jb...@allstate.com
>>>
>>> On 3/18/19, 1:08 PM, "Satya Marivada"  wrote:
>>>
>>> Any suggestions please.
>>>
>>> Thanks,

Re: Upgrading tika

2019-03-19 Thread Branham, Jeremy (Experis)
I’m not positive – But I think you should match the CXF jar versions.

"cxf-core-3.2.8.jar" 


<dependency>
  <groupId>org.apache.cxf</groupId>
  <artifactId>cxf-rt-frontend-jaxrs</artifactId>
  <version>3.2.8</version>
</dependency>
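A quick way to spot version mismatches is to list every CXF jar on the installation; a rough sketch, assuming it is run from the install directory:

    find . -name 'cxf-*.jar' | sort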


 
Jeremy Branham
jb...@allstate.com

On 3/19/19, 10:03 AM, "levtannen"  wrote:

"cxf-core-3.2.8.jar" and "cxf-rt-fromtend-jaxrs-2.6.3.jar"



Re: Re: obfuscated password error

2019-03-18 Thread Branham, Jeremy (Experis)
I’m not sure if you are sharing the trust/keystores, so I may be off-base here…

Some thoughts –
- Verify your VM arguments, to be sure there aren’t conflicting SSL properties.
- Verify the environment is targeting the correct version of Java
- Verify the trust/key stores exist where they are expected, and you can list
the contents with the keytool (example below)
- Verify the correct CA certs are trusted
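For the keytool check, something along these lines (path and password are placeholders, reusing the keystore path from this thread) confirms the keystore opens with the configured password and lists its entries:

    keytool -list -v -keystore /sanfs/mnt/vol01/solr/solr-6.3.0/server/etc/solr-ssl.keystore.jks -storepass mypassword

If that fails with the plain-text password, the keystore itself is the problem rather than the obfuscation.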

 
Jeremy Branham
jb...@allstate.com

On 3/18/19, 1:08 PM, "Satya Marivada"  wrote:

Any suggestions please.

Thanks,
Satya

On Mon, Mar 18, 2019 at 11:12 AM Satya Marivada 
wrote:

> Hi All,
>
> Using solr-6.3.0, to obfuscate the password, have used jetty util to
> generate obfuscated password
>
>
> java -cp jetty-util-9.3.8.v20160314.jar
> org.eclipse.jetty.util.security.Password mypassword
>
>
> The output has been used in 
solr.in.sh as below
>
>
>
> 
SOLR_SSL_KEY_STORE=/sanfs/mnt/vol01/solr/solr-6.3.0/server/etc/solr-ssl.keystore.jks
>
> SOLR_SSL_KEY_STORE_PASSWORD="OBF:1bcd1l161lts1ltu1uum1uvk1lq41lq61k221b9t"
>
>
> 
SOLR_SSL_TRUST_STORE=/sanfs/mnt/vol01/solr/solr-6.3.0/server/etc/solr-ssl.keystore.jks
>
>
> 
SOLR_SSL_TRUST_STORE_PASSWORD="OBF:1bcd1l161lts1ltu1uum1uvk1lq41lq61k221b9t"
>
> Solr does not start with the exception below; any suggestions? If I use
> the plain text password, it works fine. One more thing: the same
> setup with the obfuscated password works in other environments except the one
> which got this exception. System-level patches were applied recently; just
> mentioning it, though I don't think that could have an impact.
>
> Caused by: java.net.SocketException:
> java.security.NoSuchAlgorithmException: Error constructing implementation
> (algorithm: Default, provider: SunJSSE, class: 
sun.security.ssl.SSLContextIm
> pl$DefaultSSLContext)
> at javax.net.ssl.DefaultSSLSocketFactory.throwException(SSLSocketFactory.java:248)
> at javax.net.ssl.DefaultSSLSocketFactory.createSocket(SSLSocketFactory.java:255)
> at org.apache.http.conn.ssl.SSLSocketFactory.createSocket(SSLSocketFactory.java:513)
> at org.apache.http.conn.ssl.SSLSocketFactory.createSocket(SSLSocketFactory.java:383)
> at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:165)
> at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
> at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
> at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
> at

Re: Re: Garbage Collection Metrics

2019-03-18 Thread Branham, Jeremy (Experis)
I get these metrics by pushing the JMX data into Graphite, then use the 
non-negative derivative function on the GC ‘time’ metric.
It essentially shows the amount of change on a counter, at the specific time it 
occurred. 
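As a sketch, assuming the reporter lands the collector's time counter under a path like solr.jvm.gc.ConcurrentMarkSweep.time (the exact path and the Graphite host depend on your reporter configuration), the render call looks roughly like:

    curl 'http://graphite.example.com/render?target=nonNegativeDerivative(solr.jvm.gc.ConcurrentMarkSweep.time)&format=json&from=-1h'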
 
Jeremy Branham
jb...@allstate.com

On 3/18/19, 12:06 PM, "Jeff Courtade"  wrote:

The only way I found to track GC times was by turning on GC logging, writing a
cron-job data collection script, and graphing it in Zabbix

On Mon, Mar 18, 2019 at 12:34 PM Erick Erickson 
wrote:

> Attachments are pretty aggressively stripped by the apache mail server, so
> it didn’t come through.
>
> That said, I’m not sure how much use just the last GC time is. What do you
> want it for? This
> sounds a bit like an XY problem.
>
> Best,
> Erick
>
> > On Mar 17, 2019, at 2:43 PM, Karthik K G  wrote:
> >
> > Hi Team,
> >
> > I was looking for Old GC duration time metrics, but all I could find was
> the API for this "/solr/admin/metrics?wt=json&group=jvm&prefix=gc.G1-
> Old-Generation", but I am not sure if this is for
> 'gc_g1_gen_o_lastgc_duration'. I tried to hookup the IP to the jconsole 
and
> was looking for the metrics, but all I could see was the collection time
> but not last GC duration as attached in the screenshot. Can you please 
help
> here with finding the correct metrics. I strongly believe we are not
> capturing this information. Please correct me if I am wrong.
> >
> > Thanks & Regards,
> > Karthik
>
>




Re: Re: Authorization fails but api still renders

2019-03-15 Thread Branham, Jeremy (Experis)
// Adding the dev DL, as this may be a bug

Solr v7.7.0

I’m expecting the 401 on all the servers in all 3 clusters using the security 
configuration.
For example, when I access the core or collection APIs without authentication, 
it should return a 401.

On one of the servers, in one of the clusters, the authorization is completely 
ignored. The http response is 200 and the API returns results.
The other server in this cluster works properly, returning a 401 when the 
protected API is accessed without authentication.

Interesting notes –
- If I use the IP or FQDN to access the server, authorization works properly 
and a 401 is returned. It’s only when I use the short hostname to access the 
server, that the authorization is bypassed.
- On the broken server, a 401 is returned correctly when the ‘autoscaling 
suggestions’ api is accessed. This api uses a different resource path, which 
may be a clue to why the others fail.
  https://solr:8443/api/cluster/autoscaling/suggestions

Here is the security.json with sensitive data changed/removed –

{
  "authentication":{
    "blockUnknown": false,
    "class":"solr.BasicAuthPlugin",
    "credentials":{
      "admin":"--REDACTED--",
      "reader":"--REDACTED--",
      "writer":"--REDACTED--"
    },
    "realm":"solr"
  },
  "authorization":{
    "class":"solr.RuleBasedAuthorizationPlugin",
    "permissions":[
      {"name":"security-edit", "role":"admin"},
      {"name":"security-read", "role":"admin"},
      {"name":"schema-edit", "role":"admin"},
      {"name":"config-edit", "role":"admin"},
      {"name":"core-admin-edit", "role":"admin"},
      {"name":"collection-admin-edit", "role":"admin"},
      {"name":"autoscaling-read", "role":"admin"},
      {"name":"autoscaling-write", "role":"admin"},
      {"name":"autoscaling-history-read", "role":"admin"},
      {"name":"read","role":"*"},
      {"name":"schema-read","role":"*"},
      {"name":"config-read","role":"*"},
      {"name":"collection-admin-read", "role":"*"},
      {"name":"core-admin-read","role":"*"},
      {"name":"update", "role":"write"},
      {"collection":null, "path":"/admin/info/system", "role":"admin"}
    ],
    "user-role":{
      "admin": "admin",
      "reader": "read",
      "writer": "write"
    }
  }
}
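For what it's worth, this is roughly how I check the behavior from the command line (FQDN, port, and credentials are placeholders; -k because of the self-signed certs). The first request should return 401 and the second 200 on a healthy node:

    curl -k -s -o /dev/null -w '%{http_code}\n' 'https://solr.example.com:8443/solr/admin/cores?action=STATUS'
    curl -k -s -o /dev/null -w '%{http_code}\n' -u admin:password 'https://solr.example.com:8443/solr/admin/cores?action=STATUS'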


 
Jeremy Branham
jb...@allstate.com

On 3/14/19, 10:06 PM, "Zheng Lin Edwin Yeo"  wrote:

Hi,

Can't really catch your question. Are you facing the error 401 on all the
clusters or just one of them?

Also, which Solr version are you using?

Regards,
Edwin

On Fri, 15 Mar 2019 at 05:15, Branham, Jeremy (Experis) 
wrote:

> I’ve discovered the authorization works properly if I use the FQDN to
> access the Solr node, but the short hostname completely circumvents it.
> They are all internal server clusters, so I’m using self-signed
> certificates [the same exact certificate] on each. The SAN portion of the
> cert contains the IP, short, and FQDN of each server.
>
> I also diff’d the two servers Solr installation directories, and confirmed
> they are identical.
> They are using the same exact versions of Java and zookeeper, with the
> same chroot configuration. [different zk clusters]
>
>
> Jeremy Branham
> jb...@allstate.com
>
> On 3/14/19, 10:44 AM, "Branham, Jeremy (Experis)" 
> wrote:
>
> I’m using Basic Auth on 3 different clusters.
> On 2 of the clusters, authorization works fine. A 401 is returned when
> I try to access the core/collection apis.
>
> On the 3rd cluster I can see the authorization failed, but the api
> results are still returned.
>
> Solr.log
> 2019-03-14 09:25:47.680 INFO  (qtp1546693040-152) [   ]
> o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal.
> failed permission {
>   "name":"core-admin-read",
>   "role":"*"}
>
>
> I’m using different zookeeper clusters for each solr cluster, but
> using the same security.json contents.
> I’ve tried refreshing the ZK node, and bringing the whole Solr cluster
> down and back up.
>
> Is there some sort of caching that could be happening?
>
> I wrote an installation script that I’ve used to setup each cluster,
> so I’m thinking I’ll wipe it out and re-run.
> But before I do this, I thought I’d ask the community for input. Maybe
> a bug?
>
>
> Jeremy Branham
> jb...@allstate.com<mailto:jb...@allstate.com>
> Allstate Insurance Company | UCV Technology Services | Information
> Services Group
>
>
>
>




Re: Authorization fails but api still renders

2019-03-14 Thread Branham, Jeremy (Experis)
I’ve discovered the authorization works properly if I use the FQDN to access 
the Solr node, but the short hostname completely circumvents it.
They are all internal server clusters, so I’m using self-signed certificates 
[the same exact certificate] on each. The SAN portion of the cert contains the 
IP, short, and FQDN of each server.

I also diff’d the two servers Solr installation directories, and confirmed they 
are identical.
They are using the same exact versions of Java and zookeeper, with the same 
chroot configuration. [different zk clusters]

 
Jeremy Branham
jb...@allstate.com

On 3/14/19, 10:44 AM, "Branham, Jeremy (Experis)"  wrote:

I’m using Basic Auth on 3 different clusters.
On 2 of the clusters, authorization works fine. A 401 is returned when I 
try to access the core/collection apis.

On the 3rd cluster I can see the authorization failed, but the api results 
are still returned.

Solr.log
2019-03-14 09:25:47.680 INFO  (qtp1546693040-152) [   ] 
o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. failed 
permission {
  "name":"core-admin-read",
  "role":"*"}


I’m using different zookeeper clusters for each solr cluster, but using the 
same security.json contents.
I’ve tried refreshing the ZK node, and bringing the whole Solr cluster down 
and back up.

Is there some sort of caching that could be happening?

I wrote an installation script that I’ve used to setup each cluster, so I’m 
thinking I’ll wipe it out and re-run.
But before I do this, I thought I’d ask the community for input. Maybe a 
bug?


Jeremy Branham
jb...@allstate.com<mailto:jb...@allstate.com>
Allstate Insurance Company | UCV Technology Services | Information Services 
Group





Authorization fails but api still renders

2019-03-14 Thread Branham, Jeremy (Experis)
I’m using Basic Auth on 3 different clusters.
On 2 of the clusters, authorization works fine. A 401 is returned when I try to 
access the core/collection apis.

On the 3rd cluster I can see the authorization failed, but the api results are 
still returned.

Solr.log
2019-03-14 09:25:47.680 INFO  (qtp1546693040-152) [   ] 
o.a.s.s.RuleBasedAuthorizationPlugin request has come without principal. failed 
permission {
  "name":"core-admin-read",
  "role":"*"}


I’m using different zookeeper clusters for each solr cluster, but using the 
same security.json contents.
I’ve tried refreshing the ZK node, and bringing the whole Solr cluster down and 
back up.
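For reference, the refresh is just re-uploading the file with zkcli (ZK host and chroot are placeholders):

    server/scripts/cloud-scripts/zkcli.sh -zkhost zk1:2181/solr -cmd putfile /security.json /path/to/security.json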

Is there some sort of caching that could be happening?

I wrote an installation script that I’ve used to setup each cluster, so I’m 
thinking I’ll wipe it out and re-run.
But before I do this, I thought I’d ask the community for input. Maybe a bug?


Jeremy Branham
jb...@allstate.com
Allstate Insurance Company | UCV Technology Services | Information Services 
Group



Re: Re: Suppress stack trace in error response

2019-02-22 Thread Branham, Jeremy (Experis)
Thanks Edwin – You’re right, I could explain that a bit more.
My security team has run a scan against the SOLR servers and identified a few 
things they want suppressed, one being the stack trace in an error message.

For example –


<response>
  <lst name="responseHeader">
    <int name="status">500</int>
    <int name="QTime">1</int>
    <lst name="params">
      <str>`</str>
    </lst>
  </lst>
  <lst name="error">
    <str name="msg">For input string: "`"</str>
    <str name="trace">java.lang.NumberFormatException: For input string: "`" at
java.lang.NumberFormatException.forInputString(NumberFormatException.java:65)
at …</str>
  </lst>
</response>


I’ve got a long-term solution involving middleware changes, but I’m not sure 
there is a quick fix for this.

 
Jeremy Branham
jb...@allstate.com

On 2/21/19, 9:53 PM, "Zheng Lin Edwin Yeo"  wrote:

Hi,

There's too little information provided in your questions.
You can explain more on the issue or the exception that you are facing.

Regards,
Edwin

On Thu, 21 Feb 2019 at 23:45, Branham, Jeremy (Experis) 
wrote:

> When Solr throws an exception, like when a client sends a badly formed
> query string, is there a way to suppress the stack trace in the error
> response?
>
>
>
> Jeremy Branham
> jb...@allstate.com<mailto:jb...@allstate.com>
> Allstate Insurance Company | UCV Technology Services | Information
> Services Group
>
>




Re: Re: Re: Suppress stack trace in error response

2019-02-22 Thread Branham, Jeremy (Experis)
BTW – Congratulations on joining the PMC!

 
Jeremy Branham
jb...@allstate.com

On 2/22/19, 9:46 AM, "Branham, Jeremy (Experis)"  wrote:

Thanks Jason –
That’s what I was thinking too. It would require some development.

 
Jeremy Branham
jb...@allstate.com

On 2/22/19, 8:50 AM, "Jason Gerlowski"  wrote:

Hi Jeremy,

Unfortunately Solr doesn't offer anything like what you're looking
for, at least that I know of.  There's no sort of global "quiet" or
"suppressStack" option that you can pass on a request to _not_ get the
stacktrace information back.  There might be individual APIs which
offer something like this, but I've never run into them, so I doubt
it.

Best,

Jason

On Thu, Feb 21, 2019 at 10:53 PM Zheng Lin Edwin Yeo
 wrote:
>
> Hi,
>
> There's too little information provided in your questions.
> You can explain more on the issue or the exception that you are 
facing.
>
> Regards,
    > Edwin
    >
> On Thu, 21 Feb 2019 at 23:45, Branham, Jeremy (Experis) 

> wrote:
>
> > When Solr throws an exception, like when a client sends a badly 
formed
> > query string, is there a way to suppress the stack trace in the 
error
> > response?
> >
> >
> >
> > Jeremy Branham
> > jb...@allstate.com<mailto:jb...@allstate.com>
> > Allstate Insurance Company | UCV Technology Services | Information
> > Services Group
> >
> >






Re: Re: Suppress stack trace in error response

2019-02-22 Thread Branham, Jeremy (Experis)
Thanks Jason –
That’s what I was thinking too. It would require some development.

 
Jeremy Branham
jb...@allstate.com

On 2/22/19, 8:50 AM, "Jason Gerlowski"  wrote:

Hi Jeremy,

Unfortunately Solr doesn't offer anything like what you're looking
for, at least that I know of.  There's no sort of global "quiet" or
"suppressStack" option that you can pass on a request to _not_ get the
stacktrace information back.  There might be individual APIs which
offer something like this, but I've never run into them, so I doubt
it.

Best,

Jason

On Thu, Feb 21, 2019 at 10:53 PM Zheng Lin Edwin Yeo
 wrote:
>
> Hi,
>
> There's too little information provided in your questions.
> You can explain more on the issue or the exception that you are facing.
>
> Regards,
    > Edwin
    >
    > On Thu, 21 Feb 2019 at 23:45, Branham, Jeremy (Experis) 

> wrote:
>
> > When Solr throws an exception, like when a client sends a badly formed
> > query string, is there a way to suppress the stack trace in the error
> > response?
> >
> >
> >
> > Jeremy Branham
> > jb...@allstate.com<mailto:jb...@allstate.com>
> > Allstate Insurance Company | UCV Technology Services | Information
> > Services Group
> >
> >




Suppress stack trace in error response

2019-02-21 Thread Branham, Jeremy (Experis)
When Solr throws an exception, like when a client sends a badly formed query 
string, is there a way to suppress the stack trace in the error response?



Jeremy Branham
jb...@allstate.com
Allstate Insurance Company | UCV Technology Services | Information Services 
Group



Re: Re: Delayed/waiting requests

2019-01-15 Thread Branham, Jeremy (Experis)
Hi Gael –

Could you share this information?
Size of the index
Server memory available
Server CPU count
JVM memory settings

You mentioned a cloud configuration of 3 replicas.
Does that mean you have 1 shard with a replication factor of 3?
Do the pauses occur on all 3 servers?
Is the traffic evenly balanced across those servers?

 
Jeremy Branham
jb...@allstate.com


On 1/15/19, 9:50 AM, "Erick Erickson"  wrote:

Well, it was a nice theory anyway.

"Other collections with the same settings"
doesn't really mean much unless those other collections are very similar,
especially in terms of numbers of docs.

You should only see a new searcher opening when you do a
hard-commit-with-opensearcher-true or soft commit.

So what happens when you just try lowering the autowarm
count? I'm assuming you're free to test in some non-prod
system.

Focusing on the hit ratio is something of a red herring. Remember
that each entry in your filterCache is roughly maxDoc/8 + a little
overhead, the increase in GC pressure has to be balanced
against getting the hits from the cache.

Now, all that said if there's no correlation, then you need to put
a profiler on the system when you see this kind of thing and
find out where the hotspots are, otherwise it's guesswork and
I'm out of ideas.

Best,
Erick

On Tue, Jan 15, 2019 at 12:06 AM Gael Jourdan-Weil
 wrote:
>
> Hi Erick,
>
>
> Thank you for your detailed answer, I better understand autowarming.
>
>
> We have an autowarming time of ~10s for filterCache (queryResultCache is 
not used at all, ratio = 0.02).
>
> We increased the size of the filterCache from 6k to 12k (and autowarming 
size set to same values) to have a better ratio which is _only_ around 
0.85/0.90.
>
>
> The thing I don't understand is I should see "Opening new searcher" in 
the logs everytime a new searcher is opened and thus an autowarming happens, 
right?
>
> But I don't see "Opening new searcher" very often, and I don't see it 
being correlated with the response time peaks.
>
>
> Also, I didn't mention it earlier but, we have other SolrCloud clusters 
with similar settings and load (~10s filterCache autowarming, 10k entries) and 
we don't observe the same behavior.
>
>
> Regards,
>
> 
> De : Erick Erickson 
> Envoyé : lundi 14 janvier 2019 17:44:38
> À : solr-user
> Objet : Re: Delayed/waiting requests
>
> Gael:
>
> bq. Nevertheless, our filterCache is set to autowarm 12k entries which
> is also the maxSize
>
> That is far, far, far too many. Let's assume you actually have 12K
> entries in the filterCache.
> Every time you open a new searcher, 12K queries are executed _before_
> the searcher
> accepts any new requests. While being able to re-use a filterCache
> entry is useful, one of
> the primary purposes is to pre-load index data from disk into memory
> which can be
> the event that takes the most time.
>
> The queryResultCache has a similar function. I often find that this
> cache doesn't have a
> very high hit ratio, but again executing a _few_ of these queries
> warms the index from
> disk.
>
> I think of both caches as a map, where the key is the "thing", (fq
> clause in the case
> of filterCache, the whole query in the case of the queryResultCache).
> Autowarming
> replays the most recently executed N of these entries, essentially
> just as though
> they were submitted by a user.
>
> Hypothesis: You're massively over-warming, and when that kicks in you're 
seeing
> increased CPU and GC pressure leading to the anomalies you're seeing. 
Further,
> you have such excessive autowarming going on that it's hard to see the
> associated messages in the log.
>
> Here's what I'd recommend: Set your autowarm counts to something on the 
order
> of 16. If the culprit is just excessive autowarming, I'd expect your 
spikes to
> be much less severe. It _might_ be that your users see some increased 
(very
> temporary) variance in response time. You can tell that the autowarming
> configurations are "more art than science", I can't give you any other
> recommendations than "start small and increase until you're happy"
> unfortunately.
>
> I usually do this with some kind of load tester in a dev lab of course ;).
>
> Finally, if you use the metrics data (see:
> 
https://lucene.apache.org/solr/guide/7_1/metrics-reporting.html)
> you can see the autowarm times. Don't get too lost in 

Re: Re: Re: Page faults

2019-01-09 Thread Branham, Jeremy (Experis)
Thanks for the information Erick –
I’ve learned there are 2 ‘classes’ of documents being stored in this collection.
There are about 4x as many documents in class A as class B.
When the documents are indexed, the document ID includes the key prefix like 
‘A/1!’ or ‘B/1!’, which I understand spreads the documents over ½ of the 
available shards.

I don’t suppose there is a way to say “I want 75% of the shards to store class 
A, and 25% to store class B”.
If we dropped the ‘/1’ from the prefix, all the documents would be indexed on a 
single shard, correct?


Currently, half the servers are under heavy load, and the other half are 
under-utilized. [8 servers total, 4 shards with replication factor of 2]
I’ve considered a few remedies, but I’m not sure which would be best.

We could drop the document ID prefix and let SOLR distribute the documents 
evenly, then use a discriminator field to filter queries.
- Requires re-indexing
- Code changes in our APIs and indexing process
We could create 2 separate collections.
- Requires re-indexing
- Code changes in our APIs and indexing process
- Lost ability to query all the docs at once
We could split the shards.
- More than 1 shard would be on a node. What if we end up with 2 big replicas 
on a single node?

If we split the shards, I’m unsure how the prefix would work in this scenario.
Would ‘A/1!’ continue to use the original shard range?

Like if we split just the 2 big shards –
4 shards become 6
Does ‘A/1!’ spread the documents across 3 shards [half of the new total] or 
across the 4 new shards?

Or if we split all 4 shards, ‘A/1!’ should spread across 8 shards, which would 
be half of the new total.
Could it be difficult trying to balance 8 shards across 8 servers?
I’m concerned 2 big shards would end up on the same server, and we would have 
imbalance again.

I think dropping the prefix all-together would be the easiest to maintain and 
scale, but has a code-impact on our apps.
Or maybe I’m over-thinking the complexity of splitting the shards, and they 
will balance out naturally.

I’ll split the shards in our test environment to see what happens.
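For the record, the test will just use the collections API, something like this (host is a placeholder; the collection and shard names come from our config), then the same call for the other large shard once the first split finishes:

    curl 'http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=pdv201806&shard=shard1&async=split-shard1'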

 
Jeremy Branham
jb...@allstate.com

On 1/7/19, 6:13 PM, "Erick Erickson"  wrote:

having some replicas at 90G and some at 18G is totally unexpected with
compisiteID routing unless you're using "multi-level routing", see:

https://lucidworks.com/2014/01/06/multi-level-composite-id-routing-solrcloud/

But let's be clear what we're talking about here. I'm talking about
specifically the size of the index on disk for any particular
_replica_, meaning the size in places similar to:
pdv201806_shard1_replica1/data/index. I've never seen as much
disparity as you're talking about so we should get to the bottom of
that.

Do you have massive numbers of deleted docs in any of those shards?
The admin screen for any particular replica will show this number.


On another note: Your cache sizes are probably not part of the page
fault question, but on the surface they're badly misconfigured, at
least the filterCache and queryResultCache. Each entry in the
filterCache is a map entry, the key is roughly the query and the value
is bounded by maxDoc/8. So if you have, say, 8M documents, your
filterCache could theoretically be 1M each (give or take) and you
could have up to 20,000 of them. You're probably just being lucky and
either not having very many distinct fq clauses or are indexing often
enough that it isn't growing for very long before being flushed.

Your queryResultCache takes up a lot less space, but still it's quite
large. It has two primary purposes:
> paging. It generally stores a few integers (40 is common, maybe several 
hundred but who cares?) so hitting the next page won't have to search again. 
This isn't terribly important in modern installations.

> being used in autowarming to pre-load parts of the index into memory.

I'd consider knocking each of these back to the defaults (512), except
I'd put the autowarm count at, say, 16 or so.

The document cache is less clear, the recommendation is (number of
simultaneous queries you expect) X (your average row parameter)

Best,
Erick

On Mon, Jan 7, 2019 at 12:43 PM Branham, Jeremy (Experis)
 wrote:
>
> Thanks Erick/Chris for the information.
> The page faults are occurring on each node of the cluster.
> These are VMs running SOLR v7.2.1 on RHEL 7. CPUx8, 64GB mem.
>
> We’re collecting GC information and using a DynaTrace agent, so I’m not 
sure if / how much that contributes to the overhead.
>
> This clu

Re: Re: Page faults

2019-01-07 Thread Branham, Jeremy (Experis)
Thanks Erick/Chris for the information.
The page faults are occurring on each node of the cluster.
These are VMs running SOLR v7.2.1 on RHEL 7. CPUx8, 64GB mem.

We’re collecting GC information and using a DynaTrace agent, so I’m not sure if 
/ how much that contributes to the overhead.

This cluster is used strictly for type-ahead/auto-complete functionality. 

I’ve also just noticed that the shards are imbalanced – 2 having about 90GB and 
2 having about 18GB of data.
Having just joined this team, I’m not too familiar yet with the documents or 
queries/updates [and maybe not relevant to the page faults]. 
Although, I did check the schema, and most of the fields are stored=true, 
docValues=true

Solr v7.2.1
OS: RHEL 7

Collection Configuration - 
Shard count: 4
configName: pdv201806
replicationFactor: 2
maxShardsPerNode: 1
router: compositeId
autoAddReplicas: false

Cache configuration –
<filterCache class="solr.FastLRUCache"
             size="20000"
             initialSize="5000"
             autowarmCount="10"/>
<queryResultCache class="solr.LRUCache"
                  size="5000"
                  initialSize="1000"
                  autowarmCount="0"/>
<documentCache class="solr.LRUCache"
               size="15000"
               initialSize="512"/>

enableLazyFieldLoading=true


JVM Information/Configuration –
java.runtime.version: 1.8.0_162-b12

-XX:+CMSParallelRemarkEnabled
-XX:+CMSScavengeBeforeRemark
-XX:+ParallelRefProcEnabled
-XX:+PrintGCApplicationStoppedTime
-XX:+PrintGCDateStamps
-XX:+PrintGCDetails
-XX:+PrintGCTimeStamps
-XX:+PrintHeapAtGC
-XX:+PrintTenuringDistribution
-XX:+ScavengeBeforeFullGC
-XX:+UseCMSInitiatingOccupancyOnly
-XX:+UseConcMarkSweepGC
-XX:+UseGCLogFileRotation
-XX:+UseParNewGC
-XX:-OmitStackTraceInFastThrow
-XX:CMSInitiatingOccupancyFraction=70
-XX:CMSMaxAbortablePrecleanTime=6000
-XX:ConcGCThreads=4
-XX:GCLogFileSize=20M
-XX:MaxTenuringThreshold=8
-XX:NewRatio=3
-XX:ParallelGCThreads=8
-XX:PretenureSizeThreshold=64m
-XX:SurvivorRatio=4
-XX:TargetSurvivorRatio=90
-Xms16g
-Xmx32g
-Xss256k
-verbose:gc


 
Jeremy Branham
jb...@allstate.com

On 1/7/19, 1:16 PM, "Christopher Schultz"  wrote:


Erick,

On 1/7/19 11:52, Erick Erickson wrote:
> Images do not come through, so we don't see what you're seeing.
> 
> That said, I'd expect page faults to happen:
> 
> 1> when indexing. Besides what you'd expect (new segments written
> to disk), there's segment merging going on in the background which
> has to read segments from disk in order to merge.
> 
> 2> when querying, any fields returned as part of a doc that has
> stored=true docValues=false will require a disk access to get the
> stored data.

A page fault is not necessarily a disk access. It almost always *is*,
but it's not because the application is calling fopen(). It's because
the OS is performing a memory operation which often results in a dip
into virtual memory.

Jeremy, are these page-faults occurring on all the machines in your
cluster, or only some? What is the hardware configuration of each
machine (specifically, memory)? What are your JVM settings for your
Solr instances? Is anything else running on these nodes?

It would help to understand what's happening on your servers. "I'm
seeing page faults" doesn't really help us help you.

Thanks,
- -chris

> On Mon, Jan 7, 2019 at 8:35 AM Branham, Jeremy (Experis) 
>  wrote:
>> 
>> Does anyone know if it is typical behavior for a SOLR cluster to
>> have lots of page faults (50-100 per second) under heavy load?
>> 
>> We are performing load testing on a cluster with 8 nodes, and my
>> performance engineer has brought this information to attention.
>> 
>> I don’t know enough about memory management to say it is normal
>> or not.
>> 
>> 
>> 
>> The performance doesn’t appear to be suffering, but I don’t want
>> to overlook a potential hazard.
>> 
>> 
>> 
>> Thanks!
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> 
>> Jeremy Branham
>> 
>> jb...@allstate.com
>> 
>> Allstate Insurance Company | UCV Technology Services |
>> Information Services Group
>> 
>> 
> 
 

Page faults

2019-01-07 Thread Branham, Jeremy (Experis)
Does anyone know if it is typical behavior for a SOLR cluster to have lots of 
page faults (50-100 per second) under heavy load?
We are performing load testing on a cluster with 8 nodes, and my performance 
engineer has brought this information to attention.
I don’t know enough about memory management to say it is normal or not.

The performance doesn’t appear to be suffering, but I don’t want to overlook a 
potential hazard.

Thanks!




Jeremy Branham
jb...@allstate.com
Allstate Insurance Company | UCV Technology Services | Information Services 
Group