auto complete search - slow response

2019-07-22 Thread Satya Marivada
Hi All,

We have a stable Solr 6.3.0 SolrCloud deployment. We use an autocomplete
search feature that fires after 3 keystrokes. Recently we ran into very slow
responses from Solr when one of our users triggered autocomplete by typing
the initial letters of the weekdays, over and over.

Something like "S M T W T F S S M T W T F S .."

Is there a recommended approach, from the Solr side, to prevent such
scenarios? We are thinking of rejecting any query that contains more than 4
or 5 single-letter terms, since that is not a legitimate use case for us.

Any recommendations would be appreciated.
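Solr itself has no built-in guard for this pattern, so the usual place for such a rule is the client or gateway in front of Solr. Below is a minimal sketch of a single-letter-term limit of this kind; the function name and default threshold are illustrative, not an existing Solr feature:

```shell
#!/bin/sh
# Reject queries containing more than MAX single-letter terms before they
# are forwarded to Solr. Function name and default threshold are illustrative.
is_degenerate_query() {
  query=$1
  max_single=${2:-4}
  count=0
  for tok in $query; do            # relies on default IFS word splitting
    [ "${#tok}" -eq 1 ] && count=$((count + 1))
  done
  [ "$count" -gt "$max_single" ]   # true => reject
}

if is_degenerate_query "S M T W T F S S M T W T F S"; then
  echo "rejected"                  # this input has 14 single-letter terms
else
  echo "forwarded to Solr"
fi
```

Enforcing the check in the autocomplete frontend also gives the user immediate feedback instead of a slow request.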

Thanks,
Satya


Re: Re: obfuscated password error

2019-03-20 Thread Satya Marivada
Sending again, with highlighted text in yellow.

I got a chance to diff the contents of the solr-6.3.0 folders between the
two environments.

The solr-6.3.0/bin/solr file has the difference highlighted in yellow. Any
idea what is going on in that if/else in the solr script?

*The working configuration file contents are (ssl.properties below has the
keystore path and password repeated):*

SOLR_SSL_OPTS=""
if [ -n "$SOLR_SSL_KEY_STORE" ]; then
  SOLR_JETTY_CONFIG+=("--module=https")
  SOLR_URL_SCHEME=https
  SOLR_SSL_OPTS=" -Dsolr.jetty.keystore=$SOLR_SSL_KEY_STORE \
    -Dsolr.jetty.keystore.password=$SOLR_SSL_KEY_STORE_PASSWORD \
    -Dsolr.jetty.truststore=$SOLR_SSL_TRUST_STORE \
    -Dsolr.jetty.truststore.password=$SOLR_SSL_TRUST_STORE_PASSWORD \
    -Dsolr.jetty.ssl.needClientAuth=$SOLR_SSL_NEED_CLIENT_AUTH \
    -Dsolr.jetty.ssl.wantClientAuth=$SOLR_SSL_WANT_CLIENT_AUTH"
  if [ -n "$SOLR_SSL_CLIENT_KEY_STORE" ]; then
    SOLR_SSL_OPTS+=" -Djavax.net.ssl.keyStore=$SOLR_SSL_CLIENT_KEY_STORE \
      -Djavax.net.ssl.keyStorePassword=$SOLR_SSL_CLIENT_KEY_STORE_PASSWORD \
      -Djavax.net.ssl.trustStore=$SOLR_SSL_CLIENT_TRUST_STORE \
      -Djavax.net.ssl.trustStorePassword=$SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD"
  else
    SOLR_SSL_OPTS+=" -Dcom.sun.management.jmxremote.ssl.config.file=/sanfs/mnt/vol01/solr/solr-6.3.0/server/etc/ssl.properties"
  fi
else
  SOLR_JETTY_CONFIG+=("--module=http")
fi


*Not working one (in the else branch it re-assigns the javax.net.ssl.*
properties with the OBF: keystore password; presumably only Jetty
de-obfuscates OBF: strings, so the JVM's default SSL context treats the OBF:
value as the literal password and fails):*



SOLR_SSL_OPTS=""
if [ -n "$SOLR_SSL_KEY_STORE" ]; then
  SOLR_JETTY_CONFIG+=("--module=https")
  SOLR_URL_SCHEME=https
  SOLR_SSL_OPTS=" -Dsolr.jetty.keystore=$SOLR_SSL_KEY_STORE \
    -Dsolr.jetty.keystore.password=$SOLR_SSL_KEY_STORE_PASSWORD \
    -Dsolr.jetty.truststore=$SOLR_SSL_TRUST_STORE \
    -Dsolr.jetty.truststore.password=$SOLR_SSL_TRUST_STORE_PASSWORD \
    -Dsolr.jetty.ssl.needClientAuth=$SOLR_SSL_NEED_CLIENT_AUTH \
    -Dsolr.jetty.ssl.wantClientAuth=$SOLR_SSL_WANT_CLIENT_AUTH"
  if [ -n "$SOLR_SSL_CLIENT_KEY_STORE" ]; then
    SOLR_SSL_OPTS+=" -Djavax.net.ssl.keyStore=$SOLR_SSL_CLIENT_KEY_STORE \
      -Djavax.net.ssl.keyStorePassword=$SOLR_SSL_CLIENT_KEY_STORE_PASSWORD \
      -Djavax.net.ssl.trustStore=$SOLR_SSL_CLIENT_TRUST_STORE \
      -Djavax.net.ssl.trustStorePassword=$SOLR_SSL_CLIENT_TRUST_STORE_PASSWORD"
  else
    SOLR_SSL_OPTS+=" -Djavax.net.ssl.keyStore=$SOLR_SSL_KEY_STORE \
      -Djavax.net.ssl.keyStorePassword=$SOLR_SSL_KEY_STORE_PASSWORD \
      -Djavax.net.ssl.trustStore=$SOLR_SSL_TRUST_STORE \
      -Djavax.net.ssl.trustStorePassword=$SOLR_SSL_TRUST_STORE_PASSWORD"
  fi




Re: Re: obfuscated password error

2019-03-19 Thread Satya Marivada
Hi Jeremy,

Thanks for the pointers. Agreed, there must be some conflicting property
somewhere that keeps it from working. I restored the solr-6.3.0 directory
from another environment, replaced the host name appropriately for this
environment, and kept the keystore originally generated for this
environment, and it worked fine. So the keystore itself is good; some
conflicting property is preventing the de-obfuscation from working
correctly.

Thanks,
Satya

On Mon, Mar 18, 2019 at 2:32 PM Branham, Jeremy (Experis) <
jb...@allstate.com> wrote:

> I’m not sure if you are sharing the trust/keystores, so I may be off-base
> here…
>
> Some thoughts –
> - Verify your VM arguments, to be sure there aren’t conflicting SSL
> properties.
> - Verify the environment is targeting the correct version of Java
> - Verify the trust/key stores exist where they are expected, and you can
> list the contents with the keytool
> - Verify the correct CA certs are trusted
>
>
> Jeremy Branham
> jb...@allstate.com
>
> On 3/18/19, 1:08 PM, "Satya Marivada"  wrote:
>
> Any suggestions please.
>
> Thanks,
> Satya

Re: Re: obfuscated password error

2019-03-19 Thread Satya Marivada
The keystore was generated with the plain password. It is the same in the
other environments too, and it works there.

Thanks,
Satya

On Mon, Mar 18, 2019, 10:42 PM Zheng Lin Edwin Yeo 
wrote:

> Hi,
>
> Did you generate your keystore with the obfuscated password or the plain
> text password?
>
> Regards,
> Edwin

Re: obfuscated password error

2019-03-18 Thread Satya Marivada
Any suggestions please.

Thanks,
Satya


obfuscated password error

2019-03-18 Thread Satya Marivada
Hi All,

Using solr-6.3.0. To obfuscate the password, I used the Jetty util:

java -cp jetty-util-9.3.8.v20160314.jar \
     org.eclipse.jetty.util.security.Password mypassword

The output has been used in solr.in.sh as below:


SOLR_SSL_KEY_STORE=/sanfs/mnt/vol01/solr/solr-6.3.0/server/etc/solr-ssl.keystore.jks

SOLR_SSL_KEY_STORE_PASSWORD="OBF:1bcd1l161lts1ltu1uum1uvk1lq41lq61k221b9t"

SOLR_SSL_TRUST_STORE=/sanfs/mnt/vol01/solr/solr-6.3.0/server/etc/solr-ssl.keystore.jks

SOLR_SSL_TRUST_STORE_PASSWORD="OBF:1bcd1l161lts1ltu1uum1uvk1lq41lq61k221b9t"

Solr does not start, failing with the exception below. Any suggestions? If I
use the plain-text password, it works fine. Also, the same setup with the
obfuscated password works in every other environment; only this one throws
the exception. System-level patches were applied recently; mentioning it
just in case, though I don't think they should matter.

Caused by: java.net.SocketException: java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: sun.security.ssl.SSLContextImpl$DefaultSSLContext)
        at javax.net.ssl.DefaultSSLSocketFactory.throwException(SSLSocketFactory.java:248)
        at javax.net.ssl.DefaultSSLSocketFactory.createSocket(SSLSocketFactory.java:255)
        at org.apache.http.conn.ssl.SSLSocketFactory.createSocket(SSLSocketFactory.java:513)
        at org.apache.http.conn.ssl.SSLSocketFactory.createSocket(SSLSocketFactory.java:383)
        at org.apache.http.impl.conn.DefaultClientConnectionOperator.openConnection(DefaultClientConnectionOperator.java:165)
        at org.apache.http.impl.conn.ManagedClientConnectionImpl.open(ManagedClientConnectionImpl.java:304)
        at org.apache.http.impl.client.DefaultRequestDirector.tryConnect(DefaultRequestDirector.java:611)
        at org.apache.http.impl.client.DefaultRequestDirector.execute(DefaultRequestDirector.java:446)
        at org.apache.http.impl.client.AbstractHttpClient.doExecute(AbstractHttpClient.java:882)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:82)
        at org.apache.http.impl.client.CloseableHttpClient.execute(CloseableHttpClient.java:55)
        at org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:498)
        ... 11 more
Caused by: java.security.NoSuchAlgorithmException: Error constructing implementation (algorithm: Default, provider: SunJSSE, class: sun.security.ssl.SSLContextImpl$DefaultSSLContext)
        at java.security.Provider$Service.newInstance(Provider.java:1617)
        at sun.security.jca.GetInstance.getInstance(GetInstance.java:236)
        at sun.security.jca.GetInstance.getInstance(GetInstance.java:164)
        at javax.net.ssl.SSLContext.getInstance(SSLContext.java:156)
        at javax.net.ssl.SSLContext.getDefault(SSLContext.java:96)

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
        at org.eclipse.jetty.start.Main.invokeMain(Main.java:214)
        at org.eclipse.jetty.start.Main.start(Main.java:457)
        at org.eclipse.jetty.start.Main.main(Main.java:75)
Caused by: java.io.IOException: Keystore was tampered with, or password was incorrect
        at sun.security.provider.JavaKeyStore.engineLoad(JavaKeyStore.java:785)
        at sun.security.provider.JavaKeyStore$JKS.engineLoad(JavaKeyStore.java:56)
        at sun.security.provider.KeyStoreDelegator.engineLoad(KeyStoreDelegator.java:224)
        at sun.security.provider.JavaKeyStore$DualFormatJKS.engineLoad(JavaKeyStore.java:70)
        at java.security.KeyStore.load(KeyStore.java:1445)
        at sun.security.ssl.TrustManagerFactoryImpl.getCacertsKeyStore(TrustManagerFactoryImpl.java:226)
        at sun.security.ssl.SSLContextImpl$DefaultManagersHolder.getTrustManagers(SSLContextImpl.java:877)
        at sun.security.ssl.SSLContextImpl$DefaultManagersHolder.<init>(SSLContextImpl.java:854)
        at sun.security.ssl.SSLContextImpl$DefaultSSLContext.<init>(SSLContextImpl.java:1019)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
        at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
        at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
        at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
        at java.security.Provider$Service.newInstance(Provider.java:1595)
        at sun.security.jca.GetInstance.getInstance(GetInstance.java:236)
        at sun.security.jca.GetInstance.getInstance(GetInstance.java:164)
        at javax.net.ssl.SSLContext.getInstance(SSLContext.java:156)

Thanks,
Satya


How to estimate Java Heap Requirement for solr.

2019-01-24 Thread Satya Nand kanodia
Hi,

I have a Solr instance with 6 cores, and I have given it -Xms1024m -Xmx16g
of heap.


*The cores have the following document counts:*
1. 86,31,043
2. 6,59,61,263
3. 4,55,31,492
4. 21,10,087
5. 1,14,477
6. 33,397


*I have the following cache configuration* (the cache XML elements did not
survive the plain-text mail).

My question: is the heap size I have given adequate? If not, what should
the required heap size be?
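There is no exact formula for this: heap demand is driven far more by cache sizes, sorting and faceting on non-docValues fields, and indexing buffers than by raw document count. Still, corpus size is a starting point; reading the counts above as Indian-style lakh/crore digit grouping (an assumption), the six cores hold roughly 122 million documents in total:

```shell
# Per-core counts from above, assuming lakh/crore grouping
# (e.g. 86,31,043 = 8,631,043).
total=$((8631043 + 65961263 + 45531492 + 2110087 + 114477 + 33397))
echo "total documents: $total"
```

The common advice is to measure actual heap use under production load (e.g. via GC logs) and size the heap to that plus headroom. Also note the wide -Xms/-Xmx gap (1 GB to 16 GB) costs extra GC work while the heap grows, which is why many deployments set the two equal.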


Re: error on health check from command line

2018-07-10 Thread Satya Marivada
Additional information: Using solr-6.3.0

Also tried the following, with no luck:

./bin/solr healthcheck -c poi -z host1:2181,host2:2181,host3:2181
SOLR_AUTH_TYPE="basic"
SOLR_AUTHENTICATION_OPTS="-Dbasicauth=username:password"
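One possible explanation, judging only from the command shown above: the two variables are placed after the command, so the shell never exports them and bin/solr never sees them. They are meant to be set in the environment (or in solr.in.sh) before the command runs. A sketch, assuming SOLR_AUTH_TYPE / SOLR_AUTHENTICATION_OPTS behave as described in the solr.in.sh comments:

```shell
# Export the credentials first, then invoke the command so bin/solr
# inherits them; appended after the command they are just extra arguments.
export SOLR_AUTH_TYPE=basic
export SOLR_AUTHENTICATION_OPTS="-Dbasicauth=username:password"
# ./bin/solr healthcheck -c poi -z host1:2181,host2:2181,host3:2181
```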

On Tue, Jul 10, 2018 at 9:09 AM Satya Marivada 
wrote:

> Hi,
>
> How do I supply the basic auth credentials for the below healthcheck that
> has to be done from command line?
>
> $ ./bin/solr healthcheck -c poi -z host1:2181,host2:2181,host3:2181
>
> ERROR: Solr requires authentication for https://host1:15101/solr/shard1/.
> Please supply valid credentials. HTTP code=401
>
> Thanks,
> Satya
>


error on health check from command line

2018-07-10 Thread Satya Marivada
Hi,

How do I supply basic auth credentials for the healthcheck below, which has
to be run from the command line?

$ ./bin/solr healthcheck -c poi -z host1:2181,host2:2181,host3:2181

ERROR: Solr requires authentication for https://host1:15101/solr/shard1/.
Please supply valid credentials. HTTP code=401

Thanks,
Satya


Re: some solr replicas down

2018-06-20 Thread Satya Marivada
Chris,

You are spot on about the timestamps. The date command returns different
times on these VMs, and they are not in sync via NTP; ntpstat shows a
difference of about 8-10 seconds across the 4 VMs, which would cause the
synchronization issue and get the replicas marked as down. This started only
recently; things were working fine earlier. We will get the clock sync fixed
and hope everything falls into place.

Yes, there are some other errors too: JavaBin expected character 2 but is
getting 60, which is "<" (the start of an HTML response).

Thanks,
Satya

On Tue, Jun 19, 2018 at 8:01 AM Chris Ulicny  wrote:

> Satya,
>
> There should be some other log messages that are probably relevant to the
> issue you are having. Something along the lines of "leader cannot
> communicate with follower...publishing replica as down." It's likely there
> also is a message of "expecting json/xml but got html" in another
> instance's logs.
>
> We've seen this problem in various scenarios in our own clusters, usually
> during high volumes of requests, and what seems to be happening to us is
> the following.
>
> Since authentication is enabled, all requests between nodes must be
> authenticated, and Solr is using a timestamp to do this (in some way, not
> sure on the details). When the recipient of the request processes it, the
> timestamp is checked to see if it is within the Time-To-Live (TTL)
> millisecond value (default of 5000). If the timestamp is too old, the
> request is rejected with the above error and a response of 401 is delivered
> to the sender.
>
> When a request is sent from the leader to the follower and receives a 401
> response, the leader becomes too proactive sometimes and declares the
> replica down. In older versions (6.3.0), it seems that the replica will
> never recover automatically (manually delete the down replicas and add new
> ones to fix). Fortunately, as of 7.2.1 (maybe earlier) the down replicas
> will usually start to recover at some point (and the leaders seem less
> proactive to declare replicas down). Although, we have had cases where they
> did not recover after being down for hours on 7.2.1.
>
> Likely the solution to the problem is to increase the TTL value by adding
> the line
>
> SOLR_OPTS="$SOLR_OPTS -Dpkiauth.ttl=##"
>
> to the solr environment file (solr.in.sh) on each node and restarting
> them.
> Replace # with some millisecond value of your choice. I'd suggest just
> increasing it by intervals of 5s to start. If this does not fix your
> problem, then there is likely too much pressure on your hardware for some
> reason or another.
>
> Hopefully that helps.
>
> If anyone with more knowledge about the authentication plugin has
> corrections, wants fill in gaps, or has an idea to figure out what requests
> cause this issue. It'd be greatly appreciated.
>
> Best,
> Chris
>
> On Mon, Jun 18, 2018 at 9:38 AM Satya Marivada 
> wrote:
>
> > Hi, We are using solr 6.3.0 and a collection has 3 of 4 replicas down
> and 1
> > is up and serving.
> >
> > I see a single line error repeating in logs as below. nothing else
> specific
> > exception apart from it. Wondering what this below message is saying, is
> it
> > the cause of nodes being down, but saw that this happened even before the
> > repllicas went down.
> >
> > 2018-06-18 04:45:51.818 ERROR (qtp1528637575-27215) [c:poi s:shard1
> > r:core_node5 x:poi_shard1_replica3] o.a.s.s.PKIAuthenticationPlugin
> Invalid
> > key request timestamp: 1529297138215 , received timestamp: 1529297151817
> ,
> > TTL: 5000
> >
> > Thanks,
> > Satya
> >
>
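The solr.in.sh change Chris describes can be sketched as follows. The 10-second value, the use of /tmp for the dry run, and the file path are assumptions, not from the thread; on a real node you would edit `<solr>/bin/solr.in.sh` on every node and restart Solr.

```shell
# Dry-run sketch of the pkiauth.ttl fix: append the option to a scratch
# copy of solr.in.sh and verify that the line landed.
SOLR_IN_SH=/tmp/solr.in.sh.example
echo 'SOLR_OPTS="$SOLR_OPTS -Dpkiauth.ttl=10000"' >> "$SOLR_IN_SH"
grep 'pkiauth.ttl' "$SOLR_IN_SH"
```

If requests still arrive with timestamps older than the TTL after restarting, keep raising the value in 5s steps as suggested above.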


some solr replicas down

2018-06-18 Thread Satya Marivada
Hi, We are using solr 6.3.0 and a collection has 3 of 4 replicas down and 1
is up and serving.

I see a single-line error repeating in the logs as below; no other specific
exception apart from it. Wondering what this message is saying: is it
the cause of the nodes being down? I saw that this happened even before the
replicas went down.

2018-06-18 04:45:51.818 ERROR (qtp1528637575-27215) [c:poi s:shard1
r:core_node5 x:poi_shard1_replica3] o.a.s.s.PKIAuthenticationPlugin Invalid
key request timestamp: 1529297138215 , received timestamp: 1529297151817 ,
TTL: 5000

Thanks,
Satya


Re: inconsistent results

2018-05-03 Thread Satya Marivada
Yes, we are doing a clean and full import. Is it not supposed to serve the
old (existing) index till the new index is built, and only then clean up
and replace the old index?

Would a full import without clean not give this problem?

Thanks Erick, this would be useful.

On Thu, May 3, 2018, 4:28 PM Erick Erickson <erickerick...@gmail.com> wrote:

> The short for is that different replicas in a shard have different
> commit point if you go by wall-clock time. So during heavy indexing,
> you can happen to catch the different counts. That really shouldn't
> happen, though, unless you're clearing the index first on the
> assumption that you're replacing the same docs each time
>
> One solution people use is to index to a "dark" collection, then use
> collection aliasing to atomically switch when the job is done.
>
> Best,
> Erick
>
>
> On Thu, May 3, 2018 at 11:55 AM, Satya Marivada
> <satya.chaita...@gmail.com> wrote:
> > Hi there,
> >
> > We have a solr (6.3.0) index which is being re-indexed every night; it
> > takes about 6-7 hours for the indexing to complete. During re-indexing,
> > the index becomes flaky and serves an inconsistent count of documents:
> > 70,000 at times and 80,000 at other times. After the indexing is
> > completed, it serves the consistent and correct number of documents
> > that it has indexed from the database. Any suggestions on this?
> >
> > Also, solr writes to the same location as the current index during
> > re-indexing. Could this be a cause for concern?
> >
> > Thanks,
> > Satya
>
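Erick's alias-swap approach can be sketched with the Collections API CREATEALIAS action. The host, alias, and collection names below are hypothetical, and the request URL is only echoed here (a dry run), not sent.

```shell
# Index into a fresh "dark" collection first (e.g. myindex_20180504),
# then atomically repoint the serving alias at it.
ALIAS_URL="http://localhost:8983/solr/admin/collections?action=CREATEALIAS&name=myindex&collections=myindex_20180504"
echo "$ALIAS_URL"
# To apply: curl "$ALIAS_URL"
# Queries against /solr/myindex switch to the new collection in one step,
# so readers never see a half-built index.
```

Once the swap is verified, the previous night's collection can be deleted.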


inconsistent results

2018-05-03 Thread Satya Marivada
Hi there,

We have a solr (6.3.0) index which is being re-indexed every night; it
takes about 6-7 hours for the indexing to complete. During re-indexing,
the index becomes flaky and serves an inconsistent count of documents:
70,000 at times and 80,000 at other times. After the indexing is
completed, it serves the consistent and correct number of documents that it
has indexed from the database. Any suggestions on this?

Also, solr writes to the same location as the current index during
re-indexing. Could this be a cause for concern?

Thanks,
Satya


solr index replace with index from another environment

2017-08-28 Thread Satya Marivada
Hi there,

We are using solr-6.3.0 and need to replace the solr index in
production with the solr index from another environment on a periodic
basis. But the jvms have to be recycled for the updated index to take
effect. Is there any way this can be achieved without restarting the jvms?

Using aliases as described below is an alternative, but I don't think
it is useful in my case, where I already have the index from the other
environment ready. If I build a new collection and replace its index, the
jvms again need to be restarted for the new index to take effect.

https://stackoverflow.com/questions/45158394/replacing-old-indexed-data-with-new-data-in-apache-solr-with-zero-downtime

Any other suggestions please.

Thanks,
satya
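One option worth testing (my suggestion, not confirmed in this thread): after swapping the index files on disk, a Collections API RELOAD reopens the cores and their searchers without restarting the JVMs. Host and collection name below are placeholders, and the URL is only echoed (a dry run).

```shell
# Build the RELOAD request for the collection whose index was replaced.
RELOAD_URL="http://localhost:8983/solr/admin/collections?action=RELOAD&name=mycollection"
echo "$RELOAD_URL"
# To apply, after copying the new index into the replica data dirs:
#   curl "$RELOAD_URL"
```

Verify the swap worked before deleting the old index files.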


Re: indexed vs queried documents count mismatch

2017-08-24 Thread Satya Marivada
Source database: 17,920,274 records in db
Indexed documents from admin screen: 17,920,274
Query the collection: 17,948,826

Thanks,
Satya

On Thu, Aug 24, 2017 at 3:44 PM Susheel Kumar <susheel2...@gmail.com> wrote:

> Does this happen again if you repeat above? How much total docs does DIH
> query/source shows to compare with Solr?
>
> On Thu, Aug 24, 2017 at 3:23 PM, Satya Marivada <satya.chaita...@gmail.com
> >
> wrote:
>
> > Hi,
> >
> > I have a weird situation, when I index the documents from admin console
> by
> > doing "clean, commit and optimize", when the indexing is completed, it
> > showed 17,920,274 documents are indexed.
> >
> > When queried from the solr admin console from the query tab for all the
> > documents in that collection, it shows 17,948,826 documents. It is so
> weird
> > that the query for all documents returns 28552 documents more.
> >
> > Any suggestions/thoughts.
> >
> > Thanks,
> > Satya
> >
>
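To do the comparison Susheel asks for, the total Solr sees can be read from `numFound` on a `rows=0` query. The response below is a canned example using the figure from this thread; normally it would come from `curl ".../select?q=*:*&rows=0&wt=json"` against a hypothetical host and collection.

```shell
# Extract numFound from a (canned) Solr JSON response.
RESPONSE='{"response":{"numFound":17948826,"start":0,"docs":[]}}'
NUMFOUND=$(echo "$RESPONSE" | grep -o '"numFound":[0-9]*' | cut -d: -f2)
echo "numFound=$NUMFOUND"
# If numFound exceeds the source count, the usual suspect is uniqueKey
# values that changed between runs, leaving stale docs behind.
```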


indexed vs queried documents count mismatch

2017-08-24 Thread Satya Marivada
Hi,

I have a weird situation, when I index the documents from admin console by
doing "clean, commit and optimize", when the indexing is completed, it
showed 17,920,274 documents are indexed.

When queried from the solr admin console from the query tab for all the
documents in that collection, it shows 17,948,826 documents. It is so weird
that the query for all documents returns 28552 documents more.

Any suggestions/thoughts.

Thanks,
Satya


cpu utilization high

2017-07-18 Thread Satya Marivada
Hi All,

We are using solr-6.3.0 with an external zookeeper. The setup is as below.
Poi is the big collection, about 20G, with each shard at 10G. Each jvm has
3G and the vms have 70G of RAM. Each vm has 6 processors.

The cpu utilization when running queries is reaching more than 100%. Any
suggestions? Should I increase the number of cpus on each vm? Is it true
that a shard is searched by a single cpu, and that multiple cpus cannot be
put to work on the same shard because a shard search is not multi-threaded?

Or should the shards be split further? Each poi shard now is at 10G with
about 8 million documents.


[image: image.png]
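If splitting further is the route taken, the Collections API SPLITSHARD action can do it online. The collection and shard names come from the email, the host is a placeholder, and the URL is only echoed here (a dry run).

```shell
# Build the SPLITSHARD request for one of the 10G poi shards.
SPLIT_URL="http://localhost:8983/solr/admin/collections?action=SPLITSHARD&collection=poi&shard=shard1"
echo "$SPLIT_URL"
# To apply: curl "$SPLIT_URL"
# Each resulting sub-shard holds roughly half of the ~8M docs and can be
# moved to its own node, so more CPUs work in parallel on one query.
```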


Re: node not joining the rest of nodes in cloud

2017-06-16 Thread Satya Marivada
Never mind. I had a different config for zookeeper on second vm which
brought a different cloud.

On Fri, Jun 16, 2017, 8:48 PM Satya Marivada <satya.chaita...@gmail.com>
wrote:

> Here is the image:
>
> https://www.dropbox.com/s/hd97j4d3h3q0oyh/solr%20nodes.png?dl=0
>
> There is a node on 002, port 15101, missing from the cloud.
>
> On Fri, Jun 16, 2017 at 7:46 PM Erick Erickson <erickerick...@gmail.com>
> wrote:
>
>> Images don't come through the mailer, they're stripped. You'll have to put
>> it somewhere else and provide a link.
>>
>> Best,
>> Erick
>>
>>
>> On Fri, Jun 16, 2017 at 3:29 PM, Satya Marivada <
>> satya.chaita...@gmail.com>
>> wrote:
>>
>> > Hi,
>> >
>> > I am running solr-6.3.0. There are 4 nodes; when I start solr, only 3
>> nodes
>> > are joining the cloud, the fourth one is coming up separately and not
>> > joining the other 3 nodes.
>> > Please see below in the picture on admin screen how fourth node is not
>> > joining. Any suggestions.
>> >
>> > Thanks,
>> > satya
>> >
>> >
>> > [image: image.png]
>> >
>>
>
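The root cause reported here (a different zookeeper config on the second vm) can be checked mechanically by comparing each node's ZK_HOST. The file paths and ensemble strings below are made up for the sketch.

```shell
# Simulate two nodes' solr.in.sh fragments; a differing chroot (/solr vs
# /solr2) is enough to put a node in its own separate "cloud".
printf 'ZK_HOST="zk1:2181,zk2:2181,zk3:2181/solr"\n'  > /tmp/node1.in.sh
printf 'ZK_HOST="zk1:2181,zk2:2181,zk3:2181/solr2"\n' > /tmp/node2.in.sh
grep ZK_HOST /tmp/node1.in.sh > /tmp/zk1
grep ZK_HOST /tmp/node2.in.sh > /tmp/zk2
if diff -q /tmp/zk1 /tmp/zk2 >/dev/null; then
  echo "ZK_HOST matches"
else
  echo "ZK_HOST differs -> separate clouds"
fi
```

On real nodes, run the same grep against each node's actual solr.in.sh and make sure the strings are byte-for-byte identical.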


Re: node not joining the rest of nodes in cloud

2017-06-16 Thread Satya Marivada
Here is the image:

https://www.dropbox.com/s/hd97j4d3h3q0oyh/solr%20nodes.png?dl=0

There is a node on 002, port 15101, missing from the cloud.

On Fri, Jun 16, 2017 at 7:46 PM Erick Erickson <erickerick...@gmail.com>
wrote:

> Images don't come through the mailer, they're stripped. You'll have to put
> it somewhere else and provide a link.
>
> Best,
> Erick
>
>
> On Fri, Jun 16, 2017 at 3:29 PM, Satya Marivada <satya.chaita...@gmail.com
> >
> wrote:
>
> > Hi,
> >
> > I am running solr-6.3.0. There are 4 nodes; when I start solr, only 3
> nodes
> > are joining the cloud, the fourth one is coming up separately and not
> > joining the other 3 nodes.
> > Please see below in the picture on admin screen how fourth node is not
> > joining. Any suggestions.
> >
> > Thanks,
> > satya
> >
> >
> > [image: image.png]
> >
>


node not joining the rest of nodes in cloud

2017-06-16 Thread Satya Marivada
Hi,

I am running solr-6.3.0. There are 4 nodes; when I start solr, only 3 nodes
are joining the cloud, the fourth one is coming up separately and not
joining the other 3 nodes.
Please see below in the picture on admin screen how fourth node is not
joining. Any suggestions.

Thanks,
satya


[image: image.png]


Re: Out of Memory Errors

2017-06-14 Thread Satya Marivada
Susheel, please see attached. The heap towards the end of the graph has
spiked.



On Wed, Jun 14, 2017 at 11:46 AM Susheel Kumar <susheel2...@gmail.com>
wrote:

> You may have gc logs saved when OOM happened. Can you draw it in GC Viewer
> or so and share.
>
> Thnx
>
> On Wed, Jun 14, 2017 at 11:26 AM, Satya Marivada <
> satya.chaita...@gmail.com>
> wrote:
>
> > Hi,
> >
> > I am getting Out of Memory Errors after a while on solr-6.3.0.
> > The
> -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/oom_solr.sh
> > just kills the jvm right after.
> > Using Jconsole, I see the nice triangle pattern, where it uses the heap
> > and being reclaimed back.
> >
> > The heap size is set at 3g. The index size hosted on that particular node
> > is 17G.
> >
> > java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
> > -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
> > -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
> > -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
> > -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly -XX:
> > CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
> >
> > Looking at the solr_gc.log.0, the eden space is being used 100% all the
> > while and is being successfully reclaimed, so I don't think that has
> > anything to do with it.
> >
> > Apart from that in the solr.log, I see exceptions that are aftermath of
> > killing the jvm
> >
> > org.eclipse.jetty.io.EofException: Closed
> > at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:383)
> > at org.apache.commons.io.output.ProxyOutputStream.write(
> > ProxyOutputStream.java:90)
> > at org.apache.solr.common.util.FastOutputStream.flush(
> > FastOutputStream.java:213)
> > at org.apache.solr.common.util.FastOutputStream.flushBuffer(
> > FastOutputStream.java:206)
> > at org.apache.solr.common.util.JavaBinCodec.marshal(
> > JavaBinCodec.java:136)
> >
> > Any suggestions on how to go about it.
> >
> > Thanks,
> > Satya
> >
>


Out of Memory Errors

2017-06-14 Thread Satya Marivada
Hi,

I am getting Out of Memory Errors after a while on solr-6.3.0.
The -XX:OnOutOfMemoryError=/sanfs/mnt/vol01/solr/solr-6.3.0/bin/oom_solr.sh
just kills the jvm right after.
Using Jconsole, I see the nice triangle pattern, where it uses the heap and
being reclaimed back.

The heap size is set at 3g. The index size hosted on that particular node
is 17G.

java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
-XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
-XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
-XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000

Looking at the solr_gc.log.0, the eden space is being used 100% all the
while and is being successfully reclaimed, so I don't think that has
anything to do with it.

Apart from that, in the solr.log I see exceptions that are the aftermath of
killing the jvm:

org.eclipse.jetty.io.EofException: Closed
at org.eclipse.jetty.server.HttpOutput.write(HttpOutput.java:383)
at
org.apache.commons.io.output.ProxyOutputStream.write(ProxyOutputStream.java:90)
at
org.apache.solr.common.util.FastOutputStream.flush(FastOutputStream.java:213)
at
org.apache.solr.common.util.FastOutputStream.flushBuffer(FastOutputStream.java:206)
at
org.apache.solr.common.util.JavaBinCodec.marshal(JavaBinCodec.java:136)

Any suggestions on how to go about it.

Thanks,
Satya
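Since -XX:+PrintGCApplicationStoppedTime is already enabled in the startup flags above, long stop-the-world pauses can be pulled straight out of the GC log. The line below is a canned example of that flag's output format; on a real node you would grep solr_gc.log* instead. The 1-second threshold is an arbitrary choice for the sketch.

```shell
# Flag any application-stopped time over 1 second in a GC log line.
LINE="Total time for which application threads were stopped: 2.1342160 seconds"
echo "$LINE" | awk -F': ' '{ n = $2 + 0; if (n > 1.0) print "long pause: " n "s" }'
```

Pauses well under a second make a GC explanation for the EofExceptions unlikely; multi-second ones point back at heap pressure.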


Re: file descriptors and threads differing

2017-05-12 Thread Satya Marivada
We have the same ulimits in both cases.

/proc/2/fd:
lr-x-- 1 Dgisse pg014921_gisse 64 May 12 09:52 124 ->
/sanfs/mnt/vol01/solr/solr-6.3.0/contrib/extraction/lib/apache-mime4j-core-0.7.2.jar
lr-x-- 1 Dgisse pg014921_gisse 64 May 12 09:52 125 ->
/sanfs/mnt/vol01/solr/solr-6.3.0/contrib/extraction/lib/apache-mime4j-core-0.7.2.jar
lr-x-- 1 Dgisse pg014921_gisse 64 May 12 09:52 126 ->
/sanfs/mnt/vol01/solr/solr-6.3.0/contrib/extraction/lib/apache-mime4j-core-0.7.2.jar
lr-x-- 1 Dgisse pg014921_gisse 64 May 12 09:52 127 ->
/sanfs/mnt/vol01/solr/solr-6.3.0/contrib/extraction/lib/apache-mime4j-core-0.7.2.jar
lr-x-- 1 Dgisse pg014921_gisse 64 May 12 09:52 128 ->
/sanfs/mnt/vol01/solr/solr-6.3.0/contrib/extraction/lib/apache-mime4j-dom-0.7.2.jar
lr-x-- 1 Dgisse pg014921_gisse 64 May 12 09:52 129 ->
/sanfs/mnt/vol01/solr/solr-6.3.0/contrib/extraction/lib/apache-mime4j-dom-0.7.2.jar


The same process is opening the same files many times. In Linux, shouldn't
only one fd be opened and referenced by all threads in the process?

In the other environment, where the jvm differs in minor version, fewer
duplicate files are opened. I am trying to set both environments to the
same jvm level and will see how it goes.


Thanks,
Satya


On Fri, May 12, 2017 at 3:41 PM Erick Erickson <erickerick...@gmail.com>
wrote:

> Check the system settings with ulimit. Differing numbers of user processes
> or open files can cause things like this to be different on different
> boxes.
>
> Can't speak to the Java version.
>
> Best,
> Erick
>
> On Fri, May 12, 2017 at 11:56 AM, Satya Marivada <
> satya.chaita...@gmail.com>
> wrote:
>
> > Hi All,
> >
> > We have a weird problem, with many threads being opened, crashing
> > the app in PP7 (one of our environments). It is the same index, same
> > version of solr (6.3.0) and zookeeper (3.4.9) in both environments.
> > The Java minor version is different (1.8.0_102 in PP8, another of our
> > environments, shown below, vs 1.8.0_121 in PP7).
> >
> > Any ideas around why files open and threads are more in one environment
> vs
> > other.
> >
> > Files open is found by looking for fd in /proc/ directory and threads
> > found by ps -elfT | wc -l.
> >
> > [image: pasted1]
> >
> > Thanks,
> > Satya
> >
>


file descriptors and threads differing

2017-05-12 Thread Satya Marivada
Hi All,

We have a weird problem, with many threads being opened, crashing
the app in PP7 (one of our environments). It is the same index, same version
of solr (6.3.0) and zookeeper (3.4.9) in both environments.
The Java minor version is different (1.8.0_102 in PP8, another of our
environments, shown below, vs 1.8.0_121 in PP7).

Any ideas why open files and thread counts are higher in one environment
than the other?

Open files are counted by looking at the fd entries under the /proc/<pid>/
directory, and threads by ps -elfT | wc -l.

[image: pasted1]

Thanks,
Satya


Re: SessionExpiredException

2017-05-11 Thread Satya Marivada
> For the sessionexpiredexception, the solr is throwing this exception and
> then the shard goes down.
>
> From the following discussion, it seems that solr is losing its
> connection to zookeeper and throws the exception. In the zookeeper
> configuration file, zoo.cfg, is it safe to increase the syncLimit shown in
> the snippet below?
>
>
> # The number of milliseconds of each tick
> tickTime=2000
> # The number of ticks that the initial
> # synchronization phase can take
> initLimit=10
> # The number of ticks that can pass between
> # sending a request and getting an acknowledgement
> syncLimit=5
> # the directory where the snapshot is stored.
> # do not use /tmp for storage, /tmp here is just
> # example sakes.
> dataDir=/sanfs/mnt/vol01/solr/zookeeperdata/2
> # the port at which the clients will connect
> clientPort=2181
> # the maximum number of client connections.
> # increase this if you need to handle more clients
> #maxClientCnxns=60
>
> Thanks,
> Satya
>
> On Mon, May 8, 2017 at 12:04 PM Satya Marivada <satya.chaita...@gmail.com>
> wrote:
>
>> The 3g memory is doing well, performing a gc at 600-700 MB.
>>
>> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
>>
>> Here are my jvm start up
>>
>> The start up parameters are:
>>
>> java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
>> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
>> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
>> -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
>> -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
>> -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
>> -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled
>> -XX:-OmitStackTraceInFastThrow -verbose:gc -XX:+PrintHeapAtGC
>> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
>> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime
>> -Xloggc:/sanfs/mnt/vol01/solr/solr-6.3.0/server/logs/solr_gc.log
>> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M
>> -DzkClientTimeout=15000 ...
>>
>> On Mon, May 8, 2017 at 11:50 AM Walter Underwood <wun...@wunderwood.org>
>> wrote:
>>
>>> Which garbage collector are you using? The default GC will probably give
>>> long pauses.
>>>
>>> You need to use CMS or G1.
>>>
>>> wunder
>>> Walter Underwood
>>> wun...@wunderwood.org
>>> http://observer.wunderwood.org/  (my blog)
>>>
>>>
>>> > On May 8, 2017, at 8:48 AM, Erick Erickson <erickerick...@gmail.com>
>>> wrote:
>>> >
>>> > 3G of memory should not lead to long GC pauses unless you're running
>>> > very close to the edge of available memory. Paradoxically, running
>>> > with 6G of memory may lead to _fewer_ noticeable pauses since the
>>> > background threads can do the work, well, in the background.
>>> >
>>> > Best,
>>> > Erick
>>> >
>>> > On Mon, May 8, 2017 at 7:29 AM, Satya Marivada
>>> > <satya.chaita...@gmail.com> wrote:
>>> >> Hi Piyush and Shawn,
>>> >>
>>> >> May I ask what is the solution for it, if it is the long gc pauses? I
>>> am
>>> >> skeptical about the same problem in our case too. We have started
>>> with 3G
>>> >> of memory for the heap.
>>> >> Did you have to adjust some of the memory allotted? Very much
>>> appreciated.
>>> >>
>>> >> Thanks,
>>> >> Satya
>>> >>
>>> >> On Sat, May 6, 2017 at 12:36 PM Piyush Kunal <piyush.ku...@myntra.com
>>> >
>>> >> wrote:
>>> >>
>>> >>> We already faced this issue and found out the issue to be long GC
>>> pauses
>>> >>> itself on either client side or server side.
>>> >>> Regards,
>>> >>> Piyush
>>> >>>
>>> >>> On Sat, May 6, 2017 at 6:10 PM, Shawn Heisey <apa...@elyograg.org>
>>> wrote:
>>> >>>
>>> >>>> On 5/3/2017 7:32 AM, Satya Marivada wrote:
>>> >>>>> I see below exceptions in my logs sometimes. What could be causing
>>> it?
>>> >>>>>
>>> >>>>> org.apache.zookeeper.KeeperException$SessionExpiredException:
>>> >>>>
>>> >>>> Based on my limited research, this would tend to indicate that the
>>> >>>> heartbeats ZK uses to detect when sessions have gone inactive are
>>> not
>>> >>>> occurring in a timely fashion.
>>> >>>>
>>> >>>> Common causes seem to be:
>>> >>>>
>>> >>>> JVM Garbage collections.  These can cause the entire JVM to pause
>>> for an
>>> >>>> extended period of time, and this time may exceed the configured
>>> >>> timeouts.
>>> >>>>
>>> >>>> Excess client connections to ZK.  ZK limits the number of
>>> connections
>>> >>>> from each client address, with the idea of preventing denial of
>>> service
>>> >>>> attacks.  If a client is misbehaving, it may make more connections
>>> than
>>> >>>> it should.  You can try increasing the limit in the ZK config, but
>>> if
>>> >>>> this is the reason for the exception, then something's probably
>>> wrong,
>>> >>>> and you may be just hiding the real problem.
>>> >>>>
>>> >>>> Although we might have bugs causing the second situation, the first
>>> >>>> situation seems more likely.
>>> >>>>
>>> >>>> Thanks,
>>> >>>> Shawn
>>> >>>>
>>> >>>>
>>> >>>
>>>
>>>


Re: SessionExpiredException

2017-05-11 Thread Satya Marivada
For the SessionExpiredException, solr throws this exception and then the
shard goes down.

From the following discussion, it seems that solr is losing its
connection to zookeeper and throws the exception. In the zookeeper
configuration file, zoo.cfg, is it safe to increase the syncLimit shown in
the snippet below?


# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/sanfs/mnt/vol01/solr/zookeeperdata/2
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60

Thanks,
Satya

On Mon, May 8, 2017 at 12:04 PM Satya Marivada <satya.chaita...@gmail.com>
wrote:

> The 3g memory is doing well, performing a gc at 600-700 MB.
>
> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC
>
> Here are my jvm start up
>
> The start up parameters are:
>
> java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
> -XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
> -XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
> -XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
> -XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
> -XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
> -XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled
> -XX:-OmitStackTraceInFastThrow -verbose:gc -XX:+PrintHeapAtGC
> -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
> -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime
> -Xloggc:/sanfs/mnt/vol01/solr/solr-6.3.0/server/logs/solr_gc.log
> -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M
> -DzkClientTimeout=15000 ...
>
> On Mon, May 8, 2017 at 11:50 AM Walter Underwood <wun...@wunderwood.org>
> wrote:
>
>> Which garbage collector are you using? The default GC will probably give
>> long pauses.
>>
>> You need to use CMS or G1.
>>
>> wunder
>> Walter Underwood
>> wun...@wunderwood.org
>> http://observer.wunderwood.org/  (my blog)
>>
>>
>> > On May 8, 2017, at 8:48 AM, Erick Erickson <erickerick...@gmail.com>
>> wrote:
>> >
>> > 3G of memory should not lead to long GC pauses unless you're running
>> > very close to the edge of available memory. Paradoxically, running
>> > with 6G of memory may lead to _fewer_ noticeable pauses since the
>> > background threads can do the work, well, in the background.
>> >
>> > Best,
>> > Erick
>> >
>> > On Mon, May 8, 2017 at 7:29 AM, Satya Marivada
>> > <satya.chaita...@gmail.com> wrote:
>> >> Hi Piyush and Shawn,
>> >>
>> >> May I ask what is the solution for it, if it is the long gc pauses? I
>> am
>> >> skeptical about the same problem in our case too. We have started with
>> 3G
>> >> of memory for the heap.
>> >> Did you have to adjust some of the memory allotted? Very much
>> appreciated.
>> >>
>> >> Thanks,
>> >> Satya
>> >>
>> >> On Sat, May 6, 2017 at 12:36 PM Piyush Kunal <piyush.ku...@myntra.com>
>> >> wrote:
>> >>
>> >>> We already faced this issue and found out the issue to be long GC
>> pauses
>> >>> itself on either client side or server side.
>> >>> Regards,
>> >>> Piyush
>> >>>
>> >>> On Sat, May 6, 2017 at 6:10 PM, Shawn Heisey <apa...@elyograg.org>
>> wrote:
>> >>>
>> >>>> On 5/3/2017 7:32 AM, Satya Marivada wrote:
>> >>>>> I see below exceptions in my logs sometimes. What could be causing
>> it?
>> >>>>>
>> >>>>> org.apache.zookeeper.KeeperException$SessionExpiredException:
>> >>>>
>> >>>> Based on my limited research, this would tend to indicate that the
>> >>>> heartbeats ZK uses to detect when sessions have gone inactive are not
>> >>>> occurring in a timely fashion.
>> >>>>
>> >>>> Common causes seem to be:
>> >>>>
>> >>>> JVM Garbage collections.  These can cause the entire JVM to pause
>> for an
>> >>>> extended period of time, and this time may exceed the configured
>> >>> timeouts.
>> >>>>
>> >>>> Excess client connections to ZK.  ZK limits the number of connections
>> >>>> from each client address, with the idea of preventing denial of
>> service
>> >>>> attacks.  If a client is misbehaving, it may make more connections
>> than
>> >>>> it should.  You can try increasing the limit in the ZK config, but if
>> >>>> this is the reason for the exception, then something's probably
>> wrong,
>> >>>> and you may be just hiding the real problem.
>> >>>>
>> >>>> Although we might have bugs causing the second situation, the first
>> >>>> situation seems more likely.
>> >>>>
>> >>>> Thanks,
>> >>>> Shawn
>> >>>>
>> >>>>
>> >>>
>>
>>
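One caveat on the syncLimit question above (my reading of the ZooKeeper documentation, not something stated in this thread): syncLimit bounds how far a *follower* may lag the leader; it does not lengthen client sessions. Client session expiry is negotiated between the server's min/max bounds, which default to multiples of tickTime, and the client's requested timeout (Solr's -DzkClientTimeout).

```
# zoo.cfg sketch -- tickTime is from this thread, the rest are defaults
tickTime=2000
# minSessionTimeout defaults to 2 * tickTime  = 4000 ms
# maxSessionTimeout defaults to 20 * tickTime = 40000 ms
# Solr's -DzkClientTimeout=15000 fits inside [4000, 40000]; raising
# tickTime (not syncLimit) is what raises the session-timeout ceiling.
```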


Re: SessionExpiredException

2017-05-08 Thread Satya Marivada
This is on solr-6.3.0 and external zookeeper 3.4.9

On Wed, May 3, 2017 at 11:39 PM Zheng Lin Edwin Yeo <edwinye...@gmail.com>
wrote:

> Are you using SolrCloud with external ZooKeeper, or Solr's internal
> ZooKeeper?
>
> Also, which version of Solr are you using?
>
> Regards,
> Edwin
>
> On 3 May 2017 at 21:32, Satya Marivada <satya.chaita...@gmail.com> wrote:
>
> > Hi,
> >
> > I see below exceptions in my logs sometimes. What could be causing it?
> >
> > org.apache.zookeeper.KeeperException$SessionExpiredException:
> > KeeperErrorCode = Session expired for /overseer
> > at
> > org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
> > at
> > org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> > at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
> >
> > Thanks,
> > Satya
> >
>


OutOfMemoryError and Too many open files

2017-05-08 Thread Satya Marivada
Hi,

Started getting below errors/exceptions. I have listed the resolution
inline. Could you please see if I am headed right?

The below error basically says that no more threads can be created because
the limit has been reached. We have a big index, and I assume the threads
are being created outside of the jvm and could not be created because of
the low ulimit setting of nproc (4096). It has been increased to 131072.
This number can be found with ulimit -u.

java.lang.OutOfMemoryError: unable to create new native thread
at java.lang.Thread.start0(Native Method)
at java.lang.Thread.start(Thread.java:714)
at
java.util.concurrent.ThreadPoolExecutor.addWorker(ThreadPoolExecutor.java:950)
at
java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:1368)
at
org.apache.solr.common.util.ExecutorUtil$MDCAwareThreadPoolExecutor.execute(ExecutorUtil.java:214)
at
java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:112)
at
org.apache.solr.common.cloud.SolrZkClient$3.process(SolrZkClient.java:268)
at
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
at
org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)

The below error basically says that no more files can be opened because the
limit has been reached. It has been increased to 65536 from 4096. This
number can be found with ulimit -Hn / ulimit -Sn.

java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422)
at
sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250)
at
org.eclipse.jetty.server.ServerConnector.accept(ServerConnector.java:382)
at
org.eclipse.jetty.server.AbstractConnector$Acceptor.run(AbstractConnector.java:593)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)

Thanks,
Satya
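A quick way to verify the raised limits took effect for the user running Solr. The target numbers come from this email; the limits.conf lines are the usual Linux mechanism and an assumption about this system, and "solr" is a placeholder user name.

```shell
# Print the limits in effect for the current shell/user.
ulimit -u    # max user processes; target 131072
ulimit -Hn   # hard open-files limit; target 65536
# To make them persistent, entries like these usually go in
# /etc/security/limits.conf:
#   solr  soft  nproc   131072
#   solr  hard  nofile  65536
```

Note the limits must be in effect in the session that *starts* Solr; a limits.conf change only applies to new logins.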


Re: SessionExpiredException

2017-05-08 Thread Satya Marivada
The 3g memory is doing well, performing a gc at 600-700 MB.

-XX:+UseConcMarkSweepGC -XX:+UseParNewGC

Here are my jvm start up

The start up parameters are:

java -server -Xms3g -Xmx3g -XX:NewRatio=3 -XX:SurvivorRatio=4
-XX:TargetSurvivorRatio=90 -XX:MaxTenuringThreshold=8
-XX:+UseConcMarkSweepGC -XX:+UseParNewGC -XX:ConcGCThreads=4
-XX:ParallelGCThreads=4 -XX:+CMSScavengeBeforeRemark
-XX:PretenureSizeThreshold=64m -XX:+UseCMSInitiatingOccupancyOnly
-XX:CMSInitiatingOccupancyFraction=50 -XX:CMSMaxAbortablePrecleanTime=6000
-XX:+CMSParallelRemarkEnabled -XX:+ParallelRefProcEnabled
-XX:-OmitStackTraceInFastThrow -verbose:gc -XX:+PrintHeapAtGC
-XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintGCTimeStamps
-XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime
-Xloggc:/sanfs/mnt/vol01/solr/solr-6.3.0/server/logs/solr_gc.log
-XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=9 -XX:GCLogFileSize=20M
-DzkClientTimeout=15000 ...

On Mon, May 8, 2017 at 11:50 AM Walter Underwood <wun...@wunderwood.org>
wrote:

> Which garbage collector are you using? The default GC will probably give
> long pauses.
>
> You need to use CMS or G1.
>
> wunder
> Walter Underwood
> wun...@wunderwood.org
> http://observer.wunderwood.org/  (my blog)
>
>
> > On May 8, 2017, at 8:48 AM, Erick Erickson <erickerick...@gmail.com>
> wrote:
> >
> > 3G of memory should not lead to long GC pauses unless you're running
> > very close to the edge of available memory. Paradoxically, running
> > with 6G of memory may lead to _fewer_ noticeable pauses since the
> > background threads can do the work, well, in the background.
> >
> > Best,
> > Erick
> >
> > On Mon, May 8, 2017 at 7:29 AM, Satya Marivada
> > <satya.chaita...@gmail.com> wrote:
> >> Hi Piyush and Shawn,
> >>
> >> May I ask what is the solution for it, if it is the long gc pauses? I am
> >> skeptical about the same problem in our case too. We have started with
> 3G
> >> of memory for the heap.
> >> Did you have to adjust some of the memory allotted? Very much
> appreciated.
> >>
> >> Thanks,
> >> Satya
> >>
> >> On Sat, May 6, 2017 at 12:36 PM Piyush Kunal <piyush.ku...@myntra.com>
> >> wrote:
> >>
> >>> We already faced this issue and found out the issue to be long GC
> pauses
> >>> itself on either client side or server side.
> >>> Regards,
> >>> Piyush
> >>>
> >>> On Sat, May 6, 2017 at 6:10 PM, Shawn Heisey <apa...@elyograg.org>
> wrote:
> >>>
> >>>> On 5/3/2017 7:32 AM, Satya Marivada wrote:
> >>>>> I see below exceptions in my logs sometimes. What could be causing
> it?
> >>>>>
> >>>>> org.apache.zookeeper.KeeperException$SessionExpiredException:
> >>>>
> >>>> Based on my limited research, this would tend to indicate that the
> >>>> heartbeats ZK uses to detect when sessions have gone inactive are not
> >>>> occurring in a timely fashion.
> >>>>
> >>>> Common causes seem to be:
> >>>>
> >>>> JVM Garbage collections.  These can cause the entire JVM to pause for
> an
> >>>> extended period of time, and this time may exceed the configured
> >>> timeouts.
> >>>>
> >>>> Excess client connections to ZK.  ZK limits the number of connections
> >>>> from each client address, with the idea of preventing denial of
> service
> >>>> attacks.  If a client is misbehaving, it may make more connections
> than
> >>>> it should.  You can try increasing the limit in the ZK config, but if
> >>>> this is the reason for the exception, then something's probably wrong,
> >>>> and you may be just hiding the real problem.
> >>>>
> >>>> Although we might have bugs causing the second situation, the first
> >>>> situation seems more likely.
> >>>>
> >>>> Thanks,
> >>>> Shawn
> >>>>
> >>>>
> >>>
>
>


Re: SessionExpiredException

2017-05-08 Thread Satya Marivada
Hi Piyush and Shawn,

May I ask what the solution for it is, if it is the long gc pauses? I
suspect the same problem in our case too. We have started with 3G
of memory for the heap.
Did you have to adjust some of the memory allotted? Very much appreciated.

Thanks,
Satya

On Sat, May 6, 2017 at 12:36 PM Piyush Kunal <piyush.ku...@myntra.com>
wrote:

> We already faced this issue and found out the issue to be long GC pauses
> itself on either client side or server side.
> Regards,
> Piyush
>
> On Sat, May 6, 2017 at 6:10 PM, Shawn Heisey <apa...@elyograg.org> wrote:
>
> > On 5/3/2017 7:32 AM, Satya Marivada wrote:
> > > I see below exceptions in my logs sometimes. What could be causing it?
> > >
> > > org.apache.zookeeper.KeeperException$SessionExpiredException:
> >
> > Based on my limited research, this would tend to indicate that the
> > heartbeats ZK uses to detect when sessions have gone inactive are not
> > occurring in a timely fashion.
> >
> > Common causes seem to be:
> >
> > JVM Garbage collections.  These can cause the entire JVM to pause for an
> > extended period of time, and this time may exceed the configured
> timeouts.
> >
> > Excess client connections to ZK.  ZK limits the number of connections
> > from each client address, with the idea of preventing denial of service
> > attacks.  If a client is misbehaving, it may make more connections than
> > it should.  You can try increasing the limit in the ZK config, but if
> > this is the reason for the exception, then something's probably wrong,
> > and you may be just hiding the real problem.
> >
> > Although we might have bugs causing the second situation, the first
> > situation seems more likely.
> >
> > Thanks,
> > Shawn
> >
> >
>
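Shawn's first cause above - stop-the-world GC pauses longer than the ZooKeeper session timeout - can be confirmed from the GC log. A rough sketch, assuming HotSpot's `-XX:+PrintGCApplicationStoppedTime` line format; the 15-second timeout and the sample lines are illustrative, not taken from this thread:

```python
import re

# Scan HotSpot GC log lines for stop-the-world pauses long enough to
# expire a ZooKeeper session (pause > zkClientTimeout).
PAUSE_RE = re.compile(
    r"Total time for which application threads were stopped: ([0-9.]+) seconds")

def pauses_over_timeout(gc_log_lines, zk_client_timeout_secs):
    """Return the pause lengths (in seconds) that could expire a ZK session."""
    pauses = [float(m.group(1))
              for line in gc_log_lines
              for m in PAUSE_RE.finditer(line)]
    return [p for p in pauses if p > zk_client_timeout_secs]

log = [
    "2017-05-06T12:00:01: Total time for which application threads were stopped: 0.0421 seconds",
    "2017-05-06T12:03:17: Total time for which application threads were stopped: 21.7300 seconds",
]
print(pauses_over_timeout(log, 15.0))  # -> [21.73]
```

If this turns up pauses longer than the session timeout, the usual knobs are tuning the heap/collector or raising zkClientTimeout, per the advice in the thread.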


solr authentication error

2017-05-04 Thread Satya Marivada
Hi,


Can someone please say what I am missing in this case? I have Solr
6.3.0 with HTTP authentication enabled, and the configuration has been
uploaded to ZooKeeper. But I do see the below error in the logs sometimes. Are
the nodes unable to communicate because of this error? I am not
seeing any functionality loss.

Authentication for the admin screen works great.


In solr.in.sh, should I set SOLR_AUTHENTICATION_OPTS=""?



There was a problem making a request to the
leader:org.apache.solr.client.solrj.impl.HttpSolrClient$RemoteSolrException:
Error from server at https://:15111/solr: Expected mime type
application/octet-stream but got text/html. 


Error 401 require authentication

HTTP ERROR 401
Problem accessing /solr/admin/cores. Reason:
require authentication



at 
org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:561)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:262)
at 
org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:251)
at org.apache.solr.client.solrj.SolrClient.request(SolrClient.java:1219)
at 
org.apache.solr.cloud.ZkController.waitForLeaderToSeeDownState(ZkController.java:1647)
at 
org.apache.solr.cloud.ZkController.registerAllCoresAsDown(ZkController.java:471)
at org.apache.solr.cloud.ZkController.access$500(ZkController.java:119)
at org.apache.solr.cloud.ZkController$1.command(ZkController.java:335)
at 
org.apache.solr.common.cloud.ConnectionManager$1.update(ConnectionManager.java:168)
at 
org.apache.solr.common.cloud.DefaultConnectionStrategy.reconnect(DefaultConnectionStrategy.java:57)
at 
org.apache.solr.common.cloud.ConnectionManager.process(ConnectionManager.java:142)
at 
org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
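The 401 above means the request reached a BasicAuth-protected endpoint without credentials. Any client making its own HTTP requests to such a Solr must send an Authorization header; a minimal sketch of building it (the credentials are placeholders, not from the original setup):

```python
import base64

def basic_auth_header(user, password):
    """Build the HTTP Basic Authorization header Solr's BasicAuthPlugin expects."""
    token = base64.b64encode(f"{user}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

# 'solradmin'/'secret' are placeholder credentials.
print(basic_auth_header("solradmin", "secret"))
# -> {'Authorization': 'Basic c29scmFkbWluOnNlY3JldA=='}
```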


solr 6.3.0 monitoring

2017-05-03 Thread Satya Marivada
Hi,

We stood up solr 6.3.0 with external zookeeper 3.4.9. We are moving to
production and setting up monitoring for Solr, to check that all cores of a
collection are up. Similarly, any pointers on monitoring the entire
collection, or any other suggestions, would be useful.

For zookeeper, planning to use MNTR command to check on its status.

Thanks,
Satya
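For the MNTR-based ZooKeeper check mentioned above, the command returns tab-separated key/value pairs. A rough sketch of consuming the reply (the socket call to the ZK client port is omitted; the sample values are illustrative):

```python
def parse_mntr(raw):
    """Parse the tab-separated key/value output of ZooKeeper's 'mntr' command."""
    stats = {}
    for line in raw.strip().splitlines():
        key, _, value = line.partition("\t")
        stats[key] = value
    return stats

def zk_healthy(stats):
    # A node that answers mntr is up; leader, follower and standalone
    # are all healthy server states.
    return stats.get("zk_server_state") in ("leader", "follower", "standalone")

sample = "zk_version\t3.4.9\nzk_server_state\tfollower\nzk_znode_count\t612"
print(zk_healthy(parse_mntr(sample)))  # -> True
```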


Re: solr-6.3.0 error port is running already

2017-05-03 Thread Satya Marivada
Hi Rick and Erick,

Thanks for responding. I made sure that there is no other process running
on that port.

Also, when this happened, the admin page listed all the nodes as live nodes
even though some of them were down. So I went ahead and emptied the zookeeper
data directory (the version-2 folder where all the configuration is stored). I
then had to upload the configuration again and place my index fresh into
Solr.

It then came up fine. I had been playing with JMX parameters to pass to the
JVM before this started to happen. Not sure if that has something to do with it.

Thanks,
Satya

On Wed, May 3, 2017 at 7:01 AM Rick Leir <rl...@leirtech.com> wrote:

> Here it is on Fedora/Redhat/Centos (similar on Unix like systems). Look
> for other processes which might already be listening on the port you
> want to listen on:
>
> $ sudo netstat --inet -lp -4
> Active Internet connections (only servers)
> Proto Recv-Q Send-Q Local Address   Foreign Address
> State   PID/Program name
> tcp0  0 0.0.0.0:imaps 0.0.0.0:*   LISTEN
> 2577/dovecot
> tcp0  0 0.0.0.0:pop3s 0.0.0.0:*   LISTEN
> 2577/dovecot
>
> ...
>
> $ grep pop3s /etc/services
> pop3s   995/tcp # POP-3 over SSL
> pop3s   995/udp # POP-3 over SSL
>
> I mention pop3s just because you will run into misleading port numbers
> which alias some well known services which are listed in the services file.
>
> cheers -- Rick
>
> On 2017-05-02 05:09 PM, Rick Leir wrote:
> > Satya
> > Say netstat --inet -lP
> > You might need to add -ipv4 to that command. The P might be lower case
> (I am on the bus!). And the output might show misleading service names, see
> /etc/services.
> > Cheers-- Rick
> >
> > On May 2, 2017 3:10:30 PM EDT, Satya Marivada <satya.chaita...@gmail.com>
> wrote:
> >> Hi,
> >>
> >> I am getting the below exception all of a sudden with solr-6.3.0.
> >> "null:org.apache.solr.common.SolrException: A previous ephemeral live
> >> node
> >> still exists. Solr cannot continue. Please ensure that no other Solr
> >> process using the same port is running already."
> >>
> >> We are using external zookeeper and have restarted solr many times.
> >> There
> >> is no solr running on those ports already. Any suggestions. Looks like
> >> a
> >> bug. Had started using jmx option and then started getting it. Turned
> >> jmx
> >> off, still getting the same issue.
> >>
> >> We are in crunch of time, any workaround to get it started would be
> >> helpful. Not sure where solr is seeing that port, when everything is
> >> started clean.
> >>
> >> Thanks,
> >> Satya
>
>
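The advice above - make sure nothing else is listening on Solr's port before starting it - can also be checked programmatically. A minimal sketch (binding to the port fails if something already listens; the port to pass in, e.g. Solr's startup port, is up to you). Note that, as the thread shows, the same error message can also come from a stale ephemeral node in ZooKeeper rather than an actual port clash:

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Try to bind the port; if bind fails, some process already listens there."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False
```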


SessionExpiredException

2017-05-03 Thread Satya Marivada
Hi,

I see below exceptions in my logs sometimes. What could be causing it?

org.apache.zookeeper.KeeperException$SessionExpiredException:
KeeperErrorCode = Session expired for /overseer
at
org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at
org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)

Thanks,
Satya


Re: solr-6.3.0 error port is running already

2017-05-02 Thread Satya Marivada
Any ideas?  "null:org.apache.solr.common.SolrException: A previous
ephemeral live node still exists. Solr cannot continue. Please ensure that
no other Solr process using the same port is running already."

Not sure if enabling JMX has caused this.

Thanks,
Satya

On Tue, May 2, 2017 at 3:10 PM Satya Marivada <satya.chaita...@gmail.com>
wrote:

> Hi,
>
> I am getting the below exception all of a sudden with solr-6.3.0.
> "null:org.apache.solr.common.SolrException: A previous ephemeral live node
> still exists. Solr cannot continue. Please ensure that no other Solr
> process using the same port is running already."
>
> We are using external zookeeper and have restarted solr many times. There
> is no solr running on those ports already. Any suggestions. Looks like a
> bug. Had started using jmx option and then started getting it. Turned jmx
> off, still getting the same issue.
>
> We are in crunch of time, any workaround to get it started would be
> helpful. Not sure where solr is seeing that port, when everything is
> started clean.
>
> Thanks,
> Satya
>


solr-6.3.0 error port is running already

2017-05-02 Thread Satya Marivada
Hi,

I am getting the below exception all of a sudden with solr-6.3.0.
"null:org.apache.solr.common.SolrException: A previous ephemeral live node
still exists. Solr cannot continue. Please ensure that no other Solr
process using the same port is running already."

We are using external zookeeper and have restarted solr many times. There
is no solr running on those ports already. Any suggestions. Looks like a
bug. Had started using jmx option and then started getting it. Turned jmx
off, still getting the same issue.

We are in crunch of time, any workaround to get it started would be
helpful. Not sure where solr is seeing that port, when everything is
started clean.

Thanks,
Satya


solr directories

2017-03-09 Thread Satya Marivada
Hi,

We had Solr running on embedded ZooKeeper and moved to external ZooKeeper. As
part of this setup, on the same VM, I had done a fresh Solr distribution
setup and ZooKeeper distribution setup, and created a new solrdata folder to
hold the nodes. All the old folders are archived (zipped and backed up). What
puzzles me is that the new deployment, pointing to external ZooKeeper, still
shows on the Solr admin console the old collections that were created on
embedded ZooKeeper. How is that possible, when I did a clean fresh install of
the Solr and ZooKeeper distributions? Is the old collection information
stored in another location as well, apart from the solr-6.3.0 distribution
package, the zookeeper-3.4.9 package, and the solrdata folder?
Would Solr write into any other directories?

Thanks,
Satya


Re: solr warning - filling logs

2017-03-03 Thread Satya Marivada
There is nothing else running on the port that I am trying to use: 15101.
15102 works fine.

On Fri, Mar 3, 2017 at 2:25 PM Satya Marivada <satya.chaita...@gmail.com>
wrote:

> Dave and All,
>
> The below exception is not happening anymore when I change the startup
> port to something else apart from that I had in original startup. The
> original starup, if I have started without ssl enabled and then startup on
> the same port with ssl enabled, it is when this warning is happening. But I
> really need to use the original port that I had. Any suggestion for getting
> around.
>
> Thanks,
> Satya
>
> java.lang.IllegalArgumentException: No Authority for
> HttpChannelOverHttp@a01eef8{r=0,c=false,a=IDLE,uri=null}
> java.lang.IllegalArgumentException: No Authority
> at
> org.eclipse.jetty.http.HostPortHttpField.(HostPortHttpField.java:43)
>
>
> On Sun, Feb 26, 2017 at 8:00 PM Dave <hastings.recurs...@gmail.com> wrote:
>
> I don't know about your network setup but a port scanner sometimes can be
> an it security device that, well, scans ports looking to see if they're
> open.
>
> > On Feb 26, 2017, at 7:14 PM, Satya Marivada <satya.chaita...@gmail.com>
> wrote:
> >
> > May I ask about the port scanner running? Can you please elaborate?
> > Sure, will try to move out to external zookeeper
> >
> >> On Sun, Feb 26, 2017 at 7:07 PM Dave <hastings.recurs...@gmail.com>
> wrote:
> >>
> >> You shouldn't use the embedded zookeeper with solr, it's just for
> >> development not anywhere near worthy of being out in production.
> Otherwise
> >> it looks like you may have a port scanner running. In any case don't use
> >> the zk that comes with solr
> >>
> >>> On Feb 26, 2017, at 6:52 PM, Satya Marivada <satya.chaita...@gmail.com
> >
> >> wrote:
> >>>
> >>> Hi All,
> >>>
> >>> I have configured solr with SSL and enabled http authentication. It is
> >> all
> >>> working fine on the solr admin page, indexing and querying process. One
> >>> bothering thing is that it is filling up logs every second saying no
> >>> authority, I have configured host name, port and authentication
> >> parameters
> >>> right in all config files. Not sure, where is it coming from. Any
> >>> suggestions, please. Really appreciate it. It is with sol-6.3.0 cloud
> >> with
> >>> embedded zookeeper. Could it be some bug with solr-6.3.0 or am I
> missing
> >>> some configuration?
> >>>
> >>> 2017-02-26 23:32:43.660 WARN (qtp606548741-18) [c:plog s:shard1
> >>> r:core_node2 x:plog_shard1_replica1] o.e.j.h.HttpParser parse
> exception:
> >>> java.lang.IllegalArgumentException: No Authority for
> >>> HttpChannelOverHttp@6dac689d{r=0,c=false,a=IDLE,uri=null}
> >>> java.lang.IllegalArgumentException: No Authority
> >>> at
> >>>
> >>
> org.eclipse.jetty.http.HostPortHttpField.(HostPortHttpField.java:43)
> >>> at org.eclipse.jetty.http.HttpParser.parsedHeader(HttpParser.java:877)
> >>> at org.eclipse.jetty.http.HttpParser.parseHeaders(HttpParser.java:1050)
> >>> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1266)
> >>> at
> >>>
> >>
> org.eclipse.jetty.server.HttpConnection.parseRequestBuffer(HttpConnection.java:344)
> >>> at
> >>>
> >>
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:227)
> >>> at org.eclipse.jetty.io
> >>> .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> >>> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> >>> at
> >>
> org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:186)
> >>> at org.eclipse.jetty.io
> >>> .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> >>> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> >>> at org.eclipse.jetty.io
> >>> .SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> >>> at java.lang.Thread.run(Thread.java:745)
> >>
>
>


Re: solr warning - filling logs

2017-03-03 Thread Satya Marivada
Dave and All,

The below exception no longer happens when I change the startup port to
something other than the one I used originally. With the original port, if I
first started without SSL enabled and then started again on the same port
with SSL enabled, that is when this warning happens. But I really need to use
the original port that I had. Any suggestions for getting around this?

Thanks,
Satya

java.lang.IllegalArgumentException: No Authority for
HttpChannelOverHttp@a01eef8{r=0,c=false,a=IDLE,uri=null}
java.lang.IllegalArgumentException: No Authority
at
org.eclipse.jetty.http.HostPortHttpField.<init>(HostPortHttpField.java:43)


On Sun, Feb 26, 2017 at 8:00 PM Dave <hastings.recurs...@gmail.com> wrote:

> I don't know about your network setup but a port scanner sometimes can be
> an it security device that, well, scans ports looking to see if they're
> open.
>
> > On Feb 26, 2017, at 7:14 PM, Satya Marivada <satya.chaita...@gmail.com>
> wrote:
> >
> > May I ask about the port scanner running? Can you please elaborate?
> > Sure, will try to move out to external zookeeper
> >
> >> On Sun, Feb 26, 2017 at 7:07 PM Dave <hastings.recurs...@gmail.com>
> wrote:
> >>
> >> You shouldn't use the embedded zookeeper with solr, it's just for
> >> development not anywhere near worthy of being out in production.
> Otherwise
> >> it looks like you may have a port scanner running. In any case don't use
> >> the zk that comes with solr
> >>
> >>> On Feb 26, 2017, at 6:52 PM, Satya Marivada <satya.chaita...@gmail.com
> >
> >> wrote:
> >>>
> >>> Hi All,
> >>>
> >>> I have configured solr with SSL and enabled http authentication. It is
> >> all
> >>> working fine on the solr admin page, indexing and querying process. One
> >>> bothering thing is that it is filling up logs every second saying no
> >>> authority, I have configured host name, port and authentication
> >> parameters
> >>> right in all config files. Not sure, where is it coming from. Any
> >>> suggestions, please. Really appreciate it. It is with sol-6.3.0 cloud
> >> with
> >>> embedded zookeeper. Could it be some bug with solr-6.3.0 or am I
> missing
> >>> some configuration?
> >>>
> >>> 2017-02-26 23:32:43.660 WARN (qtp606548741-18) [c:plog s:shard1
> >>> r:core_node2 x:plog_shard1_replica1] o.e.j.h.HttpParser parse
> exception:
> >>> java.lang.IllegalArgumentException: No Authority for
> >>> HttpChannelOverHttp@6dac689d{r=0,c=false,a=IDLE,uri=null}
> >>> java.lang.IllegalArgumentException: No Authority
> >>> at
> >>>
> >>
> org.eclipse.jetty.http.HostPortHttpField.(HostPortHttpField.java:43)
> >>> at org.eclipse.jetty.http.HttpParser.parsedHeader(HttpParser.java:877)
> >>> at org.eclipse.jetty.http.HttpParser.parseHeaders(HttpParser.java:1050)
> >>> at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1266)
> >>> at
> >>>
> >>
> org.eclipse.jetty.server.HttpConnection.parseRequestBuffer(HttpConnection.java:344)
> >>> at
> >>>
> >>
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:227)
> >>> at org.eclipse.jetty.io
> >>> .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> >>> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> >>> at
> >>
> org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:186)
> >>> at org.eclipse.jetty.io
> >>> .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> >>> at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> >>> at org.eclipse.jetty.io
> >>> .SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> >>> at
> >>>
> >>
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> >>> at java.lang.Thread.run(Thread.java:745)
> >>
>


solr warning - filling logs

2017-02-27 Thread Satya Marivada
Hi All,

I have configured solr with SSL and enabled http authentication. It is all
working fine on the solr admin page, indexing and querying process. One
bothersome thing is that it is filling up the logs every second saying no
authority, I have configured host name, port and authentication parameters
right in all config files. Not sure, where is it coming from. Any
suggestions, please. Really appreciate it. It is with solr-6.3.0 cloud with
embedded zookeeper. Could it be some bug with solr-6.3.0 or am I missing
some configuration?

2017-02-26 23:32:43.660 WARN (qtp606548741-18) [c:plog s:shard1
r:core_node2 x:plog_shard1_replica1] o.e.j.h.HttpParser parse exception:
java.lang.IllegalArgumentException: No Authority for
HttpChannelOverHttp@6dac689d{r=0,c=false,a=IDLE,uri=null}
java.lang.IllegalArgumentException: No Authority
at
org.eclipse.jetty.http.HostPortHttpField.<init>(HostPortHttpField.java:43)
at org.eclipse.jetty.http.HttpParser.parsedHeader(HttpParser.java:877)
at org.eclipse.jetty.http.HttpParser.parseHeaders(HttpParser.java:1050)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1266)
at
org.eclipse.jetty.server.HttpConnection.parseRequestBuffer(HttpConnection.java:344)
at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:227)
at org.eclipse.jetty.io
.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:186)
at org.eclipse.jetty.io
.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io
.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)


Re: solr warning - filling logs

2017-02-26 Thread Satya Marivada
May I ask what you mean about a port scanner running? Can you please elaborate?
Sure, we will try to move to an external zookeeper.

On Sun, Feb 26, 2017 at 7:07 PM Dave <hastings.recurs...@gmail.com> wrote:

> You shouldn't use the embedded zookeeper with solr, it's just for
> development not anywhere near worthy of being out in production. Otherwise
> it looks like you may have a port scanner running. In any case don't use
> the zk that comes with solr
>
> > On Feb 26, 2017, at 6:52 PM, Satya Marivada <satya.chaita...@gmail.com>
> wrote:
> >
> > Hi All,
> >
> > I have configured solr with SSL and enabled http authentication. It is
> all
> > working fine on the solr admin page, indexing and querying process. One
> > bothering thing is that it is filling up logs every second saying no
> > authority, I have configured host name, port and authentication
> parameters
> > right in all config files. Not sure, where is it coming from. Any
> > suggestions, please. Really appreciate it. It is with sol-6.3.0 cloud
> with
> > embedded zookeeper. Could it be some bug with solr-6.3.0 or am I missing
> > some configuration?
> >
> > 2017-02-26 23:32:43.660 WARN (qtp606548741-18) [c:plog s:shard1
> > r:core_node2 x:plog_shard1_replica1] o.e.j.h.HttpParser parse exception:
> > java.lang.IllegalArgumentException: No Authority for
> > HttpChannelOverHttp@6dac689d{r=0,c=false,a=IDLE,uri=null}
> > java.lang.IllegalArgumentException: No Authority
> > at
> >
> org.eclipse.jetty.http.HostPortHttpField.(HostPortHttpField.java:43)
> > at org.eclipse.jetty.http.HttpParser.parsedHeader(HttpParser.java:877)
> > at org.eclipse.jetty.http.HttpParser.parseHeaders(HttpParser.java:1050)
> > at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1266)
> > at
> >
> org.eclipse.jetty.server.HttpConnection.parseRequestBuffer(HttpConnection.java:344)
> > at
> >
> org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:227)
> > at org.eclipse.jetty.io
> > .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> > at
> org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:186)
> > at org.eclipse.jetty.io
> > .AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
> > at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
> > at org.eclipse.jetty.io
> > .SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
> > at
> >
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
> > at
> >
> org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
> > at
> >
> org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
> > at
> >
> org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
> > at java.lang.Thread.run(Thread.java:745)
>


solr warning - filling logs

2017-02-26 Thread Satya Marivada
Hi All,

I have configured solr with SSL and enabled http authentication. It is all
working fine on the solr admin page, indexing and querying process. One
bothersome thing is that it is filling up the logs every second saying no
authority, I have configured host name, port and authentication parameters
right in all config files. Not sure, where is it coming from. Any
suggestions, please. Really appreciate it. It is with solr-6.3.0 cloud with
embedded zookeeper. Could it be some bug with solr-6.3.0 or am I missing
some configuration?

2017-02-26 23:32:43.660 WARN (qtp606548741-18) [c:plog s:shard1
r:core_node2 x:plog_shard1_replica1] o.e.j.h.HttpParser parse exception:
java.lang.IllegalArgumentException: No Authority for
HttpChannelOverHttp@6dac689d{r=0,c=false,a=IDLE,uri=null}
java.lang.IllegalArgumentException: No Authority
at
org.eclipse.jetty.http.HostPortHttpField.<init>(HostPortHttpField.java:43)
at org.eclipse.jetty.http.HttpParser.parsedHeader(HttpParser.java:877)
at org.eclipse.jetty.http.HttpParser.parseHeaders(HttpParser.java:1050)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:1266)
at
org.eclipse.jetty.server.HttpConnection.parseRequestBuffer(HttpConnection.java:344)
at
org.eclipse.jetty.server.HttpConnection.onFillable(HttpConnection.java:227)
at org.eclipse.jetty.io
.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io.ssl.SslConnection.onFillable(SslConnection.java:186)
at org.eclipse.jetty.io
.AbstractConnection$ReadCallback.succeeded(AbstractConnection.java:273)
at org.eclipse.jetty.io.FillInterest.fillable(FillInterest.java:95)
at org.eclipse.jetty.io
.SelectChannelEndPoint$2.run(SelectChannelEndPoint.java:93)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.produceAndRun(ExecuteProduceConsume.java:246)
at
org.eclipse.jetty.util.thread.strategy.ExecuteProduceConsume.run(ExecuteProduceConsume.java:156)
at
org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:654)
at
org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:572)
at java.lang.Thread.run(Thread.java:745)
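One way to picture why such requests upset Jetty's parser, in line with the port-scanner suggestion later in this thread: a probe that speaks plain HTTP to an SSL port, or sends a request with no Host header, gives the server no authority to parse. An illustrative sketch of the two mismatches, not Jetty's actual logic:

```python
def looks_like_tls(first_bytes):
    """A TLS connection opens with a handshake record: content type 0x16."""
    return len(first_bytes) > 0 and first_bytes[0] == 0x16

def has_host_header(raw_request):
    """An HTTP/1.1 request without a Host header gives the parser no authority."""
    head = raw_request.split(b"\r\n\r\n", 1)[0]
    headers = head.split(b"\r\n")[1:]  # skip the request line
    return any(h.lower().startswith(b"host:") for h in headers)

plain = b"GET /solr/admin/cores HTTP/1.1\r\n\r\n"
print(looks_like_tls(plain), has_host_header(plain))  # -> False False
```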


Re: help with field definition

2016-09-16 Thread Gandham, Satya
Great, that worked. Thanks Ray and Emir for the solutions.



On 9/16/16, 3:49 PM, "Ray Niu" <newry1...@gmail.com> wrote:

Just add q.op=OR to change default operator to OR and it should work

2016-09-16 12:44 GMT-07:00 Gandham, Satya <sgand...@stubhub.com>:

> Hi Emir,
>
>Thanks for your reply. But I’m afraid I’m not seeing the
> expected response. I’ve included the query and the corresponding debug
> portion of the response:
>
> select?q=Justin\ Beiber=exactName_noAlias_en_US
>
>  Debug:
>
> "rawquerystring":"Justin\\ Beiber",
> "querystring":"Justin\\ Beiber",
> "parsedquery":"+((exactName_noAlias_en_US:justin
> exactName_noAlias_en_US:justin beiber)/no_coord) +exactName_noAlias_en_US:
> beiber",
> "parsedquery_toString":"+(exactName_noAlias_en_US:justin
> exactName_noAlias_en_US:justin beiber) +exactName_noAlias_en_US:beiber",
> "explain":{},
>
>
> Satya.
>
> On 9/16/16, 2:46 AM, "Emir Arnautovic" <emir.arnauto...@sematext.com>
> wrote:
>
> Hi,
>
> I missed that you already did define field and you are having troubles
> with query (did not read stackoverflow). Added answer there, but just
> in
> case somebody else is having similar troubles, issue is how query is
> written - space has to be escaped:
>
>q=Justin\ Bieber
>
> Regards,
> Emir
>
> On 13.09.2016 23:27, Gandham, Satya wrote:
> > HI,
> >
> >I need help with defining a field ‘singerName’ with the
> right tokenizers and filters such that it gives me the below described
> behavior:
> >
> > I have a few documents as given below:
> >
> > Doc 1
> >singerName: Justin Beiber
> > Doc 2:
> >singerName: Justin Timberlake
> > …
> >
> >
> > Below is the list of quries and the corresponding matches:
> >
> > Query 1: “My fav artist Justin Beiber is very impressive”
> > Docs Matched : Doc1
> >
> > Query 2: “I have a Justin Timberlake poster on my wall”
> > Docs Matched: Doc2
> >
> > Query 3: “The name Bieber Justin is unique”
> > Docs Matched: None
> >
> > Query 4: “Timberlake is a lake of timber..?”
> > Docs Matched: None.
> >
> > I have this described a bit more detailed here:
> http://stackoverflow.com/questions/39399321/solr-shingle-query-matching-
> keyword-tokenized-field
> >
> > I’d appreciate any help in addressing this problem.
> >
> > Thanks !!
> >
>
> --
> Monitoring * Alerting * Anomaly Detection * Centralized Log Management
> Solr & Elasticsearch Support * http://sematext.com/
>
>
>
>




Re: help with field definition

2016-09-16 Thread Gandham, Satya
Hi Emir,

   Thanks for your reply. But I’m afraid I’m not seeing the expected 
response. I’ve included the query and the corresponding debug portion of the 
response:

select?q=Justin\ Beiber=exactName_noAlias_en_US
 
 Debug:
 
"rawquerystring":"Justin\\ Beiber",
"querystring":"Justin\\ Beiber",
"parsedquery":"+((exactName_noAlias_en_US:justin 
exactName_noAlias_en_US:justin beiber)/no_coord) 
+exactName_noAlias_en_US:beiber",
"parsedquery_toString":"+(exactName_noAlias_en_US:justin 
exactName_noAlias_en_US:justin beiber) +exactName_noAlias_en_US:beiber",
"explain":{},


Satya.

On 9/16/16, 2:46 AM, "Emir Arnautovic" <emir.arnauto...@sematext.com> wrote:

Hi,

I missed that you already did define field and you are having troubles 
with query (did not read stackoverflow). Added answer there, but just in 
case somebody else is having similar troubles, issue is how query is 
written - space has to be escaped:

   q=Justin\ Bieber

Regards,
Emir

On 13.09.2016 23:27, Gandham, Satya wrote:
> HI,
>
>I need help with defining a field ‘singerName’ with the right 
tokenizers and filters such that it gives me the below described behavior:
>
> I have a few documents as given below:
>
> Doc 1
>singerName: Justin Beiber
> Doc 2:
>singerName: Justin Timberlake
> …
>
>
> Below is the list of quries and the corresponding matches:
>
> Query 1: “My fav artist Justin Beiber is very impressive”
> Docs Matched : Doc1
>
> Query 2: “I have a Justin Timberlake poster on my wall”
> Docs Matched: Doc2
>
> Query 3: “The name Bieber Justin is unique”
> Docs Matched: None
>
> Query 4: “Timberlake is a lake of timber..?”
> Docs Matched: None.
>
> I have this described a bit more detailed here: 
http://stackoverflow.com/questions/39399321/solr-shingle-query-matching-keyword-tokenized-field
>
> I’d appreciate any help in addressing this problem.
>
> Thanks !!
>

-- 
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/
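The matching behavior discussed in this thread can be sketched outside Solr: a keyword-tokenized field keeps the indexed name as a single token, and a query-side shingle filter emits word pairs, so only queries containing the adjacent pair match. A simplified illustration (real Solr analysis also handles punctuation, and this ignores the escaping/q.op details above):

```python
def shingles(tokens, max_size=2):
    """Emit unigrams plus adjacent word pairs, like Solr's ShingleFilterFactory."""
    out = list(tokens)
    for n in range(2, max_size + 1):
        out += [" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]
    return out

def matches(query, keyword_field_value):
    """A keyword-tokenized field matches only if some query shingle equals the
    whole indexed value (both sides simply lowercased here)."""
    return keyword_field_value.lower() in shingles(query.lower().split())

print(matches("My fav artist Justin Beiber is very impressive", "Justin Beiber"))  # -> True
print(matches("The name Beiber Justin is unique", "Justin Beiber"))  # -> False
```

"Beiber Justin" produces no "justin beiber" shingle, so it does not match, which is the behavior asked for in the original question.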





help with field definition

2016-09-13 Thread Gandham, Satya
HI,

  I need help with defining a field ‘singerName’ with the right 
tokenizers and filters such that it gives me the below described behavior:

I have a few documents as given below:

Doc 1
  singerName: Justin Beiber
Doc 2:
  singerName: Justin Timberlake
…


Below is the list of queries and the corresponding matches:

Query 1: “My fav artist Justin Beiber is very impressive”
Docs Matched : Doc1

Query 2: “I have a Justin Timberlake poster on my wall”
Docs Matched: Doc2

Query 3: “The name Bieber Justin is unique”
Docs Matched: None

Query 4: “Timberlake is a lake of timber..?”
Docs Matched: None.

I have this described a bit more detailed here: 
http://stackoverflow.com/questions/39399321/solr-shingle-query-matching-keyword-tokenized-field

I’d appreciate any help in addressing this problem.

Thanks !!



shingle query matching keyword tokenized field

2016-09-08 Thread Gandham, Satya
Can anyone help with this question that I posted on StackOverflow?

http://stackoverflow.com/questions/39399321/solr-shingle-query-matching-keyword-tokenized-field

Thanks in advance.


Search and index Result

2011-04-14 Thread satya swaroop
Hi all,
   I just made two copies of SolrDispatchFilter, as
solrdispatchfilter1 and solrdispatchfilter2, such that all /update or
/update/extract requests pass through solrdispatchfilter1
and all search (/select) requests pass through
solrdispatchfilter2. I did this because I need to enforce privacy on
the search results:
I need to check whether the requesting user has access to the particular files
or not. Implementing the privacy of results was a success.
One major problem I am getting is that after indexing some documents and
committing, I am not getting the committed data in the search results; I am
getting the old data from before the commit.
I only get the new results after restarting the server. Can anyone tell me
what to modify so that search returns results from the most recent
commit?


Thanks and Regards,
satya


Re: how to set cookie for url requesting in stream_url

2011-04-08 Thread satya swaroop
Hi All,
 I was able to set the cookie value on the stream.url connection: I
passed the cookie value down to the ContentStreamBase.URLStream class and
added
conn.setRequestProperty("Cookie", cookie[0].name + "=" + cookie[0].value)
to the connection setup, and it is working fine now.

Regards,
satya
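The same idea in a small client-side sketch: attach a `Cookie` header to the HTTP request that fetches the remote document, analogous to `conn.setRequestProperty("Cookie", ...)` above. The URL and cookie values are illustrative assumptions:

```python
# Sketch: attach a Cookie header to the request that will fetch a remote
# document, like conn.setRequestProperty("Cookie", ...) in the thread.
# The URL, cookie name, and cookie value are made-up examples.
import urllib.request

def make_request(url, cookie_name, cookie_value):
    req = urllib.request.Request(url)
    req.add_header("Cookie", f"{cookie_name}={cookie_value}")
    return req

req = make_request("http://remotehost:8080/file_download.yaws?file=a.pdf",
                   "sessionid", "abc123")
print(req.get_header("Cookie"))
```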


Fwd: how to set cookie for url requesting in stream_url

2011-04-01 Thread satya swaroop
HI Markus,
   I am using solr branch_3x, in tomcat web server
Regards,
satya


how to set cookie for url requesting in stream_url

2011-03-31 Thread satya swaroop
Hi All,
For indexing documents on another server I need to include a
cookie value in the URL requested through stream_url.
Can anybody tell me how to include the cookie in the request?
Has anybody done this kind of thing? If there are any suggestions, please
tell me.

ex:
http://localhost:8456/solr/update/extract?stream_url=remote_server_url&literal.id=13748

Here I need to include a cookie value while requesting the
remote_server_url.


Regards,
satya


Solr coding

2011-03-23 Thread satya swaroop
Hi All,
  For my project requirements I need to keep search results private,
so I need to modify the Solr code.

For example, if there are 5 users and each user indexes some files:
  user1 - java1, c1, sap1
  user2 - java2, c2, sap2
  user3 - java3, c3, sap3
  user4 - java4, c4, sap4
  user5 - java5, c5, sap5

If user2 searches for the keyword java, then only the file java2 should be
displayed, and not the other files.

In order to do this filtering inside Solr itself, may I know where to
modify the code? I will query a database to check which files the user
indexed and then filter the results. I don't have any cores; I indexed all
files in a single index.

Regards,
satya


Re: Solr coding

2011-03-23 Thread satya swaroop
Hi Jayendra,
I forgot to mention that the result also depends on the user's
groups. It is somewhat complex, so I didn't mention it before. Here is the
exact setup:

  user1, group1 - java1, c1, sap1
  user2, group2 - java2, c2, sap2
  user3, group1, group3 - java3, c3, sap3
  user4, group3 - java4, c4, sap4
  user5, group3 - java5, c5, sap5

 (user1, group1 means user1 belongs to group1)


Here the filter includes the group too. For example, if user1 searches for
java, then the results should show java1 and java3, since the java3 file is
accessible to all users who belong to group1. That is why I thought of
editing the code.

Thanks,
satya
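A common alternative to modifying Solr internals is to index a per-document access field and have the client append a filter query built from the searching user's groups. A minimal sketch of that pattern, where the `acl` field name and the group values are assumptions, not anything from the thread's schema:

```python
# Sketch: build a Solr filter-query (fq) string from a user's groups so
# that document-level security is enforced at query time, instead of
# inside modified Solr code. Field name "acl" is an illustrative choice.
from urllib.parse import urlencode

def group_filter(groups, field="acl"):
    """fq restricting results to docs readable by any of the groups."""
    clause = " OR ".join(sorted(groups))
    return f"{field}:({clause})"

def search_params(query, groups):
    """Query-string for /select with the ACL filter attached."""
    return urlencode({"q": query, "fq": group_filter(groups)})

print(group_filter({"group1", "group3"}))
print(search_params("java", {"group1"}))
```

The trade-off raised later in the thread still applies: with very many groups per user the fq clause grows large, and group membership changes require no reindexing with this approach, only a different fq.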


Re: Solr coding

2011-03-23 Thread satya swaroop
Hi Jayendra,
  A group field can work if the number of groups is
small, but if a user belongs to, say, 1000 groups, it would be
difficult to build the query. Also, if a user changes groups, we would have
to reindex the data again.

OK, I will try your suggestion; if it fulfills the needs, the task will be
very easy.

Regards,
satya


solr indexing

2011-02-22 Thread satya swaroop
Hi all,
   Out of keen interest in Solr's indexing mechanism, I started mining the
Solr indexing code (/update/extract). I read about the index file formats
and the scoring procedure, and I have some questions about this:
1) Scoring is performed on dynamic and precalculated values (doc
boost, field boost, length norm). If a term
in the index occurs in nearly one million docs, does Solr calculate the
score for each and every doc for that term and then take the top docs
from the index? Or is there some mechanism that limits the
score calculation to only a subset of the docs?

If anybody knows about this, or of any documentation regarding it, please
let me know.


Regards,
satya


is solr dynamic calculation??

2011-02-17 Thread satya swaroop
Hi All,
 I have a question: does Solr compute document scores dynamically at
query time, or are they precalculated and simply served?

For example:
if I query q=solr on my index and get 25
documents back, what is being calculated? I am very keen to know how the
score is computed and how the results are ordered.


Regards,
satya


Re: is solr dynamic calculation??

2011-02-17 Thread satya swaroop
Hi Markus,
As far as I have gone through Solr scoring, the scoring is
done during searching, using the boost values that were given during
indexing.
I have a question now. If I search for a keyword java:
1) if the term java occurs in 50,000 documents in the index, does Solr
calculate the score value for each and every document, then filter, sort,
and serve the results? If it does the calculation dynamically
for each and every document, that would take a long time; how does Solr
reduce it?
 Am I right? If anything is wrong, please tell me.

Regards,
satya
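The short answer to the question above is that Lucene does score every document matching the query, but it never sorts all 50,000 of them: it keeps only the best `rows` hits in a fixed-size priority queue as it goes. A minimal sketch of that top-k idea (not Lucene's actual HitQueue code; the scores are made-up values):

```python
# Sketch of the top-k collection idea behind Lucene/Solr result ranking:
# every matching doc gets a score, but only the k best are retained in a
# bounded min-heap, so no full sort of all scored docs is needed.
import heapq

def top_k(scored_docs, k):
    """scored_docs: iterable of (doc_id, score); return best k by score."""
    heap = []  # min-heap of (score, doc_id); weakest retained hit on top
    for doc_id, score in scored_docs:
        if len(heap) < k:
            heapq.heappush(heap, (score, doc_id))
        elif score > heap[0][0]:
            # better than the current weakest hit: swap it in, O(log k)
            heapq.heapreplace(heap, (score, doc_id))
    # highest score first, like Solr's default relevance ordering
    return [doc for _, doc in sorted(heap, reverse=True)]

hits = [("d1", 0.2), ("d2", 1.5), ("d3", 0.9), ("d4", 1.1), ("d5", 0.4)]
print(top_k(hits, 3))
```

The cost is O(n log k) rather than O(n log n), which is why requesting a huge `rows` value is much more expensive than the default.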


Re: spell suggest response

2011-01-17 Thread satya swaroop
Hi Grijesh,
   Though I use autosuggest, I may not get exact results; the
order is not accurate. For example, if I type
http://localhost:8080/solr/terms/?terms.fl=spell&terms.prefix=solr&terms.sort=index&terms.lower=solr&terms.upper.incl=true
 I get results such as:
solr
solr.amp
solr.datefield
solr.p
solr.pdf
   But this may not lead to results as accurate as we get from
spellchecking.

I require suggestions for any word, irrespective of whether it is spelled
correctly or not. Is there anything to change in Solr to get suggestions
the way we get them when we type a wrong word in spellchecking? If so,
please let me know.

Regards,
satya


Re: spell suggest response

2011-01-17 Thread satya swaroop
Hi Grijesh,
I added both the TermsComponent and the spellcheck component to the
terms request handler. When I send a query such as
http://localhost:8080/solr/terms?terms.fl=text&terms.prefix=java&rows=7&omitHeader=true&spellcheck=true&spellcheck.q=java&spellcheck.count=20

the result I get is
<response>
<lst name="terms">
<lst name="text">
<int name="java">6</int>
<int name="javabas">6</int>
<int name="javas">6</int>
<int name="javascript">6</int>
<int name="javac">6</int>
<int name="javax">6</int>
</lst>
</lst>
<lst name="spellcheck">
<lst name="suggestions"/>
</lst>
</response>



When I send this
http://localhost:8080/solr/terms?terms.fl=text&terms.prefix=jawa&rows=5&omitHeader=true&spellcheck=true&spellcheck.q=jawa&spellcheck.count=20
I get the result as

<response>
<lst name="terms">
<lst name="text"/>
</lst>
<lst name="spellcheck">
<lst name="suggestions">
<lst name="jawa">
<int name="numFound">20</int>
<int name="startOffset">0</int>
<int name="endOffset">4</int>
<arr name="suggestion">
<str>java</str>
<str>away</str>
<str>jav</str>
<str>jar</str>
<str>ara</str>
<str>apa</str>
<str>ana</str>
<str>ajax</str>


Now I need to know how to order the terms: in the first query the
result is in index order, but I want only javax, javac, and javascript, not
javas or javabas. How can that be done?

Regards,
satya
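One pragmatic option is to post-process the TermsComponent output on the client: keep only suggestions that strictly extend the typed prefix, rank them (e.g. by count), and apply an application-specific predicate to drop unwanted variants like `javas`, since the TermsComponent itself cannot know which extensions you consider meaningful. A sketch with made-up term counts and a hypothetical whitelist:

```python
# Sketch: client-side post-filtering of TermsComponent suggestions.
# Keeps terms that strictly extend the prefix, ranks by count, and lets
# an accept() predicate drop unwanted variants (counts are made-up).
def refine(prefix, term_counts, accept=lambda t: True):
    hits = [(t, c) for t, c in term_counts.items()
            if t.startswith(prefix) and t != prefix and accept(t)]
    # highest count first, then alphabetically for a stable ordering
    hits.sort(key=lambda tc: (-tc[1], tc[0]))
    return [t for t, _ in hits]

terms = {"java": 6, "javax": 6, "javac": 5, "javascript": 4,
         "javas": 6, "javabas": 6}
wanted = {"javax", "javac", "javascript"}  # e.g. a dictionary/whitelist
print(refine("java", terms, accept=lambda t: t in wanted))
```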


spellchecking even the key is true....

2011-01-17 Thread satya swaroop
Hi All,
Can we get spellcheck results even when the keyword is spelled
correctly? Spellchecking only gives suggestions for misspelled keywords;
can't we get similar and near words for the keyword even though
spellcheck.q is a correct word?
As an example, for
http://localhost:8080/solr/spellcheck?q=java&spellcheck=true&spellcheck.count=5
the result will be

1)
<response>
<lst name="spellcheck">
<lst name="suggestions"/>
</lst>
</response>


Can we get the result as
2)
<response>
<lst name="spellcheck">
<lst name="suggestions">
<str>javax</str>
<str>javac</str>
<str>javabean</str>
<str>javascript</str>
</lst>
</lst>
</response>

NOTE: all the keywords in the 2nd result are in the index.

Regards,
satya


Re: spell suggest response

2011-01-16 Thread satya swaroop
Hi Grijesh,
As you said you are implementing this kind of thing, can you
briefly tell how you did it?

Regards,
satya


Re: spell suggest response

2011-01-12 Thread satya swaroop
Hi Stefan,
I need the words from the index itself. If java is given,
then the relevant, similar, or near words in the index should be shown,
even when the given keyword is spelled correctly. Is that possible?


ex:

http://localhost:8080/solr/spellcheckCompRH?q=java&rows=0&spellcheck=true&spellcheck.count=10
   In the output no suggestions will be coming, since
java is a word that is spelt correctly.
  But can't we get near suggestions such as javax, javac, etc. (the
terms in the index)?

I read about the Suggester in the Solr wiki at
http://wiki.apache.org/solr/Suggester and tried to implement it, but got
errors such as

*error loading class org.apache.solr.spelling.suggest.suggester*

Regards,
satya


Re: spell suggest response

2011-01-12 Thread satya swaroop
Hi Juan,
 Yeah, I tried onlyMorePopular and got some results, but they are
not similar or near words to the word I gave in the query.
Here is the output:

http://localhost:8080/solr/spellcheckCompRH?q=java&rows=0&spellcheck=true&spellcheck.collate=true&spellcheck.onlyMorePopular=true&spellcheck.count=20

the output I get is
<arr name="suggestion">
<str>data</str>
<str>have</str>
<str>can</str>
<str>any</str>
<str>all</str>
<str>has</str>
<str>each</str>
<str>part</str>
<str>make</str>
<str>than</str>
<str>also</str>
</arr>



But these words are not similar to the given word 'java'; the near words
would be javac, javax, data, java.io, etc. The stated words are present in
the index.


Regards,
satya


spell suggest response

2011-01-11 Thread satya swaroop
Hi All,
 Can we get just the spell suggestions, without the file results?
Here I state an example: when I query
http://localhost:8080/solr/spellcheckCompRH?q=java daka
usar&spellcheck=true&spellcheck.count=5&spellcheck.collate=true

I get some results of java files and then the suggestions for the words
daka->data, usar->user. But actually I need only the spell suggestions;
time is being consumed displaying the files before giving the
spell suggestions. Can't we post a query to Solr where the
response is only the spell suggestions?

Regards,
satya


Re: spell suggest response

2011-01-11 Thread satya swaroop
Hi Gora,
   I am using Solr for file indexing and searching, but I have a
module where I don't need any file results, only the spell suggestions. So
I asked whether there is any way in Solr to get only the spell suggestion
responses. I think it is clear for you now; if not, tell me and I will try
to explain further.

Regards,
satya


Re: spell suggest response

2011-01-11 Thread satya swaroop
Hi Stefan,
  Yes, it works :). Thanks.
  But I have a question: can we get spell
suggestions even if the spelled word is correct? I mean near words to
it.
   ex:

http://localhost:8080/solr/spellcheckCompRH?q=java&rows=0&spellcheck=true&spellcheck.count=10
   In the output the suggestions will not be coming, since
java is a word that is spelt correctly.
  But can't we get near suggestions such as javax, javac, etc.?

Regards,
satya


error in html???

2010-12-23 Thread satya swaroop
Hi All,

 I am able to get the response in the success case in JSON format by
stating wt=json in the query. But in case of any errors I get the response
in HTML format.
 1) Is there any specific reason it comes back as HTML?
  2) Can't we get the error result in JSON format?

Regards,
satya


Re: error in html???

2010-12-23 Thread satya swaroop
Hi Erick,
   Every result comes in XML format, but when you get errors
like HTTP 500 or HTTP 400 you get them in HTML format. My question is:
can't we make that HTML response JSON, or vice versa?

Regards,
satya


Different Results..

2010-12-22 Thread satya swaroop
Hi All,
 I am getting different results when I use certain escape characters.
For example:
1) when I use this request
http://localhost:8080/solr/select?q=erlang!ericson
   the result obtained is
   <result name="response" numFound="1934" start="0">

2) when the request is
 http://localhost:8080/solr/select?q=erlang/ericson
the result is
  <result name="response" numFound="1" start="0">


My question here is: does Solr consider the two queries differently, and
what does it do with !, /, and the other escape characters?


Regards,
satya
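Characters like `!` and `/` are query-parser syntax, so unescaped they change how the query is parsed rather than being searched literally. The usual fix is to backslash-escape them client-side before building `q=...`; SolrJ ships a helper for this (`ClientUtils.escapeQueryChars`). A stdlib sketch of the same idea, using the classic Lucene query-parser special-character list:

```python
# Sketch: escape Lucene query-parser special characters so that
# "erlang!ericson" is searched literally instead of being parsed as
# query syntax. Character set per the classic Lucene query parser.
LUCENE_SPECIAL = set('+-&|!(){}[]^"~*?:\\/')

def escape_query_chars(s):
    return "".join("\\" + ch if ch in LUCENE_SPECIAL else ch for ch in s)

print(escape_query_chars("erlang!ericson"))
print(escape_query_chars("erlang/ericson"))
```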


Re: Google like search

2010-12-16 Thread satya swaroop
Hi All,

 Thanks for your suggestions. I got the result I expected.

Cheers,
Satya


Testing Solr

2010-12-16 Thread satya swaroop
Hi All,

 I built Solr successfully and I am thinking of testing it with nearly
300 PDF files, 300 Word docs, 300 Excel files, and so on, about 300 files
of each type.
 Is there any dummy data available for testing Solr? Otherwise I would need
to download each and every file individually.
Another question: are there any benchmarks of Solr?

Regards,
satya


Google like search

2010-12-14 Thread satya swaroop
Hi All,
 Can we get results like Google's, with a snippet of text around the
match? I was able to get the first 300 characters of a
file, but that is not helpful for me. Can I get a snippet that contains the
first occurrence of the searched key in that file?

Regards,
Satya


Re: Google like search

2010-12-14 Thread satya swaroop
Hi Tanguy,
  I am not asking for highlighting. I think it can be
explained with an example; here I illustrate it:

when I post a query like this:

http://localhost:8080/solr/select?q=Java&version=2.2&start=0&rows=10&indent=on

I get the result as follows:

<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">1</int>
</lst>
<result name="response" numFound="1" start="0">
<doc>
<str name="filename">Java%20debugging.pdf</str>
<str name="id">122</str>
<arr name="text1">
<str>
Table of Contents
If you're viewing this document online, you can click any of the topics
below to link directly to that section.
1. Tutorial tips 2
2. Introducing debugging  4
3. Overview of the basics 6
4. Lessons in client-side debugging 11
5. Lessons in server-side debugging 15
6. Multithread debugging 18
7. Jikes overview 20
</str>
</arr>
</doc>
</result>
</response>

Here the str field contains the first 300 characters of the file, because I
kept a field that copies only 300 characters in schema.xml.
But I don't want the content like this. Is there any way to make the output
as follows:

<str>Java is one of the best languages, java is easy to learn...</str>


where this content comes from the start of the chapter, where the first
occurrence of the word java is in the file?


Regards,
Satya
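What is being described is essentially what Solr's highlighter produces: a fragment of the stored field centered on the first match. A sketch that builds such a request URL with the standard `hl.*` parameters; the field name and fragment size here are illustrative assumptions, not values from this schema:

```python
# Sketch: build a Solr query URL that asks the highlighter for one
# fragment around the match, the usual way to get a Google-style snippet.
# Field name "text" and fragsize 150 are illustrative assumptions.
from urllib.parse import urlencode

def snippet_query(base, q, field="text", fragsize=150):
    params = {
        "q": q,
        "hl": "true",           # enable highlighting
        "hl.fl": field,         # field to build snippets from
        "hl.snippets": "1",     # one fragment per document
        "hl.fragsize": str(fragsize),
    }
    return base + "/select?" + urlencode(params)

url = snippet_query("http://localhost:8080/solr", "java")
print(url)
```

The field must be stored for the highlighter to read it, which is why the `stored="false"` text fields discussed elsewhere in these threads cannot produce snippets.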


Re: Google like search

2010-12-14 Thread satya swaroop
Hi Tanguy,
 Thanks for your reply. Sorry to ask this type of question:
how can we index each chapter of a file as a separate document? As far as I
know, we just give the path of the file to Solr to index it. Can you
provide me any sources on this, I mean any blogs or wikis?

Regards,
satya


Re: RAM increase

2010-10-29 Thread satya swaroop
Hi All,

 Thanks for your replies. I have a doubt whether to increase the RAM or
heap size for Java, or for the Tomcat instance where Solr is running.


Regards,
satya


Re: solr result....

2010-10-28 Thread satya swaroop
Hi Lance,
  I actually copied Tika exceptions into one HTML file and indexed
it; it is just the content of a file. Here I'll tell you what I mean:


if I post a query like *java*, then the result or response from Solr should
show only a part of the content, like the following:

http://localhost:8456/solr/select/?q=java&version=2.2&start=10&rows=10&indent=on

<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">453</int>
</lst>
<result name="response" numFound="62" start="10">
<doc>
<arr name="content_type">
<str>application/pdf</str>
</arr>
<str name="id">javaebuk</str>
<date name="last_modified">2001-07-02T11:54:10Z</date>
<arr name="text">
<str>

A Java program with two main methods  The following is an example of a java
program with two main methods with different signatures.
Program 3
public class TwoMains
{
/** This class has two main methods with
* different signatures */
public static void main (String args[])  .
  </str>
 </arr>
</doc>...

</response>




The doc in the result should not contain the entire content of the file; it
should have only a part of the content: the first hit
of the word java in that file.


Regards,
satya


solr result....

2010-10-27 Thread satya swaroop
Hi,
  Can the result from Solr show only a part of the content of a
document in the result?
example:

if I send a query to search for tika, then the result should be as follows:

<response>
<lst name="responseHeader">
<int name="status">0</int>
<int name="QTime">79</int>
</lst>
<result name="response" numFound="62" start="0">
<doc>
<arr name="content_type">
   <str>text/html</str>
</arr>
 <str name="id">1html</str>
<arr name="text">
<str>
   Apache Tomcat/6.0.26 - Error report  HTTP Status 500 -
org.apache.tika.exception.TikaException: Unexpected RuntimeException from
org.apache.tika.parser.pdf.pdfpar...@cc9d70

org.apache.solr.common.SolrException:
org.apache.tika.exception.TikaException: Unexpected RuntimeException from
org.apache.tika.parser.pdf.pdfpar...@cc9d70
at
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:214)
at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:237)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1323)
at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)...

 </str>
   </arr>
</doc>


The result should not show the entire content of a file; it should show
only the part of the content where the query word is present, like
Google results and the search results at Lucid Imagination.
Regards,
satya


RAM increase

2010-10-20 Thread satya swaroop
Hi all,
  I increased my RAM size to 8GB and I want 4GB of it to be used
for Solr itself. Can anyone tell me the way to allocate the RAM for
Solr?


Regards,
satya


solr requirements

2010-10-18 Thread satya swaroop
Hi All,
I am planning to have a separate server for Solr, and regarding
hardware requirements I have a doubt about what configuration is needed.
I know it will be hard to tell, but I just need a minimum requirement for
the particular situation as follows:


1) There are 1000 regular users using Solr, and every day each user indexes
10 files of 1KB each; in total that is 10MB per day, and it
keeps growing.

2) How much RAM does Solr use in general?

Thanks,
satya


Re: solr requirements

2010-10-18 Thread satya swaroop
Hi,
   Here is some more info about it. I use Solr to output only the file
names (file ids). Here I enclose the fields in my schema.xml; presently I
have only about 40MB of indexed data.


   <field name="id" type="string" indexed="true" stored="true"
required="true" />
   <field name="sku" type="textTight" indexed="true" stored="false"
omitNorms="true"/>
   <field name="name" type="textgen" indexed="true" stored="false"/>

   <field name="manu" type="textgen" indexed="true" stored="false"
omitNorms="true"/>
   <field name="cat" type="text_ws" indexed="true" stored="false"
multiValued="true" omitNorms="true" />
   <field name="features" type="text" indexed="true" stored="false"
multiValued="true"/>
   <field name="includes" type="text" indexed="true" stored="false"
termVectors="true" termPositions="true" termOffsets="true" />

   <field name="weight" type="float" indexed="true" stored="false"/>
   <field name="price"  type="float" indexed="true" stored="false"/>
   <field name="popularity" type="int" indexed="true" stored="false" />
   <field name="inStock" type="boolean" indexed="true" stored="false" />

   <!--
   The following store examples are used to demonstrate the various ways one
might _CHOOSE_ to
implement spatial.  It is highly unlikely that you would ever have ALL
of these fields defined.
-->
   <field name="store" type="location" indexed="true" stored="false"/>
   <field name="store_lat_lon" type="latLon" indexed="true" stored="false"/>
   <field name="store_hash" type="geohash" indexed="true" stored="false"/>


   <!-- Common metadata fields, named specifically to match up with
 SolrCell metadata when parsing rich documents such as Word, PDF.
 Some fields are multiValued only because Tika currently may return
 multiple values for them.
   -->
   <field name="title" type="text" indexed="true" stored="true"
multiValued="true"/>
   <field name="subject" type="text" indexed="true" stored="false"/>
   <field name="description" type="text" indexed="true" stored="false"/>
   <field name="comments" type="text" indexed="true" stored="false"/>
   <field name="author" type="textgen" indexed="true" stored="false"/>
   <field name="keywords" type="textgen" indexed="true" stored="false"/>
   <field name="category" type="textgen" indexed="true" stored="false"/>
   <field name="content_type" type="string" indexed="true" stored="false"
multiValued="true"/>
   <field name="last_modified" type="date" indexed="true" stored="false"/>
   <field name="links" type="string" indexed="true" stored="false"
multiValued="true"/>
<!-- added here content satya-->
   <field name="content" type="spell" indexed="true" stored="false"
multiValued="true"/>


   <!-- catchall field, containing all other searchable text fields
(implemented
via copyField further on in this schema)  -->
   <field name="text" type="text" indexed="true" stored="false"
multiValued="true" termVectors="true"/>

   <!-- catchall text field that indexes tokens both normally and in reverse
for efficient
leading wildcard queries.  here satya-->
   <field name="text_rev" type="text_rev" indexed="true" stored="false"
multiValued="true"/>

   <!-- non-tokenized version of manufacturer to make it easier to sort or
group
results by manufacturer.  copied from manu via copyField here
satya-->
   <field name="manu_exact" type="string" indexed="true" stored="false"/>
   <field name="spell" type="spell" indexed="true" stored="false"
multiValued="true"/>
<!-- here changed -->
   <field name="payloads" type="payloads" indexed="true" stored="false"/>

 <field name="timestamp" type="date" indexed="true" stored="false"
default="NOW" multiValued="false"/>



Regards,
satya


ant build problem

2010-10-04 Thread satya swaroop
Hi all,
I updated my Solr trunk to revision 1004527. When I compile
the trunk with ant I get many warnings, but the build is successful. The
warnings are here:
common.compile-core:
[mkdir] Created dir:
/home/satya/temporary/trunk/lucene/build/classes/java
[javac] Compiling 475 source files to
/home/satya/temporary/trunk/lucene/build/classes/java
[javac] warning: [path] bad path element
/usr/share/ant/lib/hamcrest-core.jar: no such file or directory
[javac]
/home/satya/temporary/trunk/lucene/src/java/org/apache/lucene/queryParser/QueryParserTokenManager.java:455:
warning: [cast] redundant cast to int
[javac]  int hiByte = (int)(curChar >> 8);
[javac]   ^
[javac]
/home/satya/temporary/trunk/lucene/src/java/org/apache/lucene/queryParser/QueryParserTokenManager.java:705:
warning: [cast] redundant cast to int
[javac]  int hiByte = (int)(curChar >> 8);
[javac]   ^
[javac]
/home/satya/temporary/trunk/lucene/src/java/org/apache/lucene/queryParser/QueryParserTokenManager.java:812:
warning: [cast] redundant cast to int
[javac]  int hiByte = (int)(curChar >> 8);
[javac]   ^
[javac]
/home/satya/temporary/trunk/lucene/src/java/org/apache/lucene/queryParser/QueryParserTokenManager.java:983:
warning: [cast] redundant cast to int
[javac]  int hiByte = (int)(curChar >> 8);
[javac]   ^
[javac]
/home/satya/temporary/trunk/lucene/src/java/org/apache/lucene/search/FieldCacheImpl.java:209:
warning: [unchecked] unchecked cast
[javac] found   : java.lang.Object
[javac] required: T
[javac] key.creator.validate( (T)value, reader);
[javac]  ^
[javac]
/home/satya/temporary/trunk/lucene/src/java/org/apache/lucene/search/FieldCacheImpl.java:278:
warning: [unchecked] unchecked call to
Entry(java.lang.String,org.apache.lucene.search.cache.EntryCreator&lt;T&gt;) as a
member of the raw type org.apache.lucene.search.FieldCacheImpl.Entry
[javac] return (ByteValues)caches.get(Byte.TYPE).get(reader, new
Entry(field, creator));
ptionList.addAll(exceptions);

||

[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files additionally use unchecked or unsafe
operations.
[javac] 100 warnings

BUILD SUCCESSFUL
Total time: 19 seconds


Here I placed only the beginning of the warnings.
After compiling I ran ant test, but it failed.

I didn't find any hamcrest-core.jar in my ant library.
I use ant 1.7.1.


Regards,
satya


ant package

2010-09-21 Thread satya swaroop
Hi all,
I want to build a package of my Solr checkout and found it can be done
using ant. When I type ant package in the solr module I get an error:


sa...@swaroop:~/temporary/trunk/solr$ ant package
Buildfile: build.xml

maven.ant.tasks-check:

BUILD FAILED
/home/satya/temporary/trunk/solr/common-build.xml:522:
##
  Maven ant tasks not found.
  Please make sure the maven-ant-tasks jar is in ANT_HOME/lib, or made
  available to Ant using other mechanisms like -lib or CLASSPATH.
  ##

Total time: 0 seconds


Can anyone tell me the procedure to build it, or give any information
about it?

Regards,
satya


Re: ant package

2010-09-21 Thread satya swaroop
Hi,
  Yes, I don't have the jar file in ant/lib. Where can I get the jar
file, or what is the procedure to build that maven-artifact-ant-2.0.4-dep.jar?

regards,
satya


Re: ant package

2010-09-21 Thread satya swaroop
Hi Erick,
 Thanks for the reply. I downloaded the jar file and put it
in the ant library.
Now when I run the ant package command I get an error in the middle of the
build, in generate-maven-artifacts. The error is:

sa...@geodesic-desktop:~/temporary/trunk/solr$ sudo  ant  package
---
---
---
generate-maven-artifacts:
[mkdir] Created dir: /home/satya/temporary/trunk/solr/build/maven
[mkdir] Created dir: /home/satya/temporary/trunk/solr/dist/maven
 [copy] Copying 1 file to
/home/satya/temporary/trunk/solr/build/maven/src/maven
[artifact:install-provider] Installing provider:
org.apache.maven.wagon:wagon-ssh:jar:1.0-beta-2

BUILD FAILED
/home/satya/temporary/trunk/solr/build.xml:853: The following error occurred
while executing this line:
/home/satya/temporary/trunk/solr/common-build.xml:373: artifact:deploy
doesn't support the uniqueVersion attribute

Total time: 1 minute 51 seconds
sa...@desktop:~/temporary/trunk/solr$

Regards,
satya


SolrCloud new....

2010-09-20 Thread satya swaroop
Hi all,
I have 4 instances of Solr on 4 systems; each system has a
single instance of Solr. I want results from all of these servers. I came
to know about SolrCloud, read about it, worked through the example, and it
worked as described in the wiki.
I am using Solr 1.4 and Apache Tomcat. In order to implement SolrCloud on
the Solr trunk, what procedure should be followed?
1) Should I copy the libraries from cloud to trunk?
2) Should I keep the cloud module on every system?
3) I am not using any cores in Solr; it is a single Solr instance on every
system. Can SolrCloud support that?
4) The example is given with Jetty. Is it done the same way with Tomcat?

Regards,
satya


cloud or zookeeper

2010-09-15 Thread satya swaroop
Hi All,
   What is the difference between using shards, SolrCloud, and ZooKeeper?
Which is the best way to scale Solr?
 I need to reduce the index size on every system and reduce the search time
for a query.

Regards,
satya


Re: stream.url

2010-09-08 Thread satya swaroop
Hi Hoss,

 Thanks for the reply; it got working. The reason was, as you
said, that I was not double-escaping. I used %2520 for whitespace and it is
working now.

Thanks,
satya
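The fix described above (%2520) is URL-encoding applied twice: once so the inner file-download URL is valid, and once more because that URL is itself embedded as the stream.url parameter of the Solr request. A sketch with the stdlib, using an illustrative filename:

```python
# Sketch: the "%2520" trick is just URL-encoding applied twice. The inner
# URL must be encoded once for the remote file server, then encoded again
# when embedded as the stream.url parameter of the Solr request.
from urllib.parse import quote

filename = "solr & apache.pdf"   # illustrative filename from the thread
inner = quote(filename)          # encode once: space -> %20, & -> %26
outer = quote(inner)             # encode again: % -> %25, so %20 -> %2520
print(inner)
print(outer)
```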


Re: stream.url

2010-09-03 Thread satya swaroop
Hi all,

  I am unable to index files on a remote system that contain escaped
characters in their file names. I think there is a problem in Solr with
indexing files with escaped characters on a remote system.
Has anybody tried to index files on a remote system whose names contain
escaped characters? Solr is working fine for files that have no
escaped characters in their names.


I sent the request through curl with the filename URL-encoded, but the
problem is the same.

Regards,
satya


stream.url

2010-09-02 Thread satya swaroop
Hi all,

  I am using stream.url to index files on a remote system. When I
use the URL as
1) curl "http://localhost:8080/solr/update/extract?stream.url=http://remotehost:port/file_download.yaws?file=yaws_presentation.pdf&literal.id=schb4"

it works and I get the response that the file got indexed.

But when I use
2) curl "http://localhost:8080/solr/update/extract?stream.url=http://remotehost:port/file_download.yaws?file=solr & apache.pdf&literal.id=schb5"
I get an error in Solr. I replaced the escaped characters, using %20
for space and %26 for &, but the error is the same, saying

Unexpected end of file from server java.net.SocketException.

When I used it without Solr, as
http://remotehost:port/file_download.yaws?file=solr & apache.pdf,
I got the file downloaded to my system.

Here I enclose the entire error:

HTTP Status 500 - Unexpected end of file from server
java.net.SocketException: Unexpected end of file from server at
sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method) at
sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
at
sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513) at
sun.net.www.protocol.http.HttpURLConnection$6.run(HttpURLConnection.java:1368)
at java.security.AccessController.doPrivileged(Native Method) at
sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1362)
at
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1016)
at
org.apache.solr.common.util.ContentStreamBase$URLStream.getStream(ContentStreamBase.java:88)
at
org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:169)
at
org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:57)
at
org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:133)
at
org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1355) at
org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:340)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at
org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
at
org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:619) Caused by:
java.net.SocketException: Unexpected end of file from server at
sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:769) at
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:632) at
sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:766) at
sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:632) at
sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1072)
at
sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2173)
at java.net.URLConnection.getContentType(URLConnection.java:485) at
org.apache.solr.common.util.ContentStreamBase$URLStream.&lt;init&gt;(ContentStreamBase.java:81)
at
org.apache.solr.servlet.SolrRequestParsers.buildRequestFrom(SolrRequestParsers.java:138)
at
org.apache.solr.servlet.SolrRequestParsers.parse(SolrRequestParsers.java:117)
at
org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:226)
... 12 more


Can anybody provide information regarding this?


Regards,
Satya


Re: stream.url

2010-09-02 Thread satya swaroop
Hi Stefan,
   I used escape characters and it worked... It is no longer a problem for the
single file 'solr & apache.pdf', but the same problem shows up for files
like 'Wireless lan.ppt' and 'Tom info.pdf'.

The curl I sent is:

curl "http://localhost:8080/solr/update/extract?stream.url=http://remotehost:port/file_download.yaws%3Ffile=solr%20%26%20apache.pdf&literal.id=schb5"

Regards,
satya


Re: stream.url

2010-09-02 Thread satya swaroop
Hi,
I made the curl call from the shell (command prompt or terminal) with the
escape characters, but the error is the same, and when I looked on the remote
system the request is not getting there. Is there anything to be changed
in the config file to enable escape characters for stream.url?

Did anybody try indexing files on a remote system through stream.url, where
the file names contain characters like '&' or space that need escaping?

regards,
satya
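For reference, one way to avoid hand-encoding mistakes in requests like the ones in this thread is to let a small script build the URL. A minimal sketch in Python; the host, port, file name, and `file_download.yaws` handler are taken from the posts above, and the script itself is only an illustration, not part of Solr:

```python
from urllib.parse import quote, urlencode

# The remote URL Solr should fetch: the space and '&' inside the file name
# are data, so percent-encode them before embedding them in the URL.
remote = "http://remotehost:8011/file_download.yaws?file=" + quote("solr & apache.pdf")

# That remote URL is itself the *value* of the stream.url parameter of the
# Solr request, so it gets percent-encoded a second time by urlencode().
params = urlencode({"stream.url": remote, "literal.id": "schb5"})
solr_request = "http://localhost:8080/solr/update/extract?" + params

print(solr_request)
```

Passing the result to curl in double quotes also keeps the shell from treating a literal `&` as a background operator.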


solr working...

2010-08-26 Thread satya swaroop
Hi all,
  I am interested to see how Solr works.
1) Can anyone tell me where to start to understand its working?

Regards,
satya


Re: solr working...

2010-08-26 Thread satya swaroop
Hi Peter,
I am already working with Solr and it is working well. But I want
to understand the code and know where the actual work happens: how indexing
is done, how requests are parsed, and how responses are produced. To
understand the code, I asked how to start.

Regards,
satya


Re: solr working...

2010-08-26 Thread satya swaroop
Hi all,

  Thanks for your responses and the information. I used slf4j logging and put
a log.info() call in every class of the Solr module to learn which classes get
invoked for a particular request handler or at Solr startup. I was able to
do this only in the Solr module, not in the Lucene module; I get an error when
I use it in that module. Can anyone tell me other ways like this to trace the
path through Solr?

Regards,
  satya


reduce the content???

2010-08-25 Thread satya swaroop
Hi all,
  I indexed nearly 100 Java PDF files, which are large (at least 1 MB each).
Solr is showing results with the entire content that it indexed,
which makes the results slow to display. Can we reduce the content it
shows, or can I get just the file names and ids in the results instead of the
entire content?

Regards,
satya
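For what it's worth, the usual way to trim a Solr response is the `fl` (field list) parameter on the select handler. A minimal sketch in Python, assuming the schema actually has an `id` field and some file-name field; the field name `filename` here is a hypothetical placeholder:

```python
from urllib.parse import urlencode

# Request only the id and file-name fields instead of the full extracted
# content; 'filename' stands in for whatever field the schema defines.
params = urlencode({"q": "java", "fl": "id,filename", "rows": 10})
query_url = "http://localhost:8080/solr/select?" + params
print(query_url)
```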


Re: stream.url problem

2010-08-24 Thread satya swaroop

 Hi all,
 I found the solution to my problem: I had changed my port number but kept
 the old one in stream.url, so that was the cause. Thanks all.

 Now I have another problem: when I send requests to the remote system for
 files whose names contain characters that need escaping, such as '&' or
 space (for example "Tom Jerry.pdf"), I get an "Unexpected end of file from
 server" error.

 the request i sent is::

 curl "http://localhost:8080/solr/update/extract?stream.url=http://remotehost:8011/file_download.yaws?file=Wireless%20Lan.pdf&literal.id=su8"
 

 Here file_download.yaws is a module that fetches the file and gives it to
 Solr.

 Solr is able to index files on the remote system whose names don't contain
 escape characters, e.g. apache.txt, solr_apache.pdf.

 The error I got is:

 HTTP Status 500 - Unexpected end of file from server
 java.net.SocketException: Unexpected end of file from server
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:39)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
 at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
 at sun.net.www.protocol.http.HttpURLConnection$6.run(HttpURLConnection.java:1368)
 at java.security.AccessController.doPrivileged(Native Method)
 at sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1362)
 at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1016)
 at org.apache.solr.common.util.ContentStreamBase$URLStream.getStream(ContentStreamBase.java:88)
 at org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:161)
 at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:57)
 at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:133)
 at org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:242)
 at org.apache.solr.core.SolrCore.execute(SolrCore.java:1355)
 at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:340)
 at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:241)
 at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
 at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
 at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
 at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
 at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
 at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
 at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
 at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
 at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
 at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
 at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
 at java.lang.Thread.run(Thread.java:619)
 Caused by: java.net.SocketException: Unexpected end of file from server
 at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:769)
 at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:632)
 at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:766)
 at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:632)
 at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1072)
 at sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2173)
 at java.net.URLConnection.getContentType(URLConnection.java:485)
 at org.apache.solr.common.util.ContentStreamBase$URLStream.init(ContentStreamBase.java:81)
 at org.apache.solr.servlet.SolrRequestParsers.buildRequestFrom(SolrRequestParsers.java:138)
 at org.apache.solr.servlet.SolrRequestParsers.parse(SolrRequestParsers.java:117)
 at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:226)
 ...




Regards,
 satya


/update/extract

2010-08-19 Thread satya swaroop
Hi all,
   When we handle the extract request handler, what class gets invoked? I
need to know the navigation of classes when we send files to Solr.
Can anybody tell me the classes, or any sources where I can find the answer?
And can anyone tell me what classes get invoked when Solr starts? I would be
thankful if anybody could help me with this.

Regards,
satya


solr working...

2010-08-18 Thread satya swaroop
hi all,
i am very interested to know how solr works. Can anyone tell me
which modules or classes get invoked when we start the servlet
container (e.g. Tomcat), or when we send requests to Solr such as PDF
files, and which files are invoked at the start of Solr?

regards,
satya


stream.url problem

2010-08-17 Thread satya swaroop
hi all,
   I am indexing documents on my own system into Solr. Now I need to index
files that are on a remote system, so I enabled remote streaming (set it to
true) in solrconfig.xml. When I use stream.url, it shows a "connection
refused" error; the details of the error are below.

When I sent the request from my browser as:

http://localhost:8080/solr/update/extract?stream.url=http://remotehost/home/san/Desktop/programming_erlang_armstrong.pdf&literal.id=schb2

I get the error:

HTTP Status 500 - Connection refused
java.net.ConnectException: Connection refused
at sun.reflect.GeneratedConstructorAccessor11.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:27)
at java.lang.reflect.Constructor.newInstance(Constructor.java:513)
at sun.net.www.protocol.http.HttpURLConnection$6.run(HttpURLConnection.java:1368)
at java.security.AccessController.doPrivileged(Native Method)
at sun.net.www.protocol.http.HttpURLConnection.getChainedException(HttpURLConnection.java:1362)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1016)
at org.apache.solr.common.util.ContentStreamBase$URLStream.getStream(ContentStreamBase.java:88)
at org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:161)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:237)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1323)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:619)
Caused by: java.net.ConnectException: Connection refused
at java.net.PlainSocketImpl.socketConnect(Native Method)
at java.net.PlainSocketImpl.doConnect(PlainSocketImpl.java:333)
at java.net.PlainSocketImpl.connectToAddress(PlainSocketImpl.java:195)
at java.net.PlainSocketImpl.connect(PlainSocketImpl.java:182)
at java.net.SocksSocketImpl.connect(SocksSocketImpl.java:366)
at java.net.Socket.connect(Socket.java:525)
at java.net.Socket.connect(Socket.java:475)
at sun.net.NetworkClient.doConnect(NetworkClient.java:163)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:394)
at sun.net.www.http.HttpClient.openServer(HttpClient.java:529)
at sun.net.www.http.HttpClient.init(HttpClient.java:233)
at sun.net.www.http.HttpClient.New(HttpClient.java:306)
at sun.net.www.http.HttpClient.New(HttpClient.java:323)
at sun.net.www.protocol.http.HttpURLConnection.getNewHttpClient(HttpURLConnection.java:860)
at sun.net.www.protocol.http.HttpURLConnection.plainConnect(HttpURLConnection.java:801)
at sun.net.www.protocol.http.HttpURLConnection.connect(HttpURLConnection.java:726)
at sun.net.www.protocol.http.HttpURLConnection.getInputStream(HttpURLConnection.java:1049)
at sun.net.www.protocol.http.HttpURLConnection.getHeaderField(HttpURLConnection.java:2173)
at java.net.URLConnection.getContentType(URLConnection.java:485)
at org.apache.solr.common.util.ContentStreamBase$URLStream.init(ContentStreamBase.java:81)
at org.apache.solr.servlet.SolrRequestParsers.buildRequestFrom(SolrRequestParsers.java:136)
at org.apache.solr.servlet.SolrRequestParsers.parse(SolrRequestParsers.java:116)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:225)
...


If anybody knows, please help me with this.

regards,
satya
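As the 2010-08-24 follow-up earlier in this digest notes, the root cause of this "connection refused" turned out to be a stale port in stream.url (the URL above has no port at all, so it defaults to 80). A tiny pre-flight check on the URL before handing it to Solr can catch this class of mistake early. A hedged sketch in Python; the URL is the one from the post, and the expected port 8011 is only an assumption taken from the other posts in the thread:

```python
from urllib.parse import urlsplit

def check_stream_url(url, expected_port=None):
    """Return a list of likely problems with a URL destined for stream.url."""
    parts = urlsplit(url)
    problems = []
    if parts.scheme not in ("http", "https"):
        problems.append("scheme is %r, expected http/https" % parts.scheme)
    if not parts.hostname:
        problems.append("no hostname")
    if expected_port is not None and parts.port != expected_port:
        problems.append("port is %r, expected %r" % (parts.port, expected_port))
    return problems

# The URL from the failing request above: no explicit port, so urlsplit
# reports port None and the check flags it.
url = "http://remotehost/home/san/Desktop/programming_erlang_armstrong.pdf"
print(check_stream_url(url, expected_port=8011))
```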


Re: indexing???

2010-08-17 Thread satya swaroop
hi,

1) I use Tika 0.8.

2) The URL is https://issues.apache.org/jira/browse/PDFBOX-709 and the
file is samplerequestform.pdf.

3) The entire error is below; the request was:

curl "http://localhost:8080/solr/update/extract?stream.file=/home/satya/my_workings/satya_ebooks/8-Linux/samplerequestform.pdf&literal.id=linuxc"




HTTP Status 500 (Apache Tomcat/6.0.26 error report) -
org.apache.tika.exception.TikaException: Unexpected RuntimeException from
org.apache.tika.parser.pdf.pdfpar...@1d688e2

org.apache.solr.common.SolrException:
org.apache.tika.exception.TikaException: Unexpected RuntimeException from
org.apache.tika.parser.pdf.pdfpar...@1d688e2
at org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:214)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:54)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:131)
at org.apache.solr.core.RequestHandlers$LazyRequestHandlerWrapper.handleRequest(RequestHandlers.java:237)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1323)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:337)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:240)
at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:235)
at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
at java.lang.Thread.run(Thread.java:619)
Caused by: org.apache.tika.exception.TikaException: Unexpected RuntimeException from org.apache.tika.parser.pdf.pdfpar...@1d688e2
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:144)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:99)
at org.apache.tika.parser.AutoDetectParser.parse(AutoDetectParser.java:112)
at org.apache.solr.handler.extraction.ExtractingDocumentLoader.load(ExtractingDocumentLoader.java:193)
... 18 more
Caused by: java.lang.ClassCastException: org.apache.pdfbox.pdmodel.font.PDFontDescriptorAFM cannot be cast to org.apache.pdfbox.pdmodel.font.PDFontDescriptorDictionary
at org.apache.pdfbox.pdmodel.font.PDTrueTypeFont.ensureFontDescriptor(PDTrueTypeFont.java:167)
at org.apache.pdfbox.pdmodel.font.PDTrueTypeFont.<init>(PDTrueTypeFont.java:117)
at org.apache.pdfbox.pdmodel.font.PDFontFactory.createFont(PDFontFactory.java:140)
at org.apache.pdfbox.pdmodel.font.PDFontFactory.createFont(PDFontFactory.java:76)
at org.apache.pdfbox.pdmodel.PDResources.getFonts(PDResources.java:115)
at org.apache.pdfbox.util.PDFStreamEngine.processSubStream(PDFStreamEngine.java:225)
at org.apache.pdfbox.util.PDFStreamEngine.processStream(PDFStreamEngine.java:207)
at org.apache.pdfbox.util.PDFTextStripper.processPage(PDFTextStripper.java:367)
at org.apache.pdfbox.util.PDFTextStripper.processPages(PDFTextStripper.java:291)
at org.apache.pdfbox.util.PDFTextStripper.writeText(PDFTextStripper.java:247)
at org.apache.pdfbox.util.PDFTextStripper.getText(PDFTextStripper.java:180)
at org.apache.tika.parser.pdf.PDF2XHTML.process(PDF2XHTML.java:56)
at org.apache.tika.parser.pdf.PDFParser.parse(PDFParser.java:79)
at org.apache.tika.parser.CompositeParser.parse(CompositeParser.java:142)
... 21 more