Different number of replicas for different shards

2019-06-28 Thread Nawab Zada Asad Iqbal
Hi,

Is it possible to specify a different number of replicas for different
shards? I.e., if I expect some shard to get more queries, I can add more
replicas to that shard alone, instead of adding replicas for all the
shards.

Thanks
Nawab
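
(For readers looking for the mechanics: yes, the Collections API ADDREPLICA
call targets a single shard, so per-shard replica counts are possible.
Collection and shard names below are placeholders.)

/admin/collections?action=ADDREPLICA&collection=mycollection&shard=shard2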


Re: Discuss: virtual nodes in Solr

2019-06-28 Thread Will Martin
From: S G <sg.online.em...@gmail.com>
Subject: Discuss: virtual nodes in Solr
Date: June 28, 2019 at 8:04:44 PM EDT
To: solr-user@lucene.apache.org
Reply-To: solr-user@lucene.apache.org

Hi,

Has Solr considered using a vnodes concept like Cassandra's?
https://www.datastax.com/dev/blog/virtual-nodes-in-cassandra-1-2

If this can be implemented carefully, we would not have to live with
shard splitting alone, which can only double the number of shards.
With vnodes, shards could be increased incrementally as the need arises.
What's more, shards could be decreased too when the doc count or traffic
decreases.

-SG

+1

Carefully? "Deliberately" would be a better word with this community, IMHO.
How about an incubation epic story, PMC?





Discuss: virtual nodes in Solr

2019-06-28 Thread S G
Hi,

Has Solr considered using a vnodes concept like Cassandra's?
https://www.datastax.com/dev/blog/virtual-nodes-in-cassandra-1-2

If this can be implemented carefully, we would not have to live with
shard splitting alone, which can only double the number of shards.
With vnodes, shards could be increased incrementally as the need arises.
What's more, shards could be decreased too when the doc count or traffic
decreases.

-SG
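
(To make the referenced concept concrete, here is a minimal sketch of
consistent hashing with virtual nodes, in the spirit of the Cassandra post
linked above. Everything in it, the class name, token count, and hash
choice, is illustrative and not an existing Solr API.)

import bisect
import hashlib

class VNodeRing:
    """Toy hash ring where each physical node owns many small token ranges."""

    def __init__(self, tokens_per_node=8):
        self.tokens_per_node = tokens_per_node
        self.tokens = []   # sorted vnode tokens
        self.owners = {}   # token -> physical node

    def _hash(self, key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def add_node(self, node):
        # Adding a node claims many scattered slices of the ring, so data
        # moves incrementally instead of requiring a full shard split.
        for i in range(self.tokens_per_node):
            token = self._hash(f"{node}#{i}")
            bisect.insort(self.tokens, token)
            self.owners[token] = node

    def remove_node(self, node):
        # Removing a node hands its slices back to the remaining nodes.
        self.tokens = [t for t in self.tokens if self.owners[t] != node]
        self.owners = {t: n for t, n in self.owners.items() if n != node}

    def node_for(self, doc_id):
        # A document belongs to the owner of the first vnode token at or
        # after its hash, wrapping around the ring.
        i = bisect.bisect(self.tokens, self._hash(doc_id)) % len(self.tokens)
        return self.owners[self.tokens[i]]

ring = VNodeRing()
for n in ("solr-1", "solr-2", "solr-3"):
    ring.add_node(n)
print(ring.node_for("doc-42"))  # stable unless this doc's token range moves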


Re: Solr 7.7.2 - Autoscaling in new cluster ignoring sysprop rules, possibly all rules

2019-06-28 Thread Andrew Kettmann
Entered ticket https://issues.apache.org/jira/browse/SOLR-13586


Sadly, no patch attached this time as it is a much more complicated issue than 
my last one, and a good bit above my paygrade with Java.
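
(A side note for anyone debugging the same thing: the autoscaling framework
lets you read back what it actually registered. These are the v2 API paths
as I understand them for 7.x; host and port are placeholders.)

http://localhost:8983/api/cluster/autoscaling
http://localhost:8983/api/cluster/autoscaling/diagnostics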




From: Andrzej Białecki 
Sent: Friday, June 28, 2019 4:29:49 AM
To: solr-user@lucene.apache.org
Subject: Re: Solr 7.7.2 - Autoscaling in new cluster ignoring sysprop rules, 
possibly all rules

Andrew, please create a JIRA issue - in my opinion this is a bug not a feature, 
or at least something that needs clarification.

> On 27 Jun 2019, at 23:56, Andrew Kettmann  
> wrote:
>
> I found the issue. Autoscaling seems to silently ignore rules (at least 
> sysprop rules). Example rule:
>
>
> {'set-policy': {'sales-uat': [{'node': '#ANY',
>   'replica': '<2',
>   'strict': 'false'},
>  {'replica': '#ALL',
>   'strict': 'true',
>   'sysprop.HELM_CHART': 'foo'}]}}
>
>
> Two cases will get the sysprop rule ignored:
>
>  1.  No nodes have a HELM_CHART system property defined
>  2.  No nodes have the value "foo" for the HELM_CHART system property
>
>
> If you have SOME nodes that have -DHELM_CHART=foo, then it will fail if it
> cannot satisfy another strict rule. So sysprop autoscaling rules appear to be
> unable to be strict on their own.
>
>
> Hopefully this can solve some issues for other people as well.
>
> 
> From: Andrew Kettmann
> Sent: Tuesday, June 25, 2019 1:04:21 PM
> To: solr-user@lucene.apache.org
> Subject: Solr 7.7.2 - Autoscaling in new cluster ignoring sysprop rules, 
> possibly all rules
>
>
> Using docker 7.7.2 image
>
>
> Solr 7.7.2 on new Znode on ZK. Created the chroot using solr zk mkroot.
>
>
> Created a policy:
>
> {'set-policy': {'banana': [{'replica': '#ALL',
>'sysprop.HELM_CHART': 'notbanana'}]}}
>
>
> No errors on creation of the policy.
>
>
> I have no nodes that have that value for the system property "HELM_CHART", I 
> have nodes that contain "banana" and "rulesos" for that value only.
>
>
> I create the collection with a call to the /admin/collections:
>
> {'action': 'CREATE',
> 'collection.configName': 'project-solr-7',
> 'name': 'banana',
> 'numShards': '2',
> 'policy': 'banana',
> 'replicationFactor': '2'}
>
>
> and it creates the collection without an error. I expected the collection
> creation to fail; that is the behavior I had seen in the past, but after
> tearing down and recreating the cluster in a higher environment, it does
> not appear to work that way.
>
>
> Is there some prerequisite before policies will be respected? The .system 
> collection is in place as expected, and I am not seeing anything in the logs 
> on the overseer to suggest any problems.
>
> Andrew Kettmann



Re: different numFound value /select vs. /export

2019-06-28 Thread Kudrettin Güleryüz
Thank you, the issue was indeed a formatting error.

On Fri, Jun 28, 2019 at 2:23 PM Colvin Cowie 
wrote:

> /stream?explain=true&expr=search(myCore,zkHost=”192.168.1.10:2181",qt=”/export”,q=”*:*”, fl=”id”,sort=”id asc”)
> returns
> 'search(myCore,zkHost=”192.168.1.10:2181\",qt=”/export”,q=”*:*”, fl=”id”,sort=”id asc”)' is not a proper expression clause
>
> If the above is exactly as you entered it and the response that came back,
> then you've got invalid quote characters in there: ” vs " (e.g. zkHost=”
> rather than zkHost="), which could happen if you've used a rich editor like
> Word that autoformats text, or copied the example from this blog
> https://medium.com/@sarkaramrit2/getting-started-with-streaming-expressions-in-apache-solr-b49111a417e3
> which has them wrongly formatted
>
> On Fri, 28 Jun 2019 at 18:00, Kudrettin Güleryüz 
> wrote:
>
> > Thank you for responding.
> >
> > I didn't go though the parsers involved, I assume they'd be the defaults.
> >
> > I did notice later, though that /export is core specific. In fact we
> have a
> > Solr Cloud with 6 shards. I also found out that /stream can be used for
> > this but couldn't get a solution that works so far:
> > /stream?explain=true&expr=search(myCore,zkHost=”192.168.1.10:2181
> > ",qt=”/export”,q=”*:*”,
> > fl=”id”,sort=”id asc”)
> > returns
> >
> > 'search(myCore,zkHost=”192.168.1.10:2181\",qt=”/export”,q=”*:*”,
> > fl=”id”,sort=”id asc”)' is not a proper expression clause
> >
> > Is my syntax wrong or do I need to enable schema or config level
> > changes in order to get this work?
> >
> >
> > On Fri, Jun 28, 2019 at 11:50 AM Erick Erickson  >
> > wrote:
> >
> > > First I’d make sure that you were using the same query parser in both
> > > situations.
> > >
> > > Second, export is specific to a core, it is not cloud-aware so if this
> is
> > > SolrCloud I’d expect major differences, which you haven’t told us
> about,
> > > off by 5? 10,000?.
> > >
> > > Third, there was a bug at one point where export would leave off the
> last
> > > packet IIRC, what version of Solr are you using?
> > >
> > > Best,
> > > Erick
> > >
> > > > On Jun 28, 2019, at 7:11 AM, Kudrettin Güleryüz  >
> > > wrote:
> > > >
> > > > Hi,
> > > >
> > > > I'd like to give my website users ability to export a field for the
> > full
> > > > search result set. Specifying a very large pageSize seems to perform
> > very
> > > > poorly for this purpose. Therefore, considering using export
> > > requestHandler
> > > > for exporting search results.
> > > >
> > > > When I play with a core, I noticed that the numFound value was
> > different
> > > > between these two queries for the same core
> > > > export?fl=id&q=*:*&sort=id%20desc
> > > > select?fl=id&q=*:*&sort=id%20desc
> > > >
> > > > Can you please explain why this may be the case? Also any suggestions
> > on
> > > > alternatives would be nice.
> > > >
> > > > Thank you
> > >
> > >
> >
>


Re: different numFound value /select vs. /export

2019-06-28 Thread Colvin Cowie
/stream?explain=true&expr=search(myCore,zkHost=”192.168.1.10:2181",qt=”/export”,q=”*:*”, fl=”id”,sort=”id asc”)
returns
'search(myCore,zkHost=”192.168.1.10:2181\",qt=”/export”,q=”*:*”, fl=”id”,sort=”id asc”)' is not a proper expression clause

If the above is exactly as you entered it and the response that came back,
then you've got invalid quote characters in there: ” vs " (e.g. zkHost=”
rather than zkHost="), which could happen if you've used a rich editor like
Word that autoformats text, or copied the example from this blog
https://medium.com/@sarkaramrit2/getting-started-with-streaming-expressions-in-apache-solr-b49111a417e3
which has them wrongly formatted
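
(For reference, the same expression with plain ASCII double quotes
throughout, host and core name taken from the thread, parses cleanly;
URL-encoding of the expr parameter is omitted for readability:)

/stream?explain=true&expr=search(myCore,zkHost="192.168.1.10:2181",qt="/export",q="*:*",fl="id",sort="id asc")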

On Fri, 28 Jun 2019 at 18:00, Kudrettin Güleryüz 
wrote:

> Thank you for responding.
>
> I didn't go though the parsers involved, I assume they'd be the defaults.
>
> I did notice later, though that /export is core specific. In fact we have a
> Solr Cloud with 6 shards. I also found out that /stream can be used for
> this but couldn't get a solution that works so far:
> /stream?explain=true&expr=search(myCore,zkHost=”192.168.1.10:2181
> ",qt=”/export”,q=”*:*”,
> fl=”id”,sort=”id asc”)
> returns
>
> 'search(myCore,zkHost=”192.168.1.10:2181\",qt=”/export”,q=”*:*”,
> fl=”id”,sort=”id asc”)' is not a proper expression clause
>
> Is my syntax wrong or do I need to enable schema or config level
> changes in order to get this work?
>
>
> On Fri, Jun 28, 2019 at 11:50 AM Erick Erickson 
> wrote:
>
> > First I’d make sure that you were using the same query parser in both
> > situations.
> >
> > Second, export is specific to a core, it is not cloud-aware so if this is
> > SolrCloud I’d expect major differences, which you haven’t told us about,
> > off by 5? 10,000?.
> >
> > Third, there was a bug at one point where export would leave off the last
> > packet IIRC, what version of Solr are you using?
> >
> > Best,
> > Erick
> >
> > > On Jun 28, 2019, at 7:11 AM, Kudrettin Güleryüz 
> > wrote:
> > >
> > > Hi,
> > >
> > > I'd like to give my website users ability to export a field for the
> full
> > > search result set. Specifying a very large pageSize seems to perform
> very
> > > poorly for this purpose. Therefore, considering using export
> > requestHandler
> > > for exporting search results.
> > >
> > > When I play with a core, I noticed that the numFound value was
> different
> > > between these two queries for the same core
> > > export?fl=id&q=*:*&sort=id%20desc
> > > select?fl=id&q=*:*&sort=id%20desc
> > >
> > > Can you please explain why this may be the case? Also any suggestions
> on
> > > alternatives would be nice.
> > >
> > > Thank you
> >
> >
>


Re: Question regarding Solr fq query

2019-06-28 Thread Saurabh Sharma
Hi,

Images are not visible. Please upload them to an image-sharing platform and
share the link.

Thanks

On Fri, 28 Jun, 2019, 11:00 PM Krishna Kammadanam, 
wrote:

> Hello,
>
>
>
> I am a back-end developer working with Solr 4.0 version.
>
>
>
> I am running into so many issues, but trying to understand at the same
> time.
>
>
>
> But I have a question for anyone who can help me.
>
>
>
>
>
>
>
> A list exists within the JournalId 0036-8075, but I can't search with
> the dash in between.
>
>
>
> I can't seem to escape the dash.
>
>
>
> Any suggestions?
>
>
>
>
>
> Best regards
>
>
>
> Krist Kammadanam


Question regarding Solr fq query

2019-06-28 Thread Krishna Kammadanam
Hello,

I am a back-end developer working with Solr 4.0 version.

I am running into so many issues, but trying to understand at the same time.

But I have a question for anyone who can help me.

[two inline screenshots showing the query and results were attached here but are not rendered in the archive]

A list exists within the JournalId 0036-8075, but I can't search with the
dash in between.

I can't seem to escape the dash.

Any suggestions?


Best regards

Krist Kammadanam
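
(For anyone hitting the same thing: depending on the field's analyzer, the
dash may split the value into two tokens or be read as query syntax, so the
usual first attempts are quoting the value or backslash-escaping the dash.
The field name here is assumed from the screenshots.)

fq=JournalId:"0036-8075"
fq=JournalId:0036\-8075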



Re: different numFound value /select vs. /export

2019-06-28 Thread Kudrettin Güleryüz
Thank you for responding.

I didn't go through the parsers involved; I assume they'd be the defaults.

I did notice later, though, that /export is core-specific. In fact we have a
SolrCloud with 6 shards. I also found out that /stream can be used for
this but couldn't get a solution that works so far:
/stream?explain=true&expr=search(myCore,zkHost=”192.168.1.10:2181",qt=”/export”,q=”*:*”,
fl=”id”,sort=”id asc”)
returns

'search(myCore,zkHost=”192.168.1.10:2181\",qt=”/export”,q=”*:*”,
fl=”id”,sort=”id asc”)' is not a proper expression clause

Is my syntax wrong, or do I need to make schema- or config-level changes
to get this to work?


On Fri, Jun 28, 2019 at 11:50 AM Erick Erickson 
wrote:

> First I’d make sure that you were using the same query parser in both
> situations.
>
> Second, export is specific to a core, it is not cloud-aware so if this is
> SolrCloud I’d expect major differences, which you haven’t told us about,
> off by 5? 10,000?.
>
> Third, there was a bug at one point where export would leave off the last
> packet IIRC, what version of Solr are you using?
>
> Best,
> Erick
>
> > On Jun 28, 2019, at 7:11 AM, Kudrettin Güleryüz 
> wrote:
> >
> > Hi,
> >
> > I'd like to give my website users ability to export a field for the full
> > search result set. Specifying a very large pageSize seems to perform very
> > poorly for this purpose. Therefore, considering using export
> requestHandler
> > for exporting search results.
> >
> > When I play with a core, I noticed that the numFound value was different
> > between these two queries for the same core
> > export?fl=id&q=*:*&sort=id%20desc
> > select?fl=id&q=*:*&sort=id%20desc
> >
> > Can you please explain why this may be the case? Also any suggestions on
> > alternatives would be nice.
> >
> > Thank you
>
>


Re: different numFound value /select vs. /export

2019-06-28 Thread Erick Erickson
First I’d make sure that you were using the same query parser in both 
situations. 

Second, export is specific to a core; it is not cloud-aware, so if this is
SolrCloud I'd expect major differences, which you haven't told us about.
Off by 5? 10,000?
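
(To make the core-vs-collection distinction concrete for readers: /select on
the collection fans out across all shards, while /export runs against
whichever single core the URL names. The core name below follows SolrCloud's
usual naming convention but is made up.)

http://host:8983/solr/mycollection/select?q=*:*&fl=id&sort=id%20desc
http://host:8983/solr/mycollection_shard1_replica_n1/export?q=*:*&fl=id&sort=id%20desc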

Third, there was a bug at one point where export would leave off the last 
packet IIRC, what version of Solr are you using?

Best,
Erick

> On Jun 28, 2019, at 7:11 AM, Kudrettin Güleryüz  wrote:
> 
> Hi,
> 
> I'd like to give my website users ability to export a field for the full
> search result set. Specifying a very large pageSize seems to perform very
> poorly for this purpose. Therefore, considering using export requestHandler
> for exporting search results.
> 
> When I play with a core, I noticed that the numFound value was different
> between these two queries for the same core
> export?fl=id&q=*:*&sort=id%20desc
> select?fl=id&q=*:*&sort=id%20desc
> 
> Can you please explain why this may be the case? Also any suggestions on
> alternatives would be nice.
> 
> Thank you



different numFound value /select vs. /export

2019-06-28 Thread Kudrettin Güleryüz
Hi,

I'd like to give my website users the ability to export a field for the full
search result set. Specifying a very large pageSize seems to perform very
poorly for this purpose, so I am considering using the export requestHandler
for exporting search results.

While experimenting with a core, I noticed that the numFound value was
different between these two queries on the same core:
export?fl=id&q=*:*&sort=id%20desc
select?fl=id&q=*:*&sort=id%20desc

Can you please explain why this may be the case? Any suggestions on
alternatives would also be welcome.

Thank you


Re: Best practice for saving state of large cluster?

2019-06-28 Thread Jörn Franke
I agree - and it would provide you the opportunity to use snapshots for backups 
on S3.
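
(For reference, the Collections API backup call looks like this; the
location must be a path visible to every node, such as a shared or
S3-backed filesystem, and all names are placeholders.)

/admin/collections?action=BACKUP&collection=mycollection&name=weekly&location=/mnt/shared/backups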

> On 28.06.2019 at 15:06, Kyle Fransham wrote:
> 
> Just my two cents, but why not put your data on EBS volumes and decouple
> from the AMI? This way you're storing the collections in the "amazon
> suggested" way:
> https://aws.amazon.com/premiumsupport/knowledge-center/instance-store-vs-ebs/
> 
> 
> Also saves the (potentially error-prone) step of creating a new AMI when
> data has changed.
> 
> Kyle
> 
>> On Fri, Jun 28, 2019 at 8:27 AM chris  wrote:
>> 
>> I have a cluster of 100 shards on 100 nodes, with solr 7.5, running in
>> AWS. The use case is read-dominant, with ingestion performed about once per
>> week. There are about 84 billion documents in the cluster. It is unused on
>> weekends and only used during normal business hours M-F. What I do now is
>> after each round of ingestion, create a new set of AMIs, then terminate
>> each instance. The next morning, the cluster is restarted by creating a new
>> set of spot requests, using the most recent AMIs. At the end of the day,
>> the cluster is turned off by terminating the instances (if no data was
>> changed), or by creating a new set of AMIs and then terminating the
>> instances. Is there a better way to do this? I'm not facing any real
>> problems with this setup, but I want to make sure I'm not missing something
>> obvious. Thanks, Chris
> 
> 
> 


Re: Relevance by term position

2019-06-28 Thread Alexandre Rafalovitch
This past thread may be relevant: https://markmail.org/message/aau6bjllkpwcpmro
It suggests that using SpanFirst via the XMLQueryParser gives an automatic
boost to earlier matches.
The other approach suggested was to use payloads (which have gotten better
since the original thread).

Regards,
   Alex.
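
(A sketch of the suggested approach, untested here: Lucene's XML query
parser understands a SpanFirst element that matches only within the first N
positions of a field, so putting the boost on a SpanFirst clause favors
early matches. The field name and position boundary below are made up.)

<SpanFirst end="2">
  <SpanTerm fieldName="title_autocomplete">hello</SpanTerm>
</SpanFirst>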

On Thu, 27 Jun 2019 at 22:01, Jay Potharaju  wrote:
>
> Hi,
> I am trying to implement an autocomplete feature that should rank documents
> based on term position in the search field.
> Example-
> Doc1- hello world
> Doc2- blue sky hello
> Doc3 - John hello
>
> Searching for hello should return
> Hello world
> John hello
> Blue sky hello
>
> I am currently using ngram to do autocomplete. But this does not allow me to 
> rank results based on term position.
>
> Any suggestions on how this can be done?
> Thanks
>


Re: Best practice for saving state of large cluster?

2019-06-28 Thread Kyle Fransham
Just my two cents, but why not put your data on EBS volumes and decouple
from the AMI? This way you're storing the collections in the "amazon
suggested" way:
https://aws.amazon.com/premiumsupport/knowledge-center/instance-store-vs-ebs/


Also saves the (potentially error-prone) step of creating a new AMI when
data has changed.

Kyle

On Fri, Jun 28, 2019 at 8:27 AM chris  wrote:

> I have a cluster of 100 shards on 100 nodes, with solr 7.5, running in
> AWS. The use case is read-dominant, with ingestion performed about once per
> week. There are about 84 billion documents in the cluster. It is unused on
> weekends and only used during normal business hours M-F. What I do now is
> after each round of ingestion, create a new set of AMIs, then terminate
> each instance. The next morning, the cluster is restarted by creating a new
> set of spot requests, using the most recent AMIs. At the end of the day,
> the cluster is turned off by terminating the instances (if no data was
> changed), or by creating a new set of AMIs and then terminating the
> instances. Is there a better way to do this? I'm not facing any real
> problems with this setup, but I want to make sure I'm not missing something
> obvious. Thanks, Chris





Best practice for saving state of large cluster?

2019-06-28 Thread chris
I have a cluster of 100 shards on 100 nodes, with Solr 7.5, running in AWS.
The use case is read-dominant, with ingestion performed about once per week.
There are about 84 billion documents in the cluster. It is unused on weekends
and only used during normal business hours M-F.

What I do now is: after each round of ingestion, create a new set of AMIs,
then terminate each instance. The next morning, the cluster is restarted by
creating a new set of spot requests, using the most recent AMIs. At the end
of the day, the cluster is turned off by terminating the instances (if no
data was changed), or by creating a new set of AMIs and then terminating the
instances.

Is there a better way to do this? I'm not facing any real problems with this
setup, but I want to make sure I'm not missing something obvious.

Thanks,
Chris

Synonym SpellCheckCollator StringIndexOutOfBoundsException

2019-06-28 Thread Gonzalo Carracedo
Hello,

Using version 6.5.1, I get the error:






java.lang.StringIndexOutOfBoundsException: String index out of range: -1
at java.lang.AbstractStringBuilder.replace(AbstractStringBuilder.java:851)
at java.lang.StringBuilder.replace(StringBuilder.java:262)
at org.apache.solr.spelling.SpellCheckCollator.getCollation(SpellCheckCollator.java:238)
at org.apache.solr.spelling.SpellCheckCollator.collate(SpellCheckCollator.java:93)
at org.apache.solr.handler.component.SpellCheckComponent.addCollationsToResponse(SpellCheckComponent.java:296)

in the following scenarios:

Scenario one:

Having only the following two documents in the index:

Id = "a", FieldOne = "swimwear"
Id = "b", FieldOne = "couture"

Synonyms: "swimming costume,swim suit,swimwear"
Query: q="swimwear"

Scenario two:

Having only the following two documents in the index:

Id = "a", FieldOne = "cord"
Id = "b", FieldOne = "oud"

Synonyms: "coords,coordinates,co ord"
Query: q="coordinates"

We don't have any custom code for the SpellCheckCollator or any add-ons that
could modify the functionality around it.

It would be great if you could shed some light, given that the steps to
replicate it are quite straightforward.

Thank you,
Gonzalo


Re: refused connection

2019-06-28 Thread Colvin Cowie
I've not seen that error before (except when it's a failed JVM_BIND because
the port is in use), but a quick google suggests it might be related to
file descriptor limits being enforced by your OS
https://groups.google.com/forum/#!topic/gatling/rRpv8LPa51I
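
(Another thing worth ruling out: "Cannot assign requested address" is also
the classic client-side symptom of ephemeral-port exhaustion under heavy
connection churn. A hypothetical Linux-only check, not from the thread:)

# Prints how many local ports the indexing client can draw from (Linux).
with open("/proc/sys/net/ipv4/ip_local_port_range") as f:
    low, high = map(int, f.read().split())
print(f"ephemeral port range: {low}-{high} ({high - low + 1} ports)")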

On Fri, 28 Jun 2019 at 09:34, Midas A  wrote:

> We are doing bulk indexing here. Might this be caused by the heavy
> indexing, or something Jetty-connection related?
>
> On Fri, Jun 28, 2019 at 1:47 PM Markus Jelsma 
> wrote:
>
> > Hello,
> >
> > If you get a Connection Refused, then normally the server is just
> offline.
> > But, something weird is hiding in your stack trace, you should check it
> out
> > further:
> >
> > > Caused by: java.net.ConnectException: Cannot assign requested address
> > > (connect failed)
> >
> > I have not seen this before.
> >
> > Regards,
> > Markus
> >
> > -Original message-
> > > From:Midas A 
> > > Sent: Friday 28th June 2019 10:03
> > > To: solr-user@lucene.apache.org
> > > Subject: Re: refused connection
> > >
> > > Please reply .  THis error is coming intermittently.
> > >
> > > On Fri, Jun 28, 2019 at 11:50 AM Midas A  wrote:
> > >
> > > > Hi All ,
> > > >
> > > > I am getting following error while indexing . Please suggest
> > resolution.
> > > >
> > > > We are using kafka consumer to index solr .
> > > >
> > > >
> > > > org.apache.solr.client.solrj.SolrServerException: Server
> > > > *refused connection* at: http://host:port/solr/research
> > > > at
> > > >
> >
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:656)
> > > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> > fcbe46c28cef11bc058779afba09521de1b19bef -
> > > > ab - 2019-05-22 15:20:04]
> > > > at
> > > >
> >
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
> > > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> > fcbe46c28cef11bc058779afba09521de1b19bef -
> > > > ab - 2019-05-22 15:20:04]
> > > > at
> > > >
> >
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
> > > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> > fcbe46c28cef11bc058779afba09521de1b19bef -
> > > > ab - 2019-05-22 15:20:04]
> > > > at
> > org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
> > > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> > fcbe46c28cef11bc058779afba09521de1b19bef -
> > > > ab - 2019-05-22 15:20:04]
> > > > at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
> > > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> > fcbe46c28cef11bc058779afba09521de1b19bef -
> > > > ab - 2019-05-22 15:20:04]
> > > > at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
> > > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> > fcbe46c28cef11bc058779afba09521de1b19bef -
> > > > ab - 2019-05-22 15:20:04]
> > > > at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:156)
> > > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> > fcbe46c28cef11bc058779afba09521de1b19bef -
> > > > ab - 2019-05-22 15:20:04]
> > > > at
> > > >
> >
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl.pushToSolr(ResumesDocumentRepositoryImpl.java:425)
> > > > [classes!/:1.0.0]
> > > > at
> > > >
> >
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl.createResumeDocument(ResumesDocumentRepositoryImpl.java:397)
> > > > [classes!/:1.0.0]
> > > > at
> > > >
> >
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl$$FastClassBySpringCGLIB$$e5ddf9e4.invoke()
> > > > [classes!/:1.0.0]
> > > > at
> > > >
> > org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
> > > > [spring-core-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > > at
> > > >
> >
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:746)
> > > > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > > at
> > > >
> >
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
> > > > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > > at
> > > >
> >
> org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:139)
> > > > [spring-tx-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > > at
> > > >
> >
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
> > > > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > > at
> > > >
> >
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
> > > > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > > at
> > > >
> >
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl$$EnhancerBySpringCGLIB$$3885a0b4.createResumeDocument()
> > > > [classes!/:1.0.0]
> > > > at
> > > >
> >
> com.monster.blue.jay.services.ResumeDocumentService.getResumeDocument(ResumeDocumentService.java:46)
> > > > [classes!/:1.0.0]
> > > > at
> > > >
> >
> com.monster.blue.jay.runable.impl.ParallelGroupProcessor$GroupIndexingTask

Re: Solr 7.7.2 - Autoscaling in new cluster ignoring sysprop rules, possibly all rules

2019-06-28 Thread Andrzej Białecki
Andrew, please create a JIRA issue - in my opinion this is a bug not a feature, 
or at least something that needs clarification.

> On 27 Jun 2019, at 23:56, Andrew Kettmann  
> wrote:
> 
> I found the issue. Autoscaling seems to silently ignore rules (at least 
> sysprop rules). Example rule:
> 
> 
> {'set-policy': {'sales-uat': [{'node': '#ANY',
>   'replica': '<2',
>   'strict': 'false'},
>  {'replica': '#ALL',
>   'strict': 'true',
>   'sysprop.HELM_CHART': 'foo'}]}}
> 
> 
> Two cases will get the sysprop rule ignored:
> 
>  1.  No nodes have a HELM_CHART system property defined
>  2.  No nodes have the value "foo" for the HELM_CHART system property
> 
> 
> If you have SOME nodes that have -DHELM_CHART=foo, then it will fail if it
> cannot satisfy another strict rule. So sysprop autoscaling rules appear to be
> unable to be strict on their own.
> 
> 
> Hopefully this can solve some issues for other people as well.
> 
> 
> From: Andrew Kettmann
> Sent: Tuesday, June 25, 2019 1:04:21 PM
> To: solr-user@lucene.apache.org
> Subject: Solr 7.7.2 - Autoscaling in new cluster ignoring sysprop rules, 
> possibly all rules
> 
> 
> Using docker 7.7.2 image
> 
> 
> Solr 7.7.2 on new Znode on ZK. Created the chroot using solr zk mkroot.
> 
> 
> Created a policy:
> 
> {'set-policy': {'banana': [{'replica': '#ALL',
>'sysprop.HELM_CHART': 'notbanana'}]}}
> 
> 
> No errors on creation of the policy.
> 
> 
> I have no nodes that have that value for the system property "HELM_CHART", I 
> have nodes that contain "banana" and "rulesos" for that value only.
> 
> 
> I create the collection with a call to the /admin/collections:
> 
> {'action': 'CREATE',
> 'collection.configName': 'project-solr-7',
> 'name': 'banana',
> 'numShards': '2',
> 'policy': 'banana',
> 'replicationFactor': '2'}
> 
> 
> and it creates the collection without an error. I expected the collection
> creation to fail; that is the behavior I had seen in the past, but after
> tearing down and recreating the cluster in a higher environment, it does
> not appear to work that way.
> 
> 
> Is there some prerequisite before policies will be respected? The .system 
> collection is in place as expected, and I am not seeing anything in the logs 
> on the overseer to suggest any problems.
> 
> Andrew Kettmann



Querying _nest_path_ while querying child

2019-06-28 Thread Saurabh Sharma
Hi All,

I am currently working on nested documents and am not able to find a way to
query the _nest_path_ field.

There is no documentation about it. We can reach the children using

q={!parent which='-_nest_path_:* *:*'}

But is there any way to reach deeper levels using _nest_path_?

Suppose I have a nested path
/req#/PD#0/A#1
How can I reach all the documents under PD?
q={!parent which='-_nest_path_:/req/PD/'} is not working for me.

Thanks
Saurabh Sharma
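
(Not an answer, but a possible debugging aid: if _nest_path_ is stored or
has docValues in your schema, returning it alongside ids shows exactly what
path values were indexed, which helps when working out query syntax.)

/select?q=*:*&fl=id,_nest_path_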


Re: refused connection

2019-06-28 Thread Midas A
We are doing bulk indexing here. Might this be caused by the heavy
indexing, or something Jetty-connection related?

On Fri, Jun 28, 2019 at 1:47 PM Markus Jelsma 
wrote:

> Hello,
>
> If you get a Connection Refused, then normally the server is just offline.
> But, something weird is hiding in your stack trace, you should check it out
> further:
>
> > Caused by: java.net.ConnectException: Cannot assign requested address
> > (connect failed)
>
> I have not seen this before.
>
> Regards,
> Markus
>
> -Original message-
> > From:Midas A 
> > Sent: Friday 28th June 2019 10:03
> > To: solr-user@lucene.apache.org
> > Subject: Re: refused connection
> >
> > Please reply .  THis error is coming intermittently.
> >
> > On Fri, Jun 28, 2019 at 11:50 AM Midas A  wrote:
> >
> > > Hi All ,
> > >
> > > I am getting following error while indexing . Please suggest
> resolution.
> > >
> > > We are using kafka consumer to index solr .
> > >
> > >
> > > org.apache.solr.client.solrj.SolrServerException: Server
> > > *refused connection* at: http://host:port/solr/research
> > > at
> > >
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:656)
> > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> fcbe46c28cef11bc058779afba09521de1b19bef -
> > > ab - 2019-05-22 15:20:04]
> > > at
> > >
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
> > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> fcbe46c28cef11bc058779afba09521de1b19bef -
> > > ab - 2019-05-22 15:20:04]
> > > at
> > >
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
> > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> fcbe46c28cef11bc058779afba09521de1b19bef -
> > > ab - 2019-05-22 15:20:04]
> > > at
> org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
> > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> fcbe46c28cef11bc058779afba09521de1b19bef -
> > > ab - 2019-05-22 15:20:04]
> > > at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
> > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> fcbe46c28cef11bc058779afba09521de1b19bef -
> > > ab - 2019-05-22 15:20:04]
> > > at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
> > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> fcbe46c28cef11bc058779afba09521de1b19bef -
> > > ab - 2019-05-22 15:20:04]
> > > at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:156)
> > > ~[solr-solrj-8.1.1.jar!/:8.1.1
> fcbe46c28cef11bc058779afba09521de1b19bef -
> > > ab - 2019-05-22 15:20:04]
> > > at
> > >
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl.pushToSolr(ResumesDocumentRepositoryImpl.java:425)
> > > [classes!/:1.0.0]
> > > at
> > >
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl.createResumeDocument(ResumesDocumentRepositoryImpl.java:397)
> > > [classes!/:1.0.0]
> > > at
> > >
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl$$FastClassBySpringCGLIB$$e5ddf9e4.invoke()
> > > [classes!/:1.0.0]
> > > at
> > >
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
> > > [spring-core-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > at
> > >
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:746)
> > > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > at
> > >
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
> > > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > at
> > >
> org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:139)
> > > [spring-tx-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > at
> > >
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
> > > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > at
> > >
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
> > > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > > at
> > >
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl$$EnhancerBySpringCGLIB$$3885a0b4.createResumeDocument()
> > > [classes!/:1.0.0]
> > > at
> > >
> com.monster.blue.jay.services.ResumeDocumentService.getResumeDocument(ResumeDocumentService.java:46)
> > > [classes!/:1.0.0]
> > > at
> > >
> com.monster.blue.jay.runable.impl.ParallelGroupProcessor$GroupIndexingTaskCallable.call(ParallelGroupProcessor.java:200)
> > > [classes!/:1.0.0]
> > > at
> > >
> com.monster.blue.jay.runable.impl.ParallelGroupProcessor$GroupIndexingTaskCallable.call(ParallelGroupProcessor.java:148)
> > > [classes!/:1.0.0]
> > > at java.util.concurrent.FutureTask.run(FutureTask.java:266)
> [na:1.8.0_121]
> > > at
> > >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> > > [na:1.8.0_121]
> > > at
> > >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> > > [na:1.8.0_121]
> > > at java.lang.Thread.r

RE: refused connection

2019-06-28 Thread Markus Jelsma
Hello,

If you get a Connection Refused, then normally the server is just offline.
But something weird is hiding in your stack trace; you should check it out
further:

> Caused by: java.net.ConnectException: Cannot assign requested address
> (connect failed)

I have not seen this before.

Regards,
Markus 
 
-Original message-
> From:Midas A 
> Sent: Friday 28th June 2019 10:03
> To: solr-user@lucene.apache.org
> Subject: Re: refused connection
> 
> Please reply. This error is coming intermittently.
> 
> On Fri, Jun 28, 2019 at 11:50 AM Midas A  wrote:
> 
> > Hi All ,
> >
> > I am getting following error while indexing . Please suggest resolution.
> >
> > We are using kafka consumer to index solr .
> >
> >
> > org.apache.solr.client.solrj.SolrServerException: Server
> > *refused connection* at: http://host:port/solr/research
> > at
> > org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:656)
> > ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> > ab - 2019-05-22 15:20:04]
> > at
> > org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
> > ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> > ab - 2019-05-22 15:20:04]
> > at
> > org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
> > ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> > ab - 2019-05-22 15:20:04]
> > at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
> > ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> > ab - 2019-05-22 15:20:04]
> > at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
> > ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> > ab - 2019-05-22 15:20:04]
> > at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
> > ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> > ab - 2019-05-22 15:20:04]
> > at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:156)
> > ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> > ab - 2019-05-22 15:20:04]
> > at
> > com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl.pushToSolr(ResumesDocumentRepositoryImpl.java:425)
> > [classes!/:1.0.0]
> > at
> > com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl.createResumeDocument(ResumesDocumentRepositoryImpl.java:397)
> > [classes!/:1.0.0]
> > at
> > com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl$$FastClassBySpringCGLIB$$e5ddf9e4.invoke()
> > [classes!/:1.0.0]
> > at
> > org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
> > [spring-core-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > at
> > org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:746)
> > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > at
> > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
> > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > at
> > org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:139)
> > [spring-tx-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > at
> > org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
> > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > at
> > org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
> > [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> > at
> > com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl$$EnhancerBySpringCGLIB$$3885a0b4.createResumeDocument()
> > [classes!/:1.0.0]
> > at
> > com.monster.blue.jay.services.ResumeDocumentService.getResumeDocument(ResumeDocumentService.java:46)
> > [classes!/:1.0.0]
> > at
> > com.monster.blue.jay.runable.impl.ParallelGroupProcessor$GroupIndexingTaskCallable.call(ParallelGroupProcessor.java:200)
> > [classes!/:1.0.0]
> > at
> > com.monster.blue.jay.runable.impl.ParallelGroupProcessor$GroupIndexingTaskCallable.call(ParallelGroupProcessor.java:148)
> > [classes!/:1.0.0]
> > at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_121]
> > at
> > java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> > [na:1.8.0_121]
> > at
> > java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> > [na:1.8.0_121]
> > at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> > Caused by: org.apache.http.conn.HttpHostConnectException: Connect to
> > 10.216.204.70:3112 [/10.216.204.70] failed: Cannot assign requested
> > address (connect failed)
> > at
> > org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:159)
> > ~[httpclient-4.5.5.jar!/:4.5.5]
> > at
> > org.apache.http.impl.conn.PoolingHttpC

Re: refused connection

2019-06-28 Thread Midas A
Please reply. This error is coming intermittently.

On Fri, Jun 28, 2019 at 11:50 AM Midas A  wrote:

> Hi All,
>
> I am getting the following error while indexing. Please suggest a resolution.
>
> We are using a Kafka consumer to index Solr.
>
>
> org.apache.solr.client.solrj.SolrServerException: Server
> *refused connection* at: http://host:port/solr/research
> at
> org.apache.solr.client.solrj.impl.HttpSolrClient.executeMethod(HttpSolrClient.java:656)
> ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> ab - 2019-05-22 15:20:04]
> at
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:255)
> ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> ab - 2019-05-22 15:20:04]
> at
> org.apache.solr.client.solrj.impl.HttpSolrClient.request(HttpSolrClient.java:244)
> ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> ab - 2019-05-22 15:20:04]
> at org.apache.solr.client.solrj.SolrRequest.process(SolrRequest.java:207)
> ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> ab - 2019-05-22 15:20:04]
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:177)
> ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> ab - 2019-05-22 15:20:04]
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:138)
> ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> ab - 2019-05-22 15:20:04]
> at org.apache.solr.client.solrj.SolrClient.add(SolrClient.java:156)
> ~[solr-solrj-8.1.1.jar!/:8.1.1 fcbe46c28cef11bc058779afba09521de1b19bef -
> ab - 2019-05-22 15:20:04]
> at
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl.pushToSolr(ResumesDocumentRepositoryImpl.java:425)
> [classes!/:1.0.0]
> at
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl.createResumeDocument(ResumesDocumentRepositoryImpl.java:397)
> [classes!/:1.0.0]
> at
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl$$FastClassBySpringCGLIB$$e5ddf9e4.invoke()
> [classes!/:1.0.0]
> at
> org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:204)
> [spring-core-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> at
> org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:746)
> [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> at
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
> [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> at
> org.springframework.dao.support.PersistenceExceptionTranslationInterceptor.invoke(PersistenceExceptionTranslationInterceptor.java:139)
> [spring-tx-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> at
> org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:185)
> [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> at
> org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
> [spring-aop-5.0.7.RELEASE.jar!/:5.0.7.RELEASE]
> at
> com.monster.blue.jay.repositories.impl.ResumesDocumentRepositoryImpl$$EnhancerBySpringCGLIB$$3885a0b4.createResumeDocument()
> [classes!/:1.0.0]
> at
> com.monster.blue.jay.services.ResumeDocumentService.getResumeDocument(ResumeDocumentService.java:46)
> [classes!/:1.0.0]
> at
> com.monster.blue.jay.runable.impl.ParallelGroupProcessor$GroupIndexingTaskCallable.call(ParallelGroupProcessor.java:200)
> [classes!/:1.0.0]
> at
> com.monster.blue.jay.runable.impl.ParallelGroupProcessor$GroupIndexingTaskCallable.call(ParallelGroupProcessor.java:148)
> [classes!/:1.0.0]
> at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_121]
> at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
> [na:1.8.0_121]
> at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
> [na:1.8.0_121]
> at java.lang.Thread.run(Thread.java:745) [na:1.8.0_121]
> Caused by: org.apache.http.conn.HttpHostConnectException: Connect to
> 10.216.204.70:3112 [/10.216.204.70] failed: Cannot assign requested
> address (connect failed)
> at
> org.apache.http.impl.conn.DefaultHttpClientConnectionOperator.connect(DefaultHttpClientConnectionOperator.java:159)
> ~[httpclient-4.5.5.jar!/:4.5.5]
> at
> org.apache.http.impl.conn.PoolingHttpClientConnectionManager.connect(PoolingHttpClientConnectionManager.java:373)
> ~[httpclient-4.5.5.jar!/:4.5.5]
> at
> org.apache.http.impl.execchain.MainClientExec.establishRoute(MainClientExec.java:381)
> ~[httpclient-4.5.5.jar!/:4.5.5]
> at
> org.apache.http.impl.execchain.MainClientExec.execute(MainClientExec.java:237)
> ~[httpclient-4.5.5.jar!/:4.5.5]
> at
> org.apache.http.impl.execchain.ProtocolExec.execute(ProtocolExec.java:185)
> ~[httpclient-4.5.5.jar!/:4.5.5]
> at org.apache.http.impl.execchain.RetryExec.execute(RetryExec.java:89)
> ~[httpclient-4.5.5.jar!/:4.5.5]
> at
> org.apache.http.impl.execchain.RedirectExec.execute(Redire