I upgraded the spark-solr project to solrj-5.0.0 and was able to index into
the gettingstarted collection using Solr 5.0.0, so it seems like it may be
environmental. It almost seems like the spark project is looking at the wrong
ZooKeeper. Are you using the default -zkHost localhost:9983?
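For reference, this is roughly what the connection path looks like; a minimal sketch assuming the solrj 5.x CloudSolrClient API, with the zkHost and collection name filled in from the quickstart defaults (adjust to your setup):

```java
import org.apache.solr.client.solrj.impl.CloudSolrClient;
import org.apache.solr.common.SolrInputDocument;

public class ZkHostCheck {
    public static void main(String[] args) throws Exception {
        // Point this at the same ZooKeeper the Solr cloud example started.
        // The embedded ZooKeeper defaults to the Solr port + 1000, i.e. 9983.
        CloudSolrClient client = new CloudSolrClient("localhost:9983");
        client.setDefaultCollection("gettingstarted");

        // connect() is the call that throws ZooKeeperException / NoNodeException
        // for /clusterstate.json when the client is pointed at the wrong ZooKeeper.
        client.connect();

        // Index a trivial document to confirm the round trip works.
        SolrInputDocument doc = new SolrInputDocument();
        doc.addField("id", "zkhost-check-1");
        client.add(doc);
        client.commit();
        client.close();
    }
}
```

If this standalone check succeeds against localhost:9983 but the Spark job still fails, the zkHost the Spark job is actually using is the first thing to verify.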
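A quick way to see which ZooKeeper actually has /clusterstate.json is the zkcli script that ships with Solr (path below is from the Solr 5.0 distribution layout; adjust to your install directory):

```shell
# List what the ZooKeeper at localhost:9983 holds. If /clusterstate.json is
# missing from the output, the client is pointed at the wrong ZooKeeper
# (or the wrong chroot path within it).
server/scripts/cloud-scripts/zkcli.sh -zkhost localhost:9983 -cmd list
```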

On Mon, Mar 30, 2015 at 2:32 PM, Purohit, Sumit <sumit.puro...@pnnl.gov>
wrote:

> Thanks Tim,
>
> I had to make some changes in my local spark-solr clone to build it for
> solr5.
> If it's OK, I can commit these to GitHub.
>
> thanks
> sumit
> ________________________________________
> From: Timothy Potter [thelabd...@gmail.com]
> Sent: Monday, March 30, 2015 2:27 PM
> To: solr-user@lucene.apache.org
> Subject: Re: NoNode for /clusterstate.json in solr5.0.0 cloud
>
> Ok, let me upgrade my version of spark-solr to 5 to see what I get ...
>
> On Mon, Mar 30, 2015 at 2:26 PM, Purohit, Sumit <sumit.puro...@pnnl.gov>
> wrote:
>
> > Yes, there is a gettingstarted collection,
> > and on the admin webpage, console-->cloud--->tree--->/clusterstate.json
> > shows me this table:
> >
> > version=1
> > aversion=0
> > children_count=0
> > ctime=Fri Mar 27 19:20:21 UTC 2015 (1427484021901)
> > cversion=0
> > czxid=32
> > ephemeralOwner=0
> > mtime=Fri Mar 27 19:20:36 UTC 2015 (1427484036453)
> > mzxid=110
> > pzxid=32
> > dataLength=2
> >
> > children_count=0 seems related to the "no node" error.
> >
> > thanks
> > sumit
> > ________________________________________
> > From: Timothy Potter [thelabd...@gmail.com]
> > Sent: Monday, March 30, 2015 2:18 PM
> > To: solr-user@lucene.apache.org
> > Subject: Re: NoNode for /clusterstate.json in solr5.0.0 cloud
> >
> > Anything in the server-side Solr logs? Also, if you go to the Solr admin
> > console at http://localhost:8983/solr, do you see the gettingstarted
> > collection in the cloud panel?
> >
> >
> >
> > On Mon, Mar 30, 2015 at 1:12 PM, Purohit, Sumit <sumit.puro...@pnnl.gov>
> > wrote:
> >
> > > I have a basic Solr 5.0.0 cloud setup after following
> > > http://lucene.apache.org/solr/quickstart.html
> > >
> > > I am trying to read data from spark and index it into solr using the
> > > following lib:
> > > https://github.com/LucidWorks/spark-solr
> > >
> > > I am getting the following error when my code tries to make a request to solr:
> > >
> > >
> > > Exception in thread "main" org.apache.spark.SparkException: Job aborted
> > > due to stage failure: Task 0 in stage 0.0 failed 1 times, most recent
> > > failure: Lost task 0.0 in stage 0.0 (TID 0, localhost):
> > > org.apache.solr.common.cloud.ZooKeeperException:
> > >
> > > at
> > >
> >
> org.apache.solr.client.solrj.impl.CloudSolrClient.connect(CloudSolrClient.java:465)
> > >
> > > .............
> > >
> > > ..............
> > >
> > > ..............
> > >
> > > Caused by: org.apache.zookeeper.KeeperException$NoNodeException:
> > > KeeperErrorCode = NoNode for /clusterstate.json
> > >
> > > at
> org.apache.zookeeper.KeeperException.create(KeeperException.java:111)
> > >
> > > at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
> > >
> > > at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
> > >
> > > at
> > >
> >
> org.apache.solr.common.cloud.SolrZkClient$10.execute(SolrZkClient.java:500)
> > >
> > >
> > >
> > > I am not sure how (and when) to create nodes for "/clusterstate.json"
> > >
> > > I am using solr 5.0.0, solrj 5.0.0, spark-core_2.10_2.12.jar
> > >
> > >
> > >
> > > Thanks for the help.
> > >
> > > Sumit Purohit
> > >
> > >
> >
>
