Re: How to patch Solr4.2 for SolrEnityProcessor Sub-Enity issue

2013-08-25 Thread Shalin Shekhar Mangar
You are right. The fix committed to source was not complete. I've
reopened SOLR-3336 and I will put up a test and fix.

https://issues.apache.org/jira/browse/SOLR-3336

On Mon, Aug 26, 2013 at 9:41 AM, harshchawla  wrote:
> In the second reply of this link the issue is discussed; moreover, I am
> facing the same issue here:
> http://stackoverflow.com/questions/15734308/solrentityprocessor-is-called-only-once-for-sub-entities?lq=1.
>
> See attached my data-config.xml of new core (let say) test
>
> <document>
> <entity name="can" query="...">
> <field ... name="candidateid "/>
> <entity ... processor="SolrEntityProcessor"
> url="http://localhost:8983/solr/csearch" query="candidateid
> :${can.CandidateID}" fl="*">
> </entity>
> <entity ... query="select Value from [CandidateData] where
> candidateid=${can.CandidateID}">
> </entity>
> </entity>
> </document>
>
> Here only the first record gets parsed properly; for all the remaining
> records only two fields come through into the new core "test", even though
> core "csearch" contains all the field values for all the records.
>
> I hope this clarifies my situation.
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/How-to-patch-Solr4-2-for-SolrEnityProcessor-Sub-Enity-issue-tp4086292p4086564.html
> Sent from the Solr - User mailing list archive at Nabble.com.



-- 
Regards,
Shalin Shekhar Mangar.


Re: How to patch Solr4.2 for SolrEnityProcessor Sub-Enity issue

2013-08-25 Thread harshchawla
In the second reply of this link the issue is discussed; moreover, I am
facing the same issue here:
http://stackoverflow.com/questions/15734308/solrentityprocessor-is-called-only-once-for-sub-entities?lq=1.

See attached my data-config.xml of new core (let say) test

<document>
<entity name="can" query="...">
<field ... name="candidateid "/>
<entity ... processor="SolrEntityProcessor"
url="http://localhost:8983/solr/csearch" query="candidateid
:${can.CandidateID}" fl="*">
</entity>
<entity ... query="select Value from [CandidateData] where
candidateid=${can.CandidateID}">
</entity>
</entity>
</document>

Here only the first record gets parsed properly; for all the remaining
records only two fields come through into the new core "test", even though
core "csearch" contains all the field values for all the records.

I hope this clarifies my situation.



--
View this message in context: 
http://lucene.472066.n3.nabble.com/How-to-patch-Solr4-2-for-SolrEnityProcessor-Sub-Enity-issue-tp4086292p4086564.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: how to integrate solr with HDFS HA

2013-08-25 Thread YouPeng Yang
Hi Greg
   Thanks for your response.
   It works.
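For anyone else hitting `UnknownHostException: lklcluster` below: the directory that solr.hdfs.confdir points to has to contain the client-side HA definition of that nameservice in hdfs-site.xml, roughly as follows (host names here are illustrative; the property names are the standard Hadoop HDFS HA client settings, not taken from this thread):

```xml
<!-- hdfs-site.xml fragment: client-side HA nameservice definition -->
<property><name>dfs.nameservices</name><value>lklcluster</value></property>
<property><name>dfs.ha.namenodes.lklcluster</name><value>nn1,nn2</value></property>
<property><name>dfs.namenode.rpc-address.lklcluster.nn1</name><value>host1:8020</value></property>
<property><name>dfs.namenode.rpc-address.lklcluster.nn2</name><value>host2:8020</value></property>
<property>
  <name>dfs.client.failover.proxy.provider.lklcluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

With that in place, solr.hdfs.home can refer to the nameservice (hdfs://lklcluster/solr) instead of a single namenode host.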



2013/8/23 Greg Walters 

> Finally something I can help with! I went through the same problems you're
> having a short while ago. Check out
> https://cwiki.apache.org/confluence/display/solr/Running+Solr+on+HDFS for
> most of the information you need and be sure to check the comments on the
> page as well.
>
> Here's an example from my working setup:
>
> **
> <directoryFactory name="DirectoryFactory" class="solr.HdfsDirectoryFactory">
>   <bool name="solr.hdfs.blockcache.enabled">true</bool>
>   <int name="solr.hdfs.blockcache.slab.count">1</int>
>   <bool name="solr.hdfs.blockcache.direct.memory.allocation">true</bool>
>   <int name="solr.hdfs.blockcache.blocksperbank">16384</int>
>   <bool name="solr.hdfs.blockcache.read.enabled">true</bool>
>   <bool name="solr.hdfs.blockcache.write.enabled">true</bool>
>   <bool name="solr.hdfs.nrtcachingdirectory.enable">true</bool>
>   <int name="solr.hdfs.nrtcachingdirectory.maxmergesizemb">16</int>
>   <int name="solr.hdfs.nrtcachingdirectory.maxcachedmb">192</int>
>   <str name="solr.hdfs.home">hdfs://nameservice1:8020/solr</str>
>   <str name="solr.hdfs.confdir">/etc/hadoop/conf.cloudera.hdfs1</str>
> </directoryFactory>
> **
>
> Thanks,
> Greg
>
> -Original Message-
> From: YouPeng Yang [mailto:yypvsxf19870...@gmail.com]
> Sent: Friday, August 23, 2013 1:16 AM
> To: solr-user@lucene.apache.org
> Subject: how to integrate solr with HDFS HA
>
> Hi all
> I am trying to integrate Solr with HDFS HA. When I start the Solr server,
> it throws an exception [1].
> I know this is because the hadoop.conf.Configuration in
> HdfsDirectoryFactory.java does not include the HA configuration.
> So I want to know: in Solr, is there any way to include my Hadoop HA
> configuration?
>
>
>
> [1]---
> Caused by: java.lang.IllegalArgumentException:
> java.net.UnknownHostException: lklcluster
> at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:418)
> at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:164)
> at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:129)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:415)
> at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:382)
> at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:123)
> at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2277)
> at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:86)
> at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2311)
> at org.apache.hadoop.fs.FileSystem$Cache.getUnique(FileSystem.java:2299)
> at org.apache.hadoop.fs.FileSystem.newInstance(FileSystem.java:364)
> at org.apache.solr.store.hdfs.HdfsDirectory.<init>(HdfsDirectory.java:59)
> at org.apache.solr.core.HdfsDirectoryFactory.create(HdfsDirectoryFactory.java:154)
> at org.apache.solr.core.CachingDirectoryFactory.get(CachingDirectoryFactory.java:350)
> at org.apache.solr.core.SolrCore.getNewIndexDir(SolrCore.java:256)
> at org.apache.solr.core.SolrCore.initIndex(SolrCore.java:469)
> at org.apache.solr.core.SolrCore.<init>(SolrCore.java:759)
>


Re: Different Responses for 4.4 and 3.5 solr index

2013-08-25 Thread Kuchekar
Hi,
 The responses from 4.4 and 3.5 in the current scenario differ in the
sequence in which the results are returned.

For example :

Response from 3.5 solr is :  id:A, id:B, id:C, id:D ...
Response from 4.4 solr is : id C, id:A, id:D, id:B...
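If the matching documents all score the same (here the fq clauses do the filtering), the relative order of equal-scored documents is an internal detail that is not guaranteed to match between Lucene 3.x and 4.x indexes. One hedged workaround, assuming p_id (taken from the fl list above) is a sortable single-valued field, is an explicit tie-breaker sort:

```python
from urllib.parse import urlencode

# Add a tie-breaker so equal-score documents come back in a stable order
# on both 3.5 and 4.4. "p_id asc" is an assumption about the schema.
params = {
    "q": "Apple",
    "fq": "table:profile",
    "sort": "score desc, p_id asc",  # explicit secondary sort
    "rows": 10,
    "wt": "json",
}
query_string = urlencode(params)
print(query_string)
```

Appending that query string to /select would then return the same sequence from both indexes whenever scores tie.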

Looking forward to your reply.

Thanks.
Kuchekar, Nilesh


On Sun, Aug 25, 2013 at 11:32 AM, Stefan Matheis
wrote:

> Kuchekar (hope that's your first name?)
>
> you didn't tell us .. how they differ? do you get an actual error? or does
> the result contain documents you didn't expect? or the other way round,
> that some are missing you'd expect to be there?
>
> - Stefan
>
>
> On Sunday, August 25, 2013 at 4:43 PM, Kuchekar wrote:
>
> > Hi,
> >
> > We get different response when we query 4.4 and 3.5 solr using same
> > query params.
> >
> > My query param are as following :
> >
> > facet=true
> > &facet.mincount=1
> > &facet.limit=25
> >
> &qf=content^0.0+p_last_name^500.0+p_first_name^50.0+strong_topic^0.0+first_author_topic^0.0+last_author_topic^0.0+title_topic^0.0
> > &wt=javabin
> > &version=2
> > &rows=10
> > &f.affiliation_org.facet.limit=150
> > &fl=p_id,p_first_name,p_last_name
> > &start=0
> > &q=Apple
> > &facet.field=affiliation_org
> > &fq=table:profile
> > &fq=num_content:[*+TO+1500]
> > &fq=name:"Apple"
> >
> > The content in both (solr 4.4 and solr 3.5) are same.
> >
> > The solrconfig.xml from 3.5 an 4.4 are similarly constructed.
> >
> > Is there something I am missing that might have been changed in 4.4,
> which
> > might be causing this issue. ?. The "qf" params looks same.
> >
> > Looking forward for your reply.
> >
> > Thanks.
> > Kuchekar, Nilesh
> >
> >
>
>
>


Re: Dropping Caches of Machine That Solr Runs At

2013-08-25 Thread Walter Underwood
On Aug 25, 2013, at 1:41 PM, Furkan KAMACI wrote:

> Sometimes physical memory usage on the machine Solr runs on is over 99% and
> this may cause problems. Do you run this kind of command periodically:
> 
> sudo sh -c "sync; echo 3 > /proc/sys/vm/drop_caches"
> 
> to force-drop the OS caches on the machine Solr runs on and avoid problems?


This is a terrible idea. The OS automatically manages the file buffers. When
they are all in use, that is a good thing, because it reduces disk IO.

After this, no files will be cached in RAM. Every single read from a file will 
have to go to disk. This will cause very slow performance until the files are 
recached.

Recently, I did exactly the opposite to improve performance in our Solr 
installation. Before starting the Solr process, a script reads every file in 
the index so that it will already be in file buffers. This avoids several 
minutes of high disk IO and slow performance after startup.
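A minimal sketch of that pre-warming step (the index path is hypothetical; any script that reads every byte of every index file has the same effect):

```python
import os

def warm_page_cache(index_dir, chunk=1 << 20):
    """Read every file under index_dir so its pages enter the OS page cache."""
    total = 0
    for root, _dirs, files in os.walk(index_dir):
        for name in files:
            path = os.path.join(root, name)
            with open(path, "rb") as f:
                while True:
                    block = f.read(chunk)  # sequential read warms the cache
                    if not block:
                        break
                    total += len(block)
    return total  # bytes touched

# Hypothetical usage, before starting the Solr process:
# warm_page_cache("/var/solr/data/collection1/index")
```

Run it before starting Solr so the first queries hit warm file buffers instead of disk.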

wunder
Search Guy, Chegg.com




Re: Dropping Caches of Machine That Solr Runs At

2013-08-25 Thread Furkan KAMACI
One of my Solr nodes in SolrCloud (4.2.1) was down for a long time. I
restarted Solr and after recovery its physical memory usage is 99.5% and
does not decrease. That's why I asked the question: I don't know whether it
is usual for physical memory not to decrease for 3 days, why my CentOS 6.4
does not drop caches after a while, and why recovery resulted in such high
physical memory usage.
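A side note on reading that 99.5% figure: on Linux the headline "used" number includes the reclaimable page cache, so it is worth separating the two before worrying. A small sketch that does this from the standard /proc/meminfo format (all values in kB):

```python
def meminfo_summary(text):
    """Parse /proc/meminfo content and report how much 'used' memory
    is really reclaimable page cache."""
    fields = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        if rest:
            fields[key.strip()] = int(rest.split()[0])  # kB value
    total = fields["MemTotal"]
    free = fields["MemFree"]
    cache = fields.get("Cached", 0) + fields.get("Buffers", 0)
    return {
        "used_pct": round(100.0 * (total - free) / total, 1),
        "used_minus_cache_pct": round(100.0 * (total - free - cache) / total, 1),
    }

# On a live system:
# print(meminfo_summary(open("/proc/meminfo").read()))
```

If used_minus_cache_pct is modest, the "99%" is mostly cache and nothing needs dropping.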


2013/8/25 Furkan KAMACI 

> Sometimes physical memory usage on the machine Solr runs on is over 99% and
> this may cause problems. Do you run this kind of command periodically:
>
> sudo sh -c "sync; echo 3 > /proc/sys/vm/drop_caches"
>
> to force-drop the OS caches on the machine Solr runs on and avoid problems?
>


Dropping Caches of Machine That Solr Runs At

2013-08-25 Thread Furkan KAMACI
Sometimes physical memory usage on the machine Solr runs on is over 99% and
this may cause problems. Do you run this kind of command periodically:

sudo sh -c "sync; echo 3 > /proc/sys/vm/drop_caches"

to force-drop the OS caches on the machine Solr runs on and avoid problems?


Re: How to Manage RAM Usage at Heavy Indexing

2013-08-25 Thread Furkan KAMACI
Hi Erick;

I wanted a quick answer; that's why I asked my question that way.

Error is as follows:

INFO  - 2013-08-21 22:01:30.978; org.apache.solr.update.processor.LogUpdateProcessor; [collection1] webapp=/solr path=/update params={wt=javabin&version=2} {add=[com.deviantart.reachmehere:http/gallery/, com.deviantart.reachstereo:http/, com.deviantart.reachstereo:http/art/SE-mods-313298903, com.deviantart.reachtheclouds:http/, com.deviantart.reachthegoddess:http/, com.deviantart.reachthegoddess:http/art/retouched-160219962, com.deviantart.reachthegoddess:http/badges/, com.deviantart.reachthegoddess:http/favourites/, com.deviantart.reachthetop:http/art/Blue-Jean-Baby-82204657 (1444006227844530177), com.deviantart.reachurdreams:http/, ... (163 adds)]} 0 38790
ERROR - 2013-08-21 22:01:30.979; org.apache.solr.common.SolrException;
java.lang.RuntimeException: [was class org.eclipse.jetty.io.EofException] early EOF
at com.ctc.wstx.util.ExceptionUtil.throwRuntimeException(ExceptionUtil.java:18)
at com.ctc.wstx.sr.StreamScanner.throwLazyError(StreamScanner.java:731)
at com.ctc.wstx.sr.BasicStreamReader.safeFinishToken(BasicStreamReader.java:3657)
at com.ctc.wstx.sr.BasicStreamReader.getText(BasicStreamReader.java:809)
at org.apache.solr.handler.loader.XMLLoader.readDoc(XMLLoader.java:393)
at org.apache.solr.handler.loader.XMLLoader.processUpdate(XMLLoader.java:245)
at org.apache.solr.handler.loader.XMLLoader.load(XMLLoader.java:173)
at org.apache.solr.handler.UpdateRequestHandler$1.load(UpdateRequestHandler.java:92)
at org.apache.solr.handler.ContentStreamHandlerBase.handleRequestBody(ContentStreamHandlerBase.java:74)
at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:135)
at org.apache.solr.core.SolrCore.execute(SolrCore.java:1812)
at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:639)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:345)
at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:141)
at org.eclipse.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1307)
at org.eclipse.jetty.servlet.ServletHandler.doHandle(ServletHandler.java:453)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:137)
at org.eclipse.jetty.security.SecurityHandler.handle(SecurityHandler.java:560)
at org.eclipse.jetty.server.session.SessionHandler.doHandle(SessionHandler.java:231)
at org.eclipse.jetty.server.handler.ContextHandler.doHandle(ContextHandler.java:1072)
at org.eclipse.jetty.servlet.ServletHandler.doScope(ServletHandler.java:382)
at org.eclipse.jetty.server.session.SessionHandler.doScope(SessionHandler.java:193)
at org.eclipse.jetty.server.handler.ContextHandler.doScope(ContextHandler.java:1006)
at org.eclipse.jetty.server.handler.ScopedHandler.handle(ScopedHandler.java:135)
at org.eclipse.jetty.server.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:255)
at org.eclipse.jetty.server.handler.HandlerCollection.handle(HandlerCollection.java:154)
at org.eclipse.jetty.server.handler.HandlerWrapper.handle(HandlerWrapper.java:116)
at org.eclipse.jetty.server.Server.handle(Server.java:365)
at org.eclipse.jetty.server.AbstractHttpConnection.handleRequest(AbstractHttpConnection.java:485)
at org.eclipse.jetty.server.BlockingHttpConnection.handleRequest(BlockingHttpConnection.java:53)
at org.eclipse.jetty.server.AbstractHttpConnection.content(AbstractHttpConnection.java:937)
at org.eclipse.jetty.server.AbstractHttpConnection$RequestHandler.content(AbstractHttpConnection.java:998)
at org.eclipse.jetty.http.HttpParser.parseNext(HttpParser.java:948)
at org.eclipse.jetty.http.HttpParser.parseAvailable(HttpParser.java:235)
at org.eclipse.jetty.server.BlockingHttpConnection.handle(BlockingHttpConnection.java:72)
at org.eclipse.jetty.server.bio.SocketConnector$ConnectorEndPoint.run(SocketConnector.java:264)
at org.eclipse.jetty.util.thread.QueuedThreadPool.runJob(QueuedThreadPool.java:608)
at org.eclipse.jetty.util.thread.QueuedThreadPool$3.run(QueuedThreadPool.java:543)
at java.lang.Thread.run(Thread.java:722)
Caused by: org.eclipse.jetty.io.EofException: early EOF
at org.eclipse.jetty.server.HttpInput.read(HttpInput.java:65)
at java.io.InputStream.read(InputStream.java:101)
at com.ctc.wstx.io.UTF8Reader.loadMore(UTF8Reader.java:365)
at com.ctc.wstx.io.UTF8Reader.read(UTF8Reader.java:110)
at com.ctc.wstx.io.MergedReader.read(MergedReader.java:101)
at com.ctc.wstx.io.ReaderSource.readInto(ReaderSource.java:84)
at com.ctc.wstx.io.BranchingReaderSource.readInto(BranchingReaderSource.java:57)
at com.ctc.wstx.sr.StreamScanner.loadMore(StreamScanner.java:992)
at com.ctc.wstx.sr.BasicStreamReader.readTextSecondary(BasicStreamReader.java:4628)
at com.ctc.wstx.sr.BasicStreamReader.readCoalescedText(BasicStreamReader.java:4126)
at com.ctc.wstx.sr.BasicStreamReader.finishToken(BasicStreamReader.java:3701)
at com.ctc.wstx.sr.BasicStreamReader.safeFinishToken(BasicS

Re: DIH : Unexpected character '=' (code 61); expected a semi-colon after the reference for entity 'st'

2013-08-25 Thread eShard
I just resolved this same error.
The problem was that I had a lot of ampersands (&) that were un-escaped in
my XML doc.
There was nothing wrong with my DIH; it was the XML doc it was trying to
consume.
I just used StringEscapeUtils.escapeXml from Apache Commons Lang to fix it.
Another big help was the Eclipse XML validation engine.
Just add your doc to an existing project, right-click anywhere in the doc,
and select Validate from the menu.
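For readers outside the JVM, the same fix is a one-liner with Python's standard library. This is only an analogue of StringEscapeUtils.escapeXml, not the code used above:

```python
from xml.sax.saxutils import escape

def escape_for_xml(text):
    # Escapes &, <, > (saxutils default) plus both quote characters,
    # so the result is safe inside XML text and attribute values.
    return escape(text, {'"': "&quot;", "'": "&apos;"})

# e.g. escape_for_xml("Fish & Chips") gives "Fish &amp; Chips"
```

Run every text value through this before assembling the XML that DIH will consume.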





--
View this message in context: 
http://lucene.472066.n3.nabble.com/DIH-Unexpected-character-code-61-expected-a-semi-colon-after-the-reference-for-entity-st-tp2816210p4086531.html
Sent from the Solr - User mailing list archive at Nabble.com.


Re: Different Responses for 4.4 and 3.5 solr index

2013-08-25 Thread Stefan Matheis
Kuchekar (hope that's your first name?)

you didn't tell us .. how they differ? do you get an actual error? or does the 
result contain documents you didn't expect? or the other way round, that some 
are missing you'd expect to be there?

- Stefan 


On Sunday, August 25, 2013 at 4:43 PM, Kuchekar wrote:

> Hi,
> 
> We get different response when we query 4.4 and 3.5 solr using same
> query params.
> 
> My query param are as following :
> 
> facet=true
> &facet.mincount=1
> &facet.limit=25
> &qf=content^0.0+p_last_name^500.0+p_first_name^50.0+strong_topic^0.0+first_author_topic^0.0+last_author_topic^0.0+title_topic^0.0
> &wt=javabin
> &version=2
> &rows=10
> &f.affiliation_org.facet.limit=150
> &fl=p_id,p_first_name,p_last_name
> &start=0
> &q=Apple
> &facet.field=affiliation_org
> &fq=table:profile
> &fq=num_content:[*+TO+1500]
> &fq=name:"Apple"
> 
> The content in both (solr 4.4 and solr 3.5) are same.
> 
> The solrconfig.xml from 3.5 an 4.4 are similarly constructed.
> 
> Is there something I am missing that might have been changed in 4.4, which
> might be causing this issue. ?. The "qf" params looks same.
> 
> Looking forward for your reply.
> 
> Thanks.
> Kuchekar, Nilesh
> 
> 




Different Responses for 4.4 and 3.5 solr index

2013-08-25 Thread Kuchekar
Hi,

 We get different response when we query 4.4 and 3.5 solr using same
query params.

My query param are as following :

facet=true
 &facet.mincount=1
&facet.limit=25
&qf=content^0.0+p_last_name^500.0+p_first_name^50.0+strong_topic^0.0+first_author_topic^0.0+last_author_topic^0.0+title_topic^0.0
 &wt=javabin
&version=2
&rows=10
 &f.affiliation_org.facet.limit=150
&fl=p_id,p_first_name,p_last_name
&start=0
 &q=Apple
&facet.field=affiliation_org
&fq=table:profile
 &fq=num_content:[*+TO+1500]
&fq=name:"Apple"

The content in both (solr 4.4 and solr 3.5) are same.

The solrconfig.xml from 3.5 an 4.4 are similarly constructed.

Is there something I am missing that might have been changed in 4.4, which
might be causing this issue. ?. The "qf" params looks same.

Looking forward for your reply.

Thanks.
Kuchekar, Nilesh


Re: SOLR Prevent solr of modifying fields when update doc

2013-08-25 Thread Luis Portela Afonso
Hi, right now I'm using the link field that comes in every RSS entry as my
uniqueKey.
That was the best solution I found, because in many updated documents this
was the only field that never changes.

Now I'm facing another problem. When I want to search for a document by
that id or link (because that is my uniqueKey), I'm not able to get a
unique result.
I can't successfully search for a field that is a URL in Solr.
I think that is because I'm encoding the URL that I'm searching for, but
Solr doesn't decode it.
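One hedged workaround for looking up a URL-valued uniqueKey is to sidestep the Lucene query parser's special characters (the colons and slashes in a URL) by using Solr's {!term} query parser, which takes the value verbatim. The core and field names below are made up:

```python
from urllib.parse import urlencode

def term_query_url(solr_base, field, value):
    """Build a Solr select URL that matches `value` exactly in `field`,
    using the {!term} query parser so no Lucene escaping is needed."""
    params = {"q": "{!term f=%s}%s" % (field, value), "wt": "json"}
    return solr_base + "/select?" + urlencode(params)

# Hypothetical core and field:
# term_query_url("http://localhost:8983/solr/rss", "link",
#                "http://example.com/feed?item=1")
```

urlencode handles the HTTP-level encoding of the URL value, and {!term} keeps the query parser from interpreting it.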

Thanks for the concern and help

On Saturday, August 24, 2013, Erick Erickson wrote:

> bq:  but the uniqueId is generated by me. But when solr indexes and there
> is an update in a doc, it deletes the doc and creates a new one, so it
> generates a new UUID.
>
> right, this is why I was saying that a UUID field may not fit your use
> case. The _point_ of a UUID field is to generate a unique entry for every
> added document, there's no concept of "only generate the UUID once per
>  indexed" which seems to be what you want.
>
> So I'd do something like just use the  field rather than a
> separate UUID field. That doesn't change by definition. What advantage do
> you think you get from the UUID field over just using your 
> field?
>
> Best,
> Erick
>
>
> On Sat, Aug 24, 2013 at 6:26 AM, Luis Portela Afonso <
> meligalet...@gmail.com 
> > wrote:
>
> > Hi,
> >
> > The uuid, which is used as the id of a document, is generated by Solr
> > using an update chain.
> > I just use the recommended method to generate UUIDs.
> >
> > I think an atomic update is not suitable for me, because I want Solr to
> > index the feeds, not me. I don't want to send information to Solr; I
> > want it to index every 15 minutes, for example, and right now it does
> > that.
> >
> > Lance, I don't understand what you mean by "software that I use to
> > index".
> > I just use Solr. I have a configuration with two entities: one that
> > selects my RSS sources from a database, and the main entity that gets
> > information from a URL and processes it.
> >
> > Thank you all for the answers.
> > Much appreciated
> >
> > On Saturday, August 24, 2013, Greg Preston wrote:
> >
> > > But there is an API for sending a delta over the wire, and server side
> it
> > > does a read, overlay, delete, and insert.  And only the fields you sent
> > > will be changed.
> > >
> > > *Might require your unchanged fields to all be stored, though.
> > >
> > >
> > > -Greg
> > >
> > >
> > > On Fri, Aug 23, 2013 at 7:08 PM, Lance Norskog 
> > > 
> > >
> > > wrote:
> > >
> > > > Solr does not by default generate unique IDs. It uses what you give
> as
> > > > your unique field, usually called 'id'.
> > > >
> > > > What software do you use to index data from your RSS feeds? Maybe
> that
> > is
> > > > creating a new 'id' field?
> > > >
> > > > There is no partial update, Solr (Lucene) always rewrites the
> complete
> > > > document.
> > > >
> > > >
> > > > On 08/23/2013 09:03 AM, Greg Preston wrote:
> > > >
> > > >> Perhaps an atomic update that only changes the fields you want to
> > > change?
> > > >>
> > > >> -Greg
> > > >>
> > > >>
> > > >> On Fri, Aug 23, 2013 at 4:16 AM, Luís Portela Afonso
> > > >>  > wrote:
> > > >>
> > > >>> Hi thanks by the answer, but the uniqueId is generated by me. But
> > when
> > > >>> solr indexes and there is an update in a doc, it deletes the doc
> and
> > > >>> creates a new one, so it generates a new UUID.
> > > >>> It is not suitable for me, because i want that solr just updates
> some
> > > >>> fields, because the UUID is the key that i use to map it to an user
> > in
> > > my
> > > >>> database.
> > > >>>
> > > >>> Right now i'm using information that comes from the source and
> never
> > > >>> chages, as my uniqueId, like for example the guid, that exists in
> > some
> > > rss
> > > >>> feeds, or if it doesn't exists i use link.
> > > >>>
> > > >>> I think there is any simple solution for me, because for what i
> have
> > > >>> read, when an update to a doc exists, SOLR deletes the old one and
> > > create a
> > > >>> new one, right?
> > > >>>
> > > >>> On Aug 23, 2013, at 12:07 PM, Erick Erickson <
> > erickerick...@gmail.com 
> > > >
> > > >>> wrote:
> > > >>>
> > > >>>  Well, not much in the way of help because you can't do what you
> > >  want AFAIK. I don't think UUID is suitable for your use-case. Why
> > not
> > >  use your ?
> > > 
> > >  Or generate something yourself...
> > > 
> > >  Best
> > >  Erick
> > > 
> > > 
> > >  On Thu, Aug 22, 2013 at 5:56 PM, Luís Portela Afonso <
> > >  meligalet...@gmail.com  
> > > 
> > > > wrote:
> > > > Hi,
> > > >
> > > > How can i prevent solr from update some fields when updating a
> doc?
> > > > The problem is, i have an uuid with the field name uuid, but it
> is
> > > not
> > > > an
> > > > unique key. When a rss source updates a feed, solr will update
> the
> > > do
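For completeness, the atomic ("partial") update Greg Preston describes upthread, where Solr reads the stored document, overlays the changed fields, and re-indexes it server-side, looks roughly like this on the wire. The field names are hypothetical, and the unchanged fields must all be stored for this to work:

```python
import json

# Atomic update: only `title` changes; other stored fields are preserved
# server-side. Requires the uniqueKey plus a "set"/"add"/"inc" modifier.
doc = {
    "id": "http://example.com/feed/item-1",   # uniqueKey (hypothetical)
    "title": {"set": "Updated title"},        # only this field is replaced
}
payload = json.dumps([doc])
# POST `payload` to /solr/<core>/update?commit=true
# with Content-Type: application/json
```

Because Solr still rewrites the whole document internally, this does not change the delete-and-reinsert behavior discussed above; it only saves the client from resending unchanged fields.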

Re: How to patch Solr4.2 for SolrEnityProcessor Sub-Enity issue

2013-08-25 Thread Shalin Shekhar Mangar
That issue was fixed in Solr 3.6 and 4.0-alpha. The latest Solr
releases already have that fix. Can you give more details on why you
think you are still affected by that bug?

On Fri, Aug 23, 2013 at 4:29 PM, harshchawla  wrote:
> According to
> http://stackoverflow.com/questions/15734308/solrentityprocessor-is-called-only-once-for-sub-entities?lq=1
> we can use the patched SolrEntityProcessor in
> https://issues.apache.org/jira/browse/SOLR-3336?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel
> to solve the subentity problem.
>
> I tried renaming the jar file to zip and then replacing the patched file,
> but since I only had a java file I couldn't replace the class file with
> it, so I dropped that idea.
>
> Here is what I tried next. I decompiled the original jar
> solr-dataimporthandler-4.2.0.jar from the Solr 4.2 package, replaced the
> patched file, and tried to recompile the files to rebuild the jar. But I
> started getting compilation errors.
>
> .\org\apache\solr\handler\dataimport\XPathEntityProcessor.java:397: ')'
> expected
>
> /* 432 / if (XPathEntityProcessor.2.this.val$isEnd.get()) { ^
> .\org\apache\solr\handler\dataimport\XPathEntityProcessor.java:397: expected
> / 432 / if (XPathEntityProcessor.2.this.val$isEnd.get()) { ^
> .\org\apache\solr\handler\dataimport\XPathEntityProcessor.java:397: not a
> statem ent / 432 / if (XPathEntityProcessor.2.this.val$isEnd.get()) { ^
> .\org\apache\solr\handler\dataimport\XPathEntityProcessor.java:397: illegal
> star t of expression / 432 */ if
> (XPathEntityProcessor.2.this.val$isEnd.get()) { ^
> .\org\apache\solr\handler\dataimport\XPathEntityProcessor.java:397: ';'
> expected
>
> /* 432 */ if (XPathEntityProcessor.2.this.val$isEnd.get()) { ^
> .\org\apache\solr\handler\dataimport\XPathEntityProcessor.java:397: ';'
> expected
>
> /* 432 / if (XPathEntityProcessor.2.this.val$isEnd.get()) { ^
> .\org\apache\solr\handler\dataimport\XPathEntityProcessor.java:398: not a
> statem ent / 433 */ XPathEntityProcessor.2.this.val$throwExp.set(false); ^
> .\org\apache\solr\handler\dataimport\XPathEntityProcessor.java:398: ';'
> expected
>
> /* 433 / XPathEntityProcessor.2.this.val$throwExp.set(false); ^
> .\org\apache\solr\handler\dataimport\XPathEntityProcessor.java:406: not a
> statem ent / 442 */ XPathEntityProcessor.2.this.val$isEnd.set(true); ^
> .\org\apache\solr\handler\dataimport\XPathEntityProcessor.java:406: ';'
> expected
>
> /* 442 / XPathEntityProcessor.2.this.val$isEnd.set(true); ^
> .\org\apache\solr\handler\dataimport\XPathEntityProcessor.java:409: not a
> statem ent / 445 */ XPathEntityProcessor.2.this.offer(row); ^
> .\org\apache\solr\handler\dataimport\XPathEntityProcessor.java:409: ';'
> expected
>
> /* 445 */ XPathEntityProcessor.2.this.offer(row); ^ 12 errors
>
> Any idea how to patch Solr 4.2 for this issue?
>
> I thought the fix would be in Solr 4.4, but it is not. Any help on this?
>
>
>
>
> --
> View this message in context: 
> http://lucene.472066.n3.nabble.com/How-to-patch-Solr4-2-for-SolrEnityProcessor-Sub-Enity-issue-tp4086292.html
> Sent from the Solr - User mailing list archive at Nabble.com.



-- 
Regards,
Shalin Shekhar Mangar.