RejectedExecutionException when shutting down CoreContainer

2010-06-17 Thread NarasimhaRaju
Hi,

I am using Solr 1.3, and when indexing I am getting a RejectedExecutionException 
after processing the last batch of update records from the database.
It happens when coreContainer.shutdown() is called after processing the last 
record.
I have autocommit enabled based on maxTime, which is 10 minutes.

From the exception below I see it's happening from the CommitTracker of 
DirectUpdateHandler2.

Looking at the SolrCore.close method, searchExecutor.shutdown() is called before 
updateHandler.close().

I still don't understand why updateHandler.close() is called after 
searchExecutor.shutdown() when the updateHandler can still submit tasks to the 
searchExecutor.

Is this a bug, or am I doing something wrong with my autocommit?




SEVERE: java.util.concurrent.RejectedExecutionException
	at java.util.concurrent.ThreadPoolExecutor$AbortPolicy.rejectedExecution(ThreadPoolExecutor.java:1760)
	at java.util.concurrent.ThreadPoolExecutor.reject(ThreadPoolExecutor.java:767)
	at java.util.concurrent.ThreadPoolExecutor.execute(ThreadPoolExecutor.java:658)
	at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:92)
	at java.util.concurrent.Executors$DelegatedExecutorService.submit(Executors.java:603)
	at org.apache.solr.core.SolrCore.getSearcher(SolrCore.java:1029)
	at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:368)
	at org.apache.solr.update.DirectUpdateHandler2$CommitTracker.run(DirectUpdateHandler2.java:515)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
	at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
	at java.util.concurrent.FutureTask.run(FutureTask.java:138)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:98)
	at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:207)
	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
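The ordering problem can be reproduced in miniature with any executor: once shutdown() has been called, a later submit() is rejected. A minimal sketch using Python's concurrent.futures as an analogy (Python raises RuntimeError where Java throws RejectedExecutionException):

```python
from concurrent.futures import ThreadPoolExecutor

# Simulates SolrCore.close(): the searcher executor is shut down first...
search_executor = ThreadPoolExecutor(max_workers=1)
search_executor.shutdown()

# ...then a late autocommit still tries to submit an "open new searcher" task.
try:
    search_executor.submit(print, "opening searcher")
    outcome = "accepted"
except RuntimeError:
    # Python's analogue of Java's RejectedExecutionException
    outcome = "rejected after shutdown"

print(outcome)  # rejected after shutdown
```

Closing the update handler (and flushing any pending autocommit) before shutting down the executor would avoid this window.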



 
Regards,
Narasimha



  

Re: Interleaving the results

2010-06-01 Thread NarasimhaRaju
Can somebody throw out some ideas on how to achieve this (interleaving) from within 
the application, especially in a distributed setup?
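One application-side approach is a round-robin merge over the result page: group the returned docs by customer, then emit one doc per customer per pass until every group is drained. A minimal sketch in Python (the customer_id field name is from the original question; the doc dicts are hypothetical):

```python
from collections import defaultdict, deque

def interleave(docs, key="customer_id"):
    # Group docs by customer, preserving retrieval order within each group.
    groups = defaultdict(deque)
    order = []                      # first-seen order of customers
    for doc in docs:
        if doc[key] not in groups:
            order.append(doc[key])
        groups[doc[key]].append(doc)
    # Round-robin: take one doc per customer per pass until all are drained.
    out = []
    while any(groups.values()):
        for k in order:
            if groups[k]:
                out.append(groups[k].popleft())
    return out

docs = [{"id": 1, "customer_id": "a"}, {"id": 2, "customer_id": "a"},
        {"id": 3, "customer_id": "b"}, {"id": 4, "customer_id": "c"}]
print([d["id"] for d in interleave(docs)])  # [1, 3, 4, 2]
```

In a distributed setup, each shard can return enough rows per customer and this merge step runs once in the aggregating application.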


"There are only 10 types of people in this world:
those who understand binary and those who don't"


Regards, 
P.N.Raju,





From: Lance Norskog goks...@gmail.com
To: solr-user@lucene.apache.org
Sent: Sat, May 29, 2010 3:04:46 AM
Subject: Re: Interleaving the results

There is no interleaving tool. There is a random number tool. You will
have to achieve this in your application.

On Fri, May 28, 2010 at 8:23 AM, NarasimhaRaju rajux...@yahoo.com wrote:
 Hi,
 How do I achieve custom ordering of the documents when there is a general query?

 Use case:
 Interleave documents from different customers one after the other.

 Example:
 Say I have 10 documents in the index belonging to 3 customers (a customer_id 
 field in the index) and I use the query *:*, so all the documents in the 
 results score the same. But I want the results to be interleaved: one document 
 from each customer should appear before a document from the same customer 
 repeats.

 Is there a way to achieve this?


 Thanks in advance

 R.







-- 
Lance Norskog
goks...@gmail.com



  

Interleaving the results

2010-05-28 Thread NarasimhaRaju
Hi,
How do I achieve custom ordering of the documents when there is a general query?

Use case:
Interleave documents from different customers one after the other.

Example:
Say I have 10 documents in the index belonging to 3 customers (a customer_id 
field in the index) and I use the query *:*, so all the documents in the 
results score the same. But I want the results to be interleaved: one document 
from each customer should appear before a document from the same customer 
repeats.

Is there a way to achieve this?


Thanks in advance 

R.



  

Re: optimize is taking too much time

2010-02-18 Thread NarasimhaRaju
Hi, 
You can also make use of the autocommit feature of Solr.
You have two options: commit after a maximum number of uncommitted docs, or 
after a maximum time.
See the updateHandler section of your solrconfig.xml.

Example:

<autoCommit>
  <!--
  <maxDocs>1</maxDocs>
  -->
  <!-- maximum time (in ms) after adding a doc before an autocommit is triggered -->
  <maxTime>60</maxTime>
</autoCommit>


Once you're done adding documents, run a final commit/optimize.
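That final commit and optimize are just XML commands posted to the update handler. A minimal sketch that builds (but does not send) those requests with urllib; the host, port, and core path are assumptions:

```python
from urllib.request import Request

SOLR_UPDATE_URL = "http://localhost:8080/solr/update"  # assumed host/port

def update_request(command, url=SOLR_UPDATE_URL):
    # Build a POST carrying one XML update command, e.g. <commit/> or <optimize/>.
    return Request(url, data=command.encode("utf-8"),
                   headers={"Content-Type": "text/xml; charset=utf-8"})

commit = update_request("<commit/>")
optimize = update_request("<optimize/>")
print(commit.get_method(), commit.data)  # POST b'<commit/>'
```

Once the server is reachable, sending is a plain urlopen(commit) followed by urlopen(optimize).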

Regards, 
P.N.Raju, 





From: Jagdish Vasani jvasani1...@gmail.com
To: solr-user@lucene.apache.org
Sent: Thu, February 18, 2010 3:12:15 PM
Subject: Re: optimize is taking too much time

Hi,

You should not optimize the index after each document insert. Instead, you
should optimize after inserting a good number of documents, because optimizing
merges all segments into one according to the settings of the Lucene index.

thanks,
Jagdish
On Fri, Feb 12, 2010 at 4:01 PM, mklprasad mklpra...@gmail.com wrote:


 Hi,
 My Solr index has 1,42,45,223 records taking up some 50 GB.
 Now when I load a new record and it tries to optimize the docs, it takes too
 much memory and time.

 Can anybody please tell me whether we have any property in Solr to get rid of
 this?

 Thanks in advance

 --
 View this message in context:
 http://old.nabble.com/optimize-is-taking-too-much-time-tp27561570p27561570.html
 Sent from the Solr - User mailing list archive at Nabble.com.





  

Re: Facet search concept problem

2010-02-15 Thread NarasimhaRaju
Hi,
You should add a new field to your index, say 'type', which will have the value 
'news', 'article', or 'blog' depending on which table the document came from.
When searching with faceting enabled, facet on this 'type' field and you will 
get what you want.
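For example, the request would add facet parameters on that new field. A sketch that only builds the query string (the host, port, and handler path are assumptions):

```python
from urllib.parse import urlencode

params = {
    "q": "soccer game",        # the user's search
    "facet": "true",
    "facet.field": "type",     # the new field holding news/article/blog
    "facet.mincount": "1",
}
url = "http://localhost:8080/solr/select?" + urlencode(params)
print(url)
```

The facet_counts section of the response then lists one count per type value (news, article, blog) instead of per search term.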

Regards, 
P.N.Raju, 






From: Ranveer Kumar ranveer.s...@gmail.com
To: solr-user@lucene.apache.org
Sent: Sun, February 14, 2010 5:45:54 AM
Subject: Facet search concept problem

Hi All,

My understanding of facet search is still not clear.

I am trying to search using a facet query. I am indexing data from three
tables; the following are the table details:

table name: news
news_id
news_details

table name : article
article_id
article_details

table name: blog
blog_id
blog_details

I am indexing above tables as:
id
news_id
news_details
article_id
article_details
blog_id
blog_details

Now I want, when a user searches for soccer game and the search matches
news(5), article(4), and blog(2), the facet list to look like:
news(5)
article(4)
blog(2)

Currently the facet listing looks like:
soccer(5)
game(6)

Please help me.
Thanks



  

Re: getting error when : in the query

2010-01-31 Thread NarasimhaRaju
Hi,
You have to escape the Lucene special characters present in the user's search 
term before handing it over to the QueryParser.
For more info, look at 
http://lucene.apache.org/java/2_9_1/queryparsersyntax.html#Escaping%20Special%20Characters
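A minimal sketch of such an escaping step in Python, covering the special characters listed on that page (the client call that would consume the escaped term is omitted):

```python
import re

# Lucene special characters: + - && || ! ( ) { } [ ] ^ " ~ * ? : \
_LUCENE_SPECIALS = re.compile(r'(&&|\|\||[+\-!(){}\[\]^"~*?:\\])')

def escape_query_term(term):
    # Prefix every special character (or && / ||) with a backslash.
    return _LUCENE_SPECIALS.sub(r'\\\1', term)

print(escape_query_term("ipod:touch"))  # ipod\:touch
```

Only escape the user-entered term, not parameters you add yourself (such as a deliberate field:value filter).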



"There are only 10 types of people in this world:
those who understand binary and those who don't"


Regards, 
P.N.Raju,





From: Ranveer Kumar ranveer.s...@gmail.com
To: solr-user@lucene.apache.org
Sent: Sun, January 31, 2010 5:35:37 PM
Subject: getting error when : in the query

Hi All,

I am facing a problem when someone searches for a string which carries the
special character ':'.
For example, querying ipod:touch throws an exception due to the ':':

Jan 31, 2010 9:56:35 PM org.apache.solr.common.SolrException log
SEVERE: org.apache.solr.common.SolrException: undefined field ipod

My full query is:
http://localhost:8080/solr/select?q=ipod:touch&hl=true&start=0&rows=10&hl.fragsize=0&hl.fl=body&hl.snippets=2&wt=xml&version=2.2

Is there any configuration to allow ':' in the query?
Please help.

thanks
with regards
Ranveer K Kumar