Expression Sort in Solr
I am working on Solr for search. I need to perform an expression sort such that: say str = ((IF AVAILABLE IN (1,2,3),100,IF(AVAILABLE IN (4,5,6),80,100)) + IF(PRICE1000,70,40)) need to order by (if(str100,40+str/40,33+str/33)+SOMEOTHERCOLUMN) DESC -- View this message in context: http://lucene.472066.n3.nabble.com/Expression-Sort-in-Solr-tp3998050.html Sent from the Solr - User mailing list archive at Nabble.com.
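(For anyone finding this thread later: Solr 3.1+ can sort directly on a function query, and simple conditionals can often be built from map() and sum(). The sketch below is only an illustration, it assumes AVAILABLE and PRICE are numeric fields, and since the comparison operators were lost from the original post, the thresholds are guesses.)

```
sort=sum(map(available,1,3,100,map(available,4,6,80,100)),
         map(price,0,1000,40,70),
         someothercolumn) desc
```

Here map(x,min,max,target,value) returns target when x falls in [min,max] and value otherwise, which is enough to emulate the IF(... IN ...) branches above.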
Re: core isolation
: looks like cores are isolated from each other. By isolated, I mean if a core : fails due to a configuration error or an error like ClassNotFoundException, : the other cores continue to work. For the most part this is true ... SolrCores are isolated from each other as much as possible -- but some things like static variables in Lucene code or plugins loaded from the sharedLib in solr.xml will be common between all cores -- the most common example of this being Lucene's BooleanQuery.getMaxClauseCount() : On the other hand I think there are some errors that will make all cores hang : : * OutOfMemoryError : * OutOfMemoryError : PermGen space : * Too many open files correct ... if a severe problem happens at the JVM process level, then all SolrCores are affected. You can run discrete JVM processes if you wish to avoid this, but then there is added resource overhead. -Hoss
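As an illustration of the sharedLib point, a solr.xml along these lines (core names here are made up) loads the jars under the shared lib directory in one classloader, so any static state in those classes is visible to every core:

```
<!-- hypothetical solr.xml: jars under sharedLib are loaded once
     and shared by all cores listed below -->
<solr persistent="false" sharedLib="lib">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0" />
    <core name="core1" instanceDir="core1" />
  </cores>
</solr>
```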
Re: Boosting tips
: Thanks Ahmet, I did that, it kinda worked (not as well as expected) the : document with ringtone was the 1st match, it was moved to the 2nd position, : I was expecting it to be at the very bottom. Tried other factors for boosting : up to 10E6 but no success. bq is an additive boost -- it basically just adds another clause to the outermost BooleanQuery produced by the parser -- as the total boost values of all clauses increase, the queryNorm increases, and the overall effect diminishes. Using the boost param of edismax, or wrapping your whole query in a {!boost} query, is a much saner way to go (it's a multiplicative boost) -Hoss
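To make the difference concrete, here is a hedged sketch of the two approaches Hoss describes (field names like `type` and `mydate` are hypothetical):

```
# additive: bq just adds one more optional clause, so its effect is
# diluted by the queryNorm as the total boost mass grows
q=polyphonic+ringtone&defType=dismax&bq=(*:*+-type:ringtone)^10

# multiplicative: {!boost} multiplies the whole score by a function
q={!boost b=$dateboost v=$qq}&qq=polyphonic+ringtone
    &dateboost=recip(ms(NOW,mydate),3.16e-11,1,1)
```

With the multiplicative form, cutting the function's value for unwanted docs scales their whole score down, so they actually sink to the bottom instead of merely losing one additive clause.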
solr 1872
I am unable to use the rar file from the site https://issues.apache.org/jira/browse/SOLR-1872. When I try to open it, I get the message 'SolrACLSecurity.rar is not a RAR archive'. Is the file there at this link? Regards Sujatha
Re: solr 1872
looks like it might actually be a zip file. try renaming/unzipping it. cheers, rob On Mon, Jul 30, 2012 at 2:50 PM, Sujatha Arun suja.a...@gmail.com wrote: I am unable to use the rar file from the site https://issues.apache.org/jira/browse/SOLR-1872. When I try to open it, I get the message 'SolrACLSecurity.rar is not a RAR archive'. Is the file there at this link? Regards Sujatha
Re: solr 1872
I can access the rar fine with WinRAR, so it should be ok, but yes, it might be in zip format. In any case, it's better to use the slightly later version -- SolrACLSecurity.java 26kb 12 Apr 2010 10:35 Thanks, Peter On Mon, Jul 30, 2012 at 7:50 PM, Sujatha Arun suja.a...@gmail.com wrote: I am unable to use the rar file from the site https://issues.apache.org/jira/browse/SOLR-1872. When I try to open it, I get the message 'SolrACLSecurity.rar is not a RAR archive'. Is the file there at this link? Regards Sujatha
Stats min and max for each group result
So, I have results like: And want to get a little more info: the min and max values for some field in the docs, limited to each group. Is this possible to do with grouping?
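(As far as I know, result grouping in this Solr generation does not compute per-group stats, but the StatsComponent's stats.facet can report min/max of a numeric field broken down by each value of a facet field. Field names below are hypothetical:)

```
/select?q=*:*&rows=0&stats=true&stats.field=price&stats.facet=group_field
```

The response then contains one stats block (min, max, sum, count, ...) per distinct value of group_field.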
newbie question
Hi, I have been able to set up the SOLR demo environment as described in SOLR 3.6.1 tutorial: http://lucene.apache.org/solr/api-3_6_1/doc-files/tutorial.html. Actually, I set it up while it was still SOLR 3.6.0. The developer I am working with has created a custom SOLR instance using 3.6.1 and has packaged it up in the same manner as the demo. However, when I run the java -jar start.jar command in the example directory of my SOLR 3.6.1 instance and I open the admin interface on my local host: http://localhost:8983/solr/admin/, the admin webpage points to the 3.6.0 instance. The log says something like the JVM is already in use for port 8983. How can I open my 3.6.1 instance? I hope this question is not too elementary. Many thanks in advance for any help you can provide. Thanks, Kate
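(From the "port 8983 already in use" message it sounds like the 3.6.0 Jetty is still running. A sketch of the usual fix, assuming the stock example start.jar:)

```
# either stop the 3.6.0 instance first (Ctrl-C in its console, or kill its PID),
# or start the 3.6.1 example on a different port:
java -Djetty.port=8984 -jar start.jar
# then browse to http://localhost:8984/solr/admin/
```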
Re: Tools for schema.xml generation and to import from a database
Thanks for the reply Alexandre, I will test your clues as soon as possible. Best Regards, On Mon, Jul 30, 2012 at 4:15 AM, Alexandre Rafalovitch arafa...@gmail.com wrote: If you are just starting with SOLR, you might as well jump to 4.0 Alpha. By the time you are finished, it will be the production copy. If you want to index stuff from the database, your first step is probably to use DataImportHandler (DIH). Once you get past the basics, you may want to write custom code, but start from DIH for faster results. You will want to modify schema.xml. I started by using the DIH example and just adding an extra core at first. This might be easier than building a full directory setup from scratch. You also don't actually need to configure the schema too much at the beginning. You can start by using dynamic fields. So, if in DIH you say that your target field is XYZ_i, it is automatically picked up as an integer field by SOLR (due to the *_i definition that you do need to have). This will not work for fields you want to do aggregation on (e.g. multiple text fields copied into one for easier search), for multilingual text fields, etc. But it will get you going. Oh, and welcome to SOLR. You will like it. Regards, Alex. Personal blog: http://blog.outerthoughts.com/ LinkedIn: http://www.linkedin.com/in/alexandrerafalovitch - Time is the quality of nature that keeps events from happening all at once. Lately, it doesn't seem to be working. (Anonymous - via GTD book) On Sun, Jul 29, 2012 at 3:45 PM, Andre Lopes lopes80an...@gmail.com wrote: Hi, I'm new to Solr. I've installed 3.6.1 but I'm a little bit confused about what to do next and how. I will use the Jetty version for now. Two points I need to know: 1 - I have 2 views that I would like to import to Solr. I think I must define a schema.xml and then import data to that schema. Am I correct with this one? 2 - Are there any tools to autogenerate the schema.xml? And are there any tools to import data to the schema (I'm using Python)?
Please give me some clues. Thanks, Best Regards, André.
Re: Indexing binary files from database issue (no errors)
Hi, Did you get a reply on this? I'd guess that it is your JDBC driver which does not handle the response to your CONCAT_WS() SQL. Try without it and see. Then try to upgrade your mysql JDBC driver to a newer version and see if it helps. -- Jan Høydahl, search solution architect Cominvent AS - www.cominvent.com Solr Training - www.solrtraining.com

On 5. juli 2012, at 09:26, anarchos78 wrote:

Greetings friends, I am trying to index binary files stored in a database (mysql) and I have no success. I have Solr configured as below:

*Solr file structure*
+solr
  +bookledger (core0)
    -conf
    +lib (all necessary libraries)
    +contrib
    +dist
  +data
    +bookledger
      -index
      -spellchecker
    +ktimatologio
      -index
      -spellchecker
  +ktimatologio (core1)
    -conf
    +lib (all necessary libraries)
    +contrib
    +dist

As you can see the configuration concerns a multicore Solr setup. Now, on bookledger (core0) I have successfully indexed binary files (stored in a database). In the second core, when I conduct a full-import I see no errors! Then, when I try to query the binary content the output is like: [B@660b1b14. What am I missing here? Thank you in advance, Tom, Greece

*The solr.xml file:*
<?xml version="1.0" encoding="UTF-8" ?>
<solr persistent="false">
  <cores adminPath="/admin/cores">
    <core name="ktimatologio" instanceDir="ktimatologio" dataDir="../data/ktimatologio/" />
    <core name="bookledger" instanceDir="bookledger" dataDir="../data/bookledger/" />
  </cores>
</solr>

*The solrconfig.xml file:*
<?xml version="1.0" encoding="UTF-8" ?>
<config>
  <abortOnConfigurationError>${solr.abortOnConfigurationError:true}</abortOnConfigurationError>
  <luceneMatchVersion>LUCENE_36</luceneMatchVersion>
  <lib dir="lib/dist/" regex="apache-solr-cell-\d.*\.jar" />
  <lib dir="lib/dist/" regex="apache-solr-clustering-\d.*\.jar" />
  <lib dir="lib/dist/" regex="apache-solr-dataimporthandler-\d.*\.jar" />
  <lib dir="lib/dist/" regex="apache-solr-langid-\d.*\.jar" />
  <lib dir="lib/dist/" regex="apache-solr-velocity-\d.*\.jar" />
  <lib dir="lib/dist/" regex="apache-solr-dataimporthandler-extras-\d.*\.jar" />
  <lib dir="lib/contrib/extraction/lib/" regex=".*\.jar" />
  <lib dir="lib/contrib/clustering/lib/" regex=".*\.jar" />
  <lib dir="lib/contrib/dataimporthandler/lib/" regex=".*\.jar" />
  <lib dir="lib/contrib/langid/lib/" regex=".*\.jar" />
  <lib dir="lib/contrib/velocity/lib/" regex=".*\.jar" />
  <lib dir="lib/contrib/extraction/lib/" regex="tika-core-\d.*\.jar" />
  <lib dir="lib/contrib/extraction/lib/" regex="tika-parsers-\d.*\.jar" />
  <dataDir>${solr.data.dir:}</dataDir>
  <directoryFactory name="DirectoryFactory" class="${solr.directoryFactory:solr.StandardDirectoryFactory}"/>
  <indexConfig>
  </indexConfig>
  <jmx />
  <updateHandler class="solr.DirectUpdateHandler2">
  </updateHandler>
  <query>
    <maxBooleanClauses>1024</maxBooleanClauses>
    <filterCache class="solr.FastLRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <queryResultCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <documentCache class="solr.LRUCache" size="512" initialSize="512" autowarmCount="0"/>
    <enableLazyFieldLoading>true</enableLazyFieldLoading>
    <queryResultWindowSize>20</queryResultWindowSize>
    <queryResultMaxDocsCached>200</queryResultMaxDocsCached>
    <listener event="newSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
      </arr>
    </listener>
    <listener event="firstSearcher" class="solr.QuerySenderListener">
      <arr name="queries">
        <lst>
          <str name="q">static firstSearcher warming in solrconfig.xml</str>
        </lst>
      </arr>
    </listener>
    <useColdSearcher>false</useColdSearcher>
    <maxWarmingSearchers>2</maxWarmingSearchers>
  </query>
  <requestDispatcher>
    <requestParsers enableRemoteStreaming="true" multipartUploadLimitInKB="2048000" />
  </requestDispatcher>
  <requestHandler name="/dataimport" class="org.apache.solr.handler.dataimport.DataImportHandler">
    <lst name="defaults">
      <str name="config">data-config.xml</str>
    </lst>
  </requestHandler>
  <requestHandler name="/select" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <int name="rows">100</int>
    </lst>
  </requestHandler>
  <requestHandler name="/browse" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="wt">velocity</str>
      <str name="v.template">browse</str>
      <str name="v.layout">layout</str>
      <str name="title">Solritas</str>
      <str name="df">text</str>
      <str name="defType">edismax</str>
      <str name="q.alt">*:*</str>
      <str name="rows">10</str>
      <str name="fl">*,score</str>
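(For what it's worth, `[B@660b1b14` is Java's default toString() for a byte[], which suggests the BLOB column is reaching the index as raw bytes rather than extracted text. With DIH, binary columns are usually routed through a FieldStreamDataSource into a TikaEntityProcessor. A sketch of a data-config.xml, in which the driver, table, and column names are all assumptions:)

```
<dataConfig>
  <dataSource name="db" driver="com.mysql.jdbc.Driver"
              url="jdbc:mysql://localhost/mydb" user="user" password="pass" />
  <dataSource name="fieldSource" type="FieldStreamDataSource" />
  <document>
    <entity name="doc" dataSource="db"
            query="SELECT id, content_blob FROM docs">
      <field column="id" name="id" />
      <!-- feed the BLOB column through Tika to get plain text -->
      <entity name="tika" dataSource="fieldSource"
              processor="TikaEntityProcessor"
              dataField="doc.content_blob" format="text">
        <field column="text" name="content" />
      </entity>
    </entity>
  </document>
</dataConfig>
```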
Re: SolrCloud replication question
Hi, Interesting article in your link. What servlet container do you use and how is it configured wrt. threads etc? You should be able to utilize all CPUs with a single Solr index, given that you are not I/O bound. Also, what is your mergeFactor? -- Jan Høydahl, search solution architect Cominvent AS - www.cominvent.com Solr Training - www.solrtraining.com On 9. juli 2012, at 22:11, avenka wrote: Hmm, never mind my question about replicating using symlinks. Given that replication on a single machine improves throughput, I should be able to get a similar improvement by simply sharding on a single machine, as also observed at http://carsabi.com/car-news/2012/03/23/optimizing-solr-7x-your-search-speed/ I am now benchmarking my workload to compare replication vs. sharding performance on a single machine.
Nested / subquery / Inner queries in Solr with facets
Hi, Can somebody help me with Solr's querying capabilities around nested / subquery / inner queries with facets? Here is my use case: 1) Fetch unique jobs for a given team: solrhost/solr/select/?q=teamname%3ANinjas+AND+jobname:*&facet=true&facet.field=jobname&facet.zeros=false&rows=0 Each unique job returned in the above query will be substituted against the jobname in the below query: 2) Job info of the latest build ID for a given team and job: solrhost/solr/select/?q=teamname:Mobile jobname:JOBNAME buildnumber:*&sort=buildnumber%20desc&rows=1 My ultimate goal using these 2 queries is to fetch those solr docs/data for a given teamname pointing to the max (most recent) buildnumber for all the unique jobs related to the given teamname. If a team has n unique jobs, then with the above two-step approach I am expected to fire (n+1) queries, which is a performance overhead. If I were to fetch this data from an RDBMS using a single query, the SQL query would look something like this: Select * from table where teamname='Ninjas' AND jobname IN ( select DISTINCT jobname from TABLE ); One way to approach this problem is to frame a nested query by merging both the above queries. If I query something like the one below, I will receive all the solr docs in the response and cannot limit the docs being returned to only the ones which have the max buildnumber, which would result in the overhead of parsing and filtering unnecessary data: solrhost/solr/select/?q=teamname:Ninjas jobname:* buildnumber:*&sort=buildnumber%20desc&rows=1000 So, can somebody help me with forming a compound / nested query for fetching the data I want in a single request? Or perhaps any workaround / Solr feature you may have used to overcome such a use case? Also, does Solr support stored procedures or triggers? Thanks, Regards, Vamshi
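(One Solr feature that may cover the "latest build per job" case in a single request is result grouping / field collapsing, available from Solr 3.3: group on jobname and keep one doc per group, sorted by buildnumber. A hedged sketch reusing the field names from the post:)

```
solrhost/solr/select/?q=teamname:Ninjas&group=true&group.field=jobname
    &group.limit=1&group.sort=buildnumber%20desc
```

This returns, for each distinct jobname, only the document with the highest buildnumber, avoiding the n+1 round trips. Solr itself has no notion of stored procedures or triggers.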
Re: too many instances of org.tartarus.snowball.Among in the heap
I did take a couple of thread dumps and they seem to be fine. The heap dump is huge - close to 15GB - and I am having a hard time analyzing it.

2012-07-30 16:07:32 Full thread dump Java HotSpot(TM) 64-Bit Server VM (19.0-b09 mixed mode):

RMI TCP Connection(33)-10.8.21.124 - Thread t@190
  java.lang.Thread.State: RUNNABLE
    at sun.management.ThreadImpl.dumpThreads0(Native Method)
    at sun.management.ThreadImpl.dumpAllThreads(ThreadImpl.java:374)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at com.sun.jmx.mbeanserver.ConvertingMethod.invokeWithOpenReturn(ConvertingMethod.java:167)
    at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:96)
    at com.sun.jmx.mbeanserver.MXBeanIntrospector.invokeM2(MXBeanIntrospector.java:33)
    at com.sun.jmx.mbeanserver.MBeanIntrospector.invokeM(MBeanIntrospector.java:208)
    at com.sun.jmx.mbeanserver.PerInterface.invoke(PerInterface.java:120)
    at com.sun.jmx.mbeanserver.MBeanSupport.invoke(MBeanSupport.java:262)
    at javax.management.StandardMBean.invoke(StandardMBean.java:391)
    at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.invoke(DefaultMBeanServerInterceptor.java:836)
    at com.sun.jmx.mbeanserver.JmxMBeanServer.invoke(JmxMBeanServer.java:761)
    at javax.management.remote.rmi.RMIConnectionImpl.doOperation(RMIConnectionImpl.java:1427)
    at javax.management.remote.rmi.RMIConnectionImpl.access$200(RMIConnectionImpl.java:72)
    at javax.management.remote.rmi.RMIConnectionImpl$PrivilegedOperation.run(RMIConnectionImpl.java:1265)
    at javax.management.remote.rmi.RMIConnectionImpl.doPrivilegedOperation(RMIConnectionImpl.java:1360)
    at javax.management.remote.rmi.RMIConnectionImpl.invoke(RMIConnectionImpl.java:788)
    at sun.reflect.GeneratedMethodAccessor50.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at sun.rmi.server.UnicastServerRef.dispatch(UnicastServerRef.java:305)
    at sun.rmi.transport.Transport$1.run(Transport.java:159)
    at java.security.AccessController.doPrivileged(Native Method)
    at sun.rmi.transport.Transport.serviceCall(Transport.java:155)
    at sun.rmi.transport.tcp.TCPTransport.handleMessages(TCPTransport.java:535)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run0(TCPTransport.java:790)
    at sun.rmi.transport.tcp.TCPTransport$ConnectionHandler.run(TCPTransport.java:649)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
    at java.lang.Thread.run(Thread.java:662)
  Locked ownable synchronizers:
    - locked 49cbecf2 (a java.util.concurrent.locks.ReentrantLock$NonfairSync)

JMX server connection timeout 189 - Thread t@189
  java.lang.Thread.State: TIMED_WAITING
    at java.lang.Object.wait(Native Method)
    - waiting on b75fa27 (a [I)
    at com.sun.jmx.remote.internal.ServerCommunicatorAdmin$Timeout.run(ServerCommunicatorAdmin.java:150)
    at java.lang.Thread.run(Thread.java:662)
  Locked ownable synchronizers:
    - None

web-77 - Thread t@186
  java.lang.Thread.State: WAITING
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for 5ab03cb6 (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    at java.lang.Thread.run(Thread.java:662)
  Locked ownable synchronizers:
    - None

web-76 - Thread t@185
  java.lang.Thread.State: WAITING
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for 5ab03cb6 (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1987)
    at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:399)
    at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:947)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
    at java.lang.Thread.run(Thread.java:662)
  Locked ownable synchronizers:
    - None

web-75 - Thread t@184
  java.lang.Thread.State: WAITING at
Re: question about jmx value (avgRequestsPerSecond) output from solr
: example scenario during testing: during a test run - the test harness will : fire requests at request handler (partItemDescSearch) and all numbers look : fine. then after the test harness is done - the metric : avgRequestsPerSecond does not immediately drop to 0. instead - it appears : as if JMX is somehow averaging this metric and gradually trending it : downward toward 0. : : continual checking of this metric (in the JMX tree - see screen shot) shows : the number trending downward instead of a hard stop at 0. : : is this behavior - just the way jmx works? It's nothing special about JMX, it's just the way RequestHandlerBase computes this number -- it's the average number of requests per second that this handler has seen over its entire lifetime -- ie: from the moment the SolrCore was created/started until now. It is quite simply... (numRequests*1000) / (currentTimeMillis()-handlerStart) If you want an average over a finer granularity of time than that, you can have your JMX client sample the numRequests stat from the handler at some interval (ie: record the value every minute, subtract the last value from the current value, and divide by 60 -- that will tell you the average number of requests per second, sampled per minute). -Hoss
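Hoss's sampling suggestion boils down to a small calculation; a sketch, with made-up sample numbers:

```python
# Average requests/sec over a sampling window, computed from two
# readings of the handler's cumulative numRequests stat.
def rate_per_second(prev_num_requests, curr_num_requests, interval_seconds):
    return (curr_num_requests - prev_num_requests) / interval_seconds

# e.g. numRequests was 12000 a minute ago and is 12600 now:
print(rate_per_second(12000, 12600, 60))  # 10.0
```

Unlike avgRequestsPerSecond, this window rate drops to 0 one sampling interval after traffic stops.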
Re: avgTimePerRequest JMX M-Bean displays with NaN instead of 0 - when no activity
: while running tests - i noticed that of the 6 JMX M-Beans : (avgRequestsPerSecond, avgTimePerRequest, errors, requests, timeouts, : totalTime) ... : : the avgTimePerRequest M-Bean was producing NaN when there was no search : activity. : : all of the other M-Beans displayed a 0 (zero) when there was no search : activity. ... : is this just WAD (works as designed)? I believe it is ... avgTimePerRequest is simply the total amount of time spent processing requests divided by the total number of requests -- in your case 0/0, which is NaN. All of the other metrics you mentioned are 0 when no requests have been processed because nothing about them results in division by zero. (Well, technically avgRequestsPerSecond could result in division by zero if you queried it at the exact millisecond that the handler was created, but that's not likely to ever happen.) -Hoss
Re: too many instances of org.tartarus.snowball.Among in the heap
Is it some kind of memory leak in Lucene's use of the Snowball stemmer? I tried to google for the Snowball stemmer but could not find any recent info about a memory leak. This old link does indicate some memory leak, but it is from 2004: http://snowball.tartarus.org/archives/snowball-discuss/0631.html Any inputs are welcome. -Saroj

On Mon, Jul 30, 2012 at 4:39 PM, roz dev rozde...@gmail.com wrote: I did take a couple of thread dumps and they seem to be fine. The heap dump is huge - close to 15GB. I am having a hard time analyzing that heap dump. [full thread dump quoted in the previous message snipped]
Re: Solr 4.0 ALPHA: AbstractSolrTestCase depending on LuceneTestCase
: I have been developing extensions to SOLR code using the 4.0 trunk. For JUnit : testing I am extending AbstractSolrTestCase which in the ALPHA release is : located in JAR apache-solr-test-framework-4.0.0-ALPHA.jar. However, this : class extends LuceneTestCase which comes from JAR : lucene-test-framework-4.0-SNAPSHOT.jar. In the ALPHA release the latter JAR : is not shipped or I can't find it. My question is which class should I use Koorosh: thank you for asking about this. I believe there is definitely a packaging bug here... https://issues.apache.org/jira/browse/SOLR-3690 : for testing customized/extensions to SOLR/LUCENE code? Is there a better way : of doing this without building the lucene-test-framework-4.0-SNAPSHOT.jar from : the source code? I would suggest you continue to use the apache-solr-test-framework.*.jar -- but for now you'll have to use the source release in order to compile the lucene-test-framework.*.jar. I would however suggest that you consider using SolrTestCaseJ4 as your base class instead of AbstractSolrTestCase -- the key distinction being that it doesn't re-create a completely new SolrCore for every test method, which isn't typically needed for most test code, and allows it to be much faster. -Hoss
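(A rough sketch of what a SolrTestCaseJ4-based test looks like -- it won't compile without the Solr test-framework jar on the classpath, and the config file names are whatever your core actually uses:)

```
public class MyExtensionTest extends SolrTestCaseJ4 {
  @BeforeClass
  public static void beforeClass() throws Exception {
    // one SolrCore for the whole class, not one per test method
    initCore("solrconfig.xml", "schema.xml");
  }

  @Test
  public void testBasicIndexAndQuery() {
    assertU(adoc("id", "1"));
    assertU(commit());
    assertQ(req("q", "id:1"), "//result[@numFound='1']");
  }
}
```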
Rebuild index after database change
Hello, I am working on a data warehouse project which is making huge modifications at the database level directly. It is working fine, but there is a third party application reading data from one of these databases. This application is using SolrJ (with an embedded server), and this is resulting in a big issue: the new data inserted directly into the database is not being shown by this application. I have researched a lot around this, but didn't find any way to make the new data available to this particular third party application. Is that something possible to do? Has someone faced this kind of issue before? Please let me know if I should provide some additional detail. Thanks in advance. Best regards.
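(The general pattern here: Solr never watches the database on its own, so whatever built the index originally has to re-run, or a DataImportHandler import has to be triggered, whenever rows are inserted directly. If the application's index were exposed over HTTP with DIH configured, the trigger would look something like the sketch below; the host, core name, and handler path are assumptions, and with a purely embedded SolrJ server the reindex has to go through the embedding application itself:)

```
curl 'http://localhost:8983/solr/core0/dataimport?command=delta-import&commit=true'
```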
Re: solr 1872
Thanks, I was looking in the rar file for instructions on setup. Regards Sujatha On Tue, Jul 31, 2012 at 1:07 AM, Peter Sturge peter.stu...@gmail.com wrote: I can access the rar fine with WinRAR, so it should be ok, but yes, it might be in zip format. In any case, it's better to use the slightly later version -- SolrACLSecurity.java 26kb 12 Apr 2010 10:35 Thanks, Peter On Mon, Jul 30, 2012 at 7:50 PM, Sujatha Arun suja.a...@gmail.com wrote: I am unable to use the rar file from the site https://issues.apache.org/jira/browse/SOLR-1872. When I try to open it, I get the message 'SolrACLSecurity.rar is not a RAR archive'. Is the file there at this link? Regards Sujatha
Re: solr 1872
Renamed it to zip and it worked fine, thanks. Regards Sujatha On Tue, Jul 31, 2012 at 9:15 AM, Sujatha Arun suja.a...@gmail.com wrote: Thanks, I was looking in the rar file for instructions on setup. Regards Sujatha On Tue, Jul 31, 2012 at 1:07 AM, Peter Sturge peter.stu...@gmail.com wrote: I can access the rar fine with WinRAR, so it should be ok, but yes, it might be in zip format. In any case, it's better to use the slightly later version -- SolrACLSecurity.java 26kb 12 Apr 2010 10:35 Thanks, Peter On Mon, Jul 30, 2012 at 7:50 PM, Sujatha Arun suja.a...@gmail.com wrote: I am unable to use the rar file from the site https://issues.apache.org/jira/browse/SOLR-1872. When I try to open it, I get the message 'SolrACLSecurity.rar is not a RAR archive'. Is the file there at this link? Regards Sujatha
Re: solr 1872
Peter, In a multicore environment, where should the acl file reside? Under the conf directory? And can I use an acl file per core? Regards Sujatha On Tue, Jul 31, 2012 at 9:15 AM, Sujatha Arun suja.a...@gmail.com wrote: Renamed it to zip and it worked fine, thanks. Regards Sujatha On Tue, Jul 31, 2012 at 9:15 AM, Sujatha Arun suja.a...@gmail.com wrote: Thanks, I was looking in the rar file for instructions on setup. Regards Sujatha On Tue, Jul 31, 2012 at 1:07 AM, Peter Sturge peter.stu...@gmail.com wrote: I can access the rar fine with WinRAR, so it should be ok, but yes, it might be in zip format. In any case, it's better to use the slightly later version -- SolrACLSecurity.java 26kb 12 Apr 2010 10:35 Thanks, Peter On Mon, Jul 30, 2012 at 7:50 PM, Sujatha Arun suja.a...@gmail.com wrote: I am unable to use the rar file from the site https://issues.apache.org/jira/browse/SOLR-1872. When I try to open it, I get the message 'SolrACLSecurity.rar is not a RAR archive'. Is the file there at this link? Regards Sujatha