Re: "Auto commit error" and java.io.FileNotFoundException
: I get these NullPointException records every once and a while, always
: from SolrCore and SolrDispatchFilter. Don't get a stack trace, and no
: nearby errors seem to clarify what might have happened.

that's really strange ... i'm not sure what's causing those, or why you
aren't seeing a stack trace. i'm not too familiar with the XmlFormatter,
but according to the docs it does support an section with multiple s.
any chance you could configure an additional SimpleFormatter just for
SEVERE messages to help us track it down?

-Hoss
Re: Rounding date fields
: > it also starts to get into the realm of "arbitrary processing of values
: > prior to storing/indexing ... which could be useful in other ways (ie:
: > parsing alternate date formats) and for other field types (ie: limit
: > numeric fields to a certain range, round float input to an int, etc...)
: > which is something i've been hoping to work on for a while now ... let
: > any FieldType have an analyzer, and add/abuse a new type="preprocess">
: > for modifying the values before they are stored or analyzed by the
: > "index" analyzer.
: >
:
: We can use UpdateRequestProcessor for arbitrary processing before
: storing. I believe that is one of the use-cases for that API.

Some of it could be done using an UpdateRequestProcessor, assuming a
Processor was created that exposed Analysis Factory like configuration to
end users (w/o needing to write java) but that only happens when adding
docs -- it wouldn't let you automatically round dates submitted at query
time to the granularity you know you are using -- it also wouldn't let
you convert "yes" to "true" for a boolean field, etc...

There's also a subtle but important distinction between the nature of the
data in the index, which should be expressed in the schema.xml via fields
and fieldtypes; and how the person who is responsible for this solr
installation wants the data to be used, which should be expressed in the
solrconfig.xml.

you could imagine having two pieces of code that achieve very similar
things (ie: rounding dates) -- one of which could be configured in the
schema.xml as a fieldType attribute, and one in the solrconfig.xml as an
update processor option for index time, and/or a query component option.

the schema.xml fieldType option would be a way to say "in this schema
about books, fields of this type must never contain anything more
granular than days" (or minutes, or hours, or what have you) and it
doesn't matter who uses that index, or how they get to it, or whether
they are updating the index or querying the index, or whether it's a
master index or a slave index: that field type is not a "date" field
type, it is now a "day" fieldtype.

the solrconfig.xml options however would be a way to say "in this
*instance* of an index using a schema about books, i don't want to deal
with dates more granular than a day" ... this option might be different
between two different instances (ie: an instance containing data for
publish on demand books vs. an instance containing data about medieval
books) or even between two copies of the same index (ie: on the master
you can index any granularity, on slaveA internal users can query with
any granularity, on slaveB the general public can only query with day
granularity to improve caching)

both approaches have their uses.

-Hoss
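For concreteness, the two places being contrasted might look roughly like
this. Note this is a sketch of the *proposed* idea, not a working feature:
the `granularity` attribute and the `RoundDateProcessorFactory` class are
hypothetical names used only to illustrate the schema.xml vs.
solrconfig.xml split (updateRequestProcessorChain itself does exist):

```xml
<!-- schema.xml: hypothetical fieldType-level option -- "data of this
     type is never more granular than a day, no matter who indexes or
     queries it" -->
<fieldType name="day" class="solr.DateField" granularity="DAY"/>

<!-- solrconfig.xml: hypothetical per-instance option -- "in this
     instance, round incoming dates at index time" -->
<updateRequestProcessorChain name="round-dates">
  <processor class="example.RoundDateProcessorFactory">
    <str name="field">pubdate</str>
    <str name="granularity">DAY</str>
  </processor>
  <processor class="solr.RunUpdateProcessorFactory"/>
</updateRequestProcessorChain>
```

The first belongs to whoever owns the schema; the second could differ
between a master and its slaves, which is exactly the distinction above.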
Re: Lucene Query reuse
: Rephrasing: I do realize that I would be able to "flatten" my Lucene
: Query-object to its String-representation and possibly use that as a
: query input to SolrQuery. But would that work for all queries ... or is
: the Lucene object-representation more expressive (i.e. richer) than the
: string-rep?

"Query" as an abstract class supports all sorts of concrete subclasses --
not all of which can be expressed as simple strings parsable by a query
parser. but that doesn't mean all of the types of queries you currently
use can't be expressed as a string -- we'd need more info about what
exactly your "rather intricate query" looks like to give you a good
answer to that part.

In general, anything you can do with Lucene directly can be done with
Solr -- if you write a Solr plugin for handling it. you could for example
write a QParser that takes in whatever input your current code parses and
then generates your complex query.

-Hoss
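For reference, hooking a custom QParser into Solr is mostly a matter of
registering a QParserPlugin in solrconfig.xml; the parser name and class
below are hypothetical placeholders:

```xml
<!-- solrconfig.xml: register a custom query parser plugin
     (com.example.MyQParserPlugin is a hypothetical class that would
     extend org.apache.solr.search.QParserPlugin and build the complex
     Lucene Query programmatically) -->
<queryParser name="myparser" class="com.example.MyQParserPlugin"/>
```

A request can then select it with `defType=myparser` (or inline via
`{!myparser}...`); the plugin's createParser() is where the existing
Lucene query-building code would live.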
Re: Creating dynamic fields with DataImportHandler
I have created SOLR-742: http://issues.apache.org/jira/browse/SOLR-742

For my case, I don't know the field name ahead of time.

Shalin Shekhar Mangar wrote:
> Yes, sounds like a bug. Do you mind opening a jira issue for this?
>
> A simple workaround is to add the field name (if you know it beforehand)
> to your data config and use the Transformer to set the value. If you
> don't know the field name beforehand then this will not work for you.
>
> On Sat, Aug 30, 2008 at 1:31 AM, wojtekpia <[EMAIL PROTECTED]> wrote:
>>
>> I have a custom row transformer that I'm using with the
>> DataImportHandler. When I try to create a dynamic field from my
>> transformer, it doesn't get created.
>>
>> If I do exactly the same thing from my dataimport handler config file,
>> it works as expected.
>>
>> Has anyone experienced this? I'm using a nightly build from about 3
>> weeks ago. I realize there were some fixes done to the
>> DataImportHandler since then, but if I understand them correctly, they
>> seem unrelated to my issue (
>> http://www.nabble.com/localsolr-and-dataimport-problems-to18849983.html#a18854923
>> ).
>> --
>> View this message in context:
>> http://www.nabble.com/Creating-dynamic-fields-with-DataImportHandler-tp19226532p19226532.html
>> Sent from the Solr - User mailing list archive at Nabble.com.
>
> --
> Regards,
> Shalin Shekhar Mangar.

--
View this message in context:
http://www.nabble.com/Creating-dynamic-fields-with-DataImportHandler-tp19226532p19227919.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr :: Snapshooter Cannot allocate memory
The OS is checking that there is enough memory... add swap space:
http://www.nabble.com/Not-enough-space-to11423199.html#a11432978

-Yonik

On Fri, Aug 29, 2008 at 4:40 PM, OLX - Pablo Garrido <[EMAIL PROTECTED]> wrote:
> Hello,
>
> We are facing this recurrent error on our Master Solr Server every now
> and then:
>
> SEVERE: java.io.IOException: Cannot run program "/opt/solr/bin/snapshooter"
> (in directory "solr/bin"): java.io.IOException: error=12, Cannot allocate memory
>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
>        at java.lang.Runtime.exec(Runtime.java:593)
>        at org.apache.solr.core.RunExecutableListener.exec(RunExecutableListener.java:70)
>        at org.apache.solr.core.RunExecutableListener.postCommit(RunExecutableListener.java:97)
>        at org.apache.solr.update.UpdateHandler.callPostCommitCallbacks(UpdateHandler.java:99)
>        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:514)
>        at org.apache.solr.handler.XmlUpdateRequestHandler.update(XmlUpdateRequestHandler.java:214)
>        at org.apache.solr.handler.XmlUpdateRequestHandler.handleRequestBody(XmlUpdateRequestHandler.java:84)
>        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:77)
>        at org.apache.solr.core.SolrCore.execute(SolrCore.java:658)
>        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:191)
>        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:159)
>        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
>        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
>        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
>        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
>        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
>        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
>        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:211)
>        at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
>        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
>        at org.mortbay.jetty.Server.handle(Server.java:285)
>        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:502)
>        at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:835)
>        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:723)
>        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:202)
>        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:378)
>        at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:226)
>        at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442)
> Caused by: java.io.IOException: java.io.IOException: error=12, Cannot
> allocate memory
>        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
>        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
>        at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
>        ... 28 more
>
> We have 3 Solr Servers, one Master and 2 Slaves. There is a cron job
> doing commits every 5 minutes on the Master with inserts/deletes, and
> the Slaves rsync the last index version from the Master every 10
> minutes. All 3 servers have this setup:
>
> * RHEL 5.0 OS, 64 bits
> * 16 GB RAM
>
> We are giving the Solr Java process 10 GB RAM. Did anybody face this
> error? Will the snapshooter process try to allocate memory from the 6 GB
> RAM available to the OS? Should we reduce 10 GB to 8 GB for the Solr
> Java process?
>
> Thanks for your help,
>
> Pablo
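The arithmetic behind the error and the "add swap" advice: to run
snapshooter, the JVM calls fork(), and under strict memory overcommit
accounting the kernel must be able to commit a second copy of the
parent's entire address space (even though exec() follows immediately).
This is a back-of-the-envelope sketch of that accounting; the 1 GB
non-heap overhead is an assumed illustrative number, and real kernels
with heuristic overcommit behave more leniently:

```java
// Illustrative check for why fork()-ing a large JVM can fail with
// error=12 (ENOMEM), and why adding swap helps. Assumes strict
// overcommit accounting; numbers are rough examples only.
public class ForkCommitCheck {

    /**
     * true if fork() of the JVM could not commit a second copy of its
     * address space within RAM + swap.
     */
    static boolean forkWouldFail(int jvmHeapGb, int ramGb, int swapGb) {
        int overheadGb = 1;                      // assumed non-heap JVM overhead
        int parentGb = jvmHeapGb + overheadGb;   // parent's committed memory
        int neededGb = parentGb * 2;             // fork commits a second copy
        return neededGb > ramGb + swapGb;        // exceeds the commit limit?
    }

    public static void main(String[] args) {
        // 10 GB heap on a 16 GB box with no swap: 22 GB > 16 GB, refused
        System.out.println(forkWouldFail(10, 16, 0));
        // same box with 8 GB of swap: 22 GB <= 24 GB, the commit fits
        System.out.println(forkWouldFail(10, 16, 8));
    }
}
```

The swap is rarely touched in practice -- it only has to exist so the
kernel can account for the worst case between fork() and exec().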
Re: Creating dynamic fields with DataImportHandler
Yes, sounds like a bug. Do you mind opening a jira issue for this?

A simple workaround is to add the field name (if you know it beforehand)
to your data config and use the Transformer to set the value. If you
don't know the field name beforehand then this will not work for you.

On Sat, Aug 30, 2008 at 1:31 AM, wojtekpia <[EMAIL PROTECTED]> wrote:
>
> I have a custom row transformer that I'm using with the
> DataImportHandler. When I try to create a dynamic field from my
> transformer, it doesn't get created.
>
> If I do exactly the same thing from my dataimport handler config file,
> it works as expected.
>
> Has anyone experienced this? I'm using a nightly build from about 3
> weeks ago. I realize there were some fixes done to the
> DataImportHandler since then, but if I understand them correctly, they
> seem unrelated to my issue (
> http://www.nabble.com/localsolr-and-dataimport-problems-to18849983.html#a18854923
> ).
> --
> View this message in context:
> http://www.nabble.com/Creating-dynamic-fields-with-DataImportHandler-tp19226532p19226532.html
> Sent from the Solr - User mailing list archive at Nabble.com.

--
Regards,
Shalin Shekhar Mangar.
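For anyone following along, a custom row transformer of the kind being
discussed looks roughly like this. DataImportHandler invokes
transformRow(...) by reflection, so this simple form needs no Solr
classes on the compile classpath; the field names ("category", "value",
the "attr_*" dynamic field pattern) are hypothetical examples, not
anything from the original poster's config:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a custom DIH row transformer. The returned map is
// what gets indexed; putting a key that matches a dynamicField pattern
// is how a transformer would create a dynamic field.
public class DynamicFieldTransformer {

    public Map<String, Object> transformRow(Map<String, Object> row) {
        Object category = row.get("category");
        if (category != null) {
            // target a hypothetical dynamic field pattern such as "attr_*"
            row.put("attr_" + category, row.get("value"));
        }
        return row;
    }

    public static void main(String[] args) {
        Map<String, Object> row = new HashMap<>();
        row.put("category", "color");
        row.put("value", "red");
        row = new DynamicFieldTransformer().transformRow(row);
        System.out.println(row.get("attr_color")); // prints red
    }
}
```

The bug under discussion (SOLR-742) is precisely that keys added this
way were being dropped when they only matched a dynamicField, not an
explicit field.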
Re: Lucene Query reuse
Rephrasing: I do realize that I would be able to "flatten" my Lucene
Query-object to its String-representation and possibly use that as a
query input to SolrQuery. But would that work for all queries ... or is
the Lucene object-representation more expressive (i.e. richer) than the
string-rep?

Maybe these are really noob questions; sorry if that is the case - I am
really new to lucene/solr.

Tobias

2008/8/29 Tobias Hill <[EMAIL PROTECTED]>
> Hi,
>
> I am in the process of moving from HibernateSearch (backed by Lucene)
> to Solr (ditto) in our application.
>
> In HibernateSearch I use the Query class provided by Lucene to build a
> rather intricate query programmatically.
>
> I would like to make my transition into Solr + Solrj as smooth as
> possible and hence I wonder if it is possible to reuse my Lucene Query
> but direct it to solr instead in some way. Any way to convert a Lucene
> Query into a SolrQuery for instance? Or some other way?
>
> Thankful for any thoughts on this,
> Tobias
Solr :: Snapshooter Cannot allocate memory
Hello,

We are facing this recurrent error on our Master Solr Server every now
and then:

SEVERE: java.io.IOException: Cannot run program "/opt/solr/bin/snapshooter"
(in directory "solr/bin"): java.io.IOException: error=12, Cannot allocate memory
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:459)
        at java.lang.Runtime.exec(Runtime.java:593)
        at org.apache.solr.core.RunExecutableListener.exec(RunExecutableListener.java:70)
        at org.apache.solr.core.RunExecutableListener.postCommit(RunExecutableListener.java:97)
        at org.apache.solr.update.UpdateHandler.callPostCommitCallbacks(UpdateHandler.java:99)
        at org.apache.solr.update.DirectUpdateHandler2.commit(DirectUpdateHandler2.java:514)
        at org.apache.solr.handler.XmlUpdateRequestHandler.update(XmlUpdateRequestHandler.java:214)
        at org.apache.solr.handler.XmlUpdateRequestHandler.handleRequestBody(XmlUpdateRequestHandler.java:84)
        at org.apache.solr.handler.RequestHandlerBase.handleRequest(RequestHandlerBase.java:77)
        at org.apache.solr.core.SolrCore.execute(SolrCore.java:658)
        at org.apache.solr.servlet.SolrDispatchFilter.execute(SolrDispatchFilter.java:191)
        at org.apache.solr.servlet.SolrDispatchFilter.doFilter(SolrDispatchFilter.java:159)
        at org.mortbay.jetty.servlet.ServletHandler$CachedChain.doFilter(ServletHandler.java:1089)
        at org.mortbay.jetty.servlet.ServletHandler.handle(ServletHandler.java:365)
        at org.mortbay.jetty.security.SecurityHandler.handle(SecurityHandler.java:216)
        at org.mortbay.jetty.servlet.SessionHandler.handle(SessionHandler.java:181)
        at org.mortbay.jetty.handler.ContextHandler.handle(ContextHandler.java:712)
        at org.mortbay.jetty.webapp.WebAppContext.handle(WebAppContext.java:405)
        at org.mortbay.jetty.handler.ContextHandlerCollection.handle(ContextHandlerCollection.java:211)
        at org.mortbay.jetty.handler.HandlerCollection.handle(HandlerCollection.java:114)
        at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:139)
        at org.mortbay.jetty.Server.handle(Server.java:285)
        at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:502)
        at org.mortbay.jetty.HttpConnection$RequestHandler.content(HttpConnection.java:835)
        at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:723)
        at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:202)
        at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:378)
        at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:226)
        at org.mortbay.thread.BoundedThreadPool$PoolThread.run(BoundedThreadPool.java:442)
Caused by: java.io.IOException: java.io.IOException: error=12, Cannot allocate memory
        at java.lang.UNIXProcess.<init>(UNIXProcess.java:148)
        at java.lang.ProcessImpl.start(ProcessImpl.java:65)
        at java.lang.ProcessBuilder.start(ProcessBuilder.java:452)
        ... 28 more

We have 3 Solr Servers, one Master and 2 Slaves. There is a cron job
doing commits every 5 minutes on the Master with inserts/deletes, and the
Slaves rsync the last index version from the Master every 10 minutes. All
3 servers have this setup:

* RHEL 5.0 OS, 64 bits
* 16 GB RAM

We are giving the Solr Java process 10 GB RAM. Did anybody face this
error? Will the snapshooter process try to allocate memory from the 6 GB
RAM available to the OS? Should we reduce 10 GB to 8 GB for the Solr Java
process?

Thanks for your help,

Pablo
admin/logging page and "Effective" level
I'm looking at the display page for the new LogLevelSelection servlet for
the first time today, and something isn't adding up for me based on my
knowledge of JDK logging, and the info on the page. according to the
explanation there...

    The effective logging level is shown to the far right. If a logger
    has unset level

...running the Solr example on the trunk, i'm seeing lots of things get
logged by various loggers, but according to the page all of those loggers
have an effective level of "OFF" -- even though it shows that they are
all "unset" and the "root" Logger is set to "INFO"

This seems like a (low priority) bug to me ... or am i just
misunderstanding what it's trying to show me here?

-Hoss
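For reference, the expected JDK-logging behavior being described: a
logger with an unset (null) level inherits its effective level from the
nearest ancestor with a set level, ultimately the root logger. A small
self-contained sketch (the logger name "org.example.core" is just an
illustrative placeholder):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class EffectiveLevelDemo {

    /** Walk up the parent chain until a logger with a set level is found. */
    static Level effectiveLevel(Logger logger) {
        for (Logger l = logger; l != null; l = l.getParent()) {
            if (l.getLevel() != null) {
                return l.getLevel();
            }
        }
        // no level anywhere in the chain (shouldn't happen: root has one)
        return Level.OFF;
    }

    public static void main(String[] args) {
        Logger.getLogger("").setLevel(Level.INFO); // root set to INFO
        Logger child = Logger.getLogger("org.example.core"); // level unset
        System.out.println(effectiveLevel(child)); // prints INFO
    }
}
```

So an "unset" logger under an INFO root should display an effective level
of INFO, not OFF -- which is why the page described above looks like a
bug.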
Creating dynamic fields with DataImportHandler
I have a custom row transformer that I'm using with the
DataImportHandler. When I try to create a dynamic field from my
transformer, it doesn't get created.

If I do exactly the same thing from my dataimport handler config file, it
works as expected.

Has anyone experienced this? I'm using a nightly build from about 3 weeks
ago. I realize there were some fixes done to the DataImportHandler since
then, but if I understand them correctly, they seem unrelated to my issue
(http://www.nabble.com/localsolr-and-dataimport-problems-to18849983.html#a18854923).

--
View this message in context:
http://www.nabble.com/Creating-dynamic-fields-with-DataImportHandler-tp19226532p19226532.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: Solr Logo thought
It's definitely open for submissions. We are at the very beginnings of
this contest still it seems, so there is plenty of time.

- Mark

brosseau wrote:

is this thing still open for propositions? we have an in house graphical
artist and i could ask him to take a shot; he does very nice work.

2008/8/20 Lukáš Vlček <[EMAIL PROTECTED]>

Hi,

Only few responded so far. How can we get more feedback? Do you think I
should work on the proposal a little bit more and then attach it to
SOLR-84?

Regards,
Lukas

On Mon, Aug 18, 2008 at 6:14 PM, Otis Gospodnetic <[EMAIL PROTECTED]> wrote:

I like it, even its asymmetry. :)

Otis
--
Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch

----- Original Message -----
From: Lukáš Vlček <[EMAIL PROTECTED]>
To: solr-user@lucene.apache.org
Sent: Sunday, August 17, 2008 7:02:25 PM
Subject: Re: Solr Logo thought

Hi,

My initial draft of Solr logo can be found here:
http://picasaweb.google.com/lukas.vlcek/Solr

The reason why I haven't attached it to SOLR-84 for now is that this is
just a draft and not a final design (there are a lot of unfinished
details). I would like to get some feedback before I spend more time on
it. I had several ideas but in the end I found that the simplicity works
best. Simple font, sun motive, just two colors. Should look fine in both
the large and small formats. As for the favicon I would use the sun
motive only - it means the O letter with the beams. The logo font still
needs a lot of small (but important) touches. For now I would like to get
feedback mostly about the basic idea.

Regards,
Lukas

On Sat, Aug 9, 2008 at 8:21 PM, Mark Miller wrote:

Plenty left, but here is a template to get things started:
http://wiki.apache.org/solr/LogoContest

Speaking of which, if we want to maintain the momentum of interest in
this topic, someone (ie: not me) should setup a "LogoContest" wiki page
with some of the "goals" discussed in the various threads on solr-user
and solr-dev recently, as well as draft up some good guidelines for how
we should run the contest

--
http://blog.lukas-vlcek.com/
Performance of putting the solr data in SAN.
Hi,

I'm just wondering if anybody has experience putting the solr data on a
SAN instead of local disk. Is there a big performance penalty? Please
share your experiences with me.

Thank you very much,

Yongjun Rong
Re: Wrong sort by score
There was a bug in my implementation of tf(float freq). It was always 1,
even when the frequency was 0. This caused a discrepancy between the
score and the debug info score - it seems like in the debug all the
dismax fields got a score, while in the "real" score fields with document
frequency 0 were eliminated.

-Yuri

On Wed, Aug 27, 2008 at 7:56 PM, Chris Hostetter <[EMAIL PROTECTED]> wrote:
>
> : It seems like the debug information is using the custom similarity as
> : it should - the bug isn't there.
> : I see in the explain information the right tf value (I modified it to
> : be 1 in my custom similarity).
> : The numbers in the explain seem to add up and make sense.
> : Is it possible that the score itself is wrong (the one that I get
> : from fl)?
>
> the score in the doclist is by definition the correct score - the debug
> info follows a different code path and sometimes that code path isn't
> in sync with the actual searching/scoring code for different query
> types (although i was pretty confident that the test i added to
> Lucene-Java a while back tested this for anything you can see in Solr
> without getting into crazy contrib Query classes)
>
> it would help if you could post:
>
> 1) the full debugQuery output from a query where you see this
> disconnect, showing all the query toString info, and the score
> explanations
> 2) the corresponding scores you see in the doclist
> 3) some more details about how your custom similarity works (can you
> post the code)
> 4) info on how you've configured dismax and what request params you are
> using (the output from using echoParams=all would be good)
>
> -Hoss
Lucene Query reuse
Hi,

I am in the process of moving from HibernateSearch (backed by Lucene) to
Solr (ditto) in our application.

In HibernateSearch I use the Query class provided by Lucene to build a
rather intricate query programmatically.

I would like to make my transition into Solr + Solrj as smooth as
possible and hence I wonder if it is possible to reuse my Lucene Query
but direct it to solr instead in some way. Any way to convert a Lucene
Query into a SolrQuery for instance? Or some other way?

Thankful for any thoughts on this,
Tobias
Re: Regarding Indexing
Hi,

You can read here and decide which strategy to adopt for maintaining
multiple indexes - http://wiki.apache.org/solr/MultipleIndexes

I used the 1st option of 'Multiple Solr webapps' as I used a single Solr
instance for indexing my 2 different modules.

- Neeti

On Fri, Aug 29, 2008 at 3:07 PM, sanraj25 <[EMAIL PROTECTED]> wrote:
>
> I want to store two independent datas in solr index. so I decided to
> create two index. But that's not possible. so i go for multicore
> concept in solr. can u give me step by step procedure to create
> multicore in solr
>
> regards,
> Santhanaraj R
>
> Norberto Meijome-2 wrote:
> >
> > On Fri, 29 Aug 2008 00:31:13 -0700 (PDT)
> > sanraj25 <[EMAIL PROTECTED]> wrote:
> >
> >> But still i cant maintain two index.
> >> please help me how to create two cores in solr
> >
> > What specific problem do you have ?
> > B
> >
> > _
> > {Beto|Norberto|Numard} Meijome
> >
> > "Always listen to experts. They'll tell you what can't be done, and
> > why. Then do it."
> > Robert A. Heinlein
> >
> > I speak for myself, not my employer. Contents may be hot. Slippery
> > when wet. Reading disclaimers makes you go blind. Writing them is
> > worse. You have been Warned.
>
> --
> View this message in context:
> http://www.nabble.com/Regarding-Indexing-tp19215093p19216746.html
> Sent from the Solr - User mailing list archive at Nabble.com.
Re: Regarding Indexing
On Fri, 29 Aug 2008 02:37:10 -0700 (PDT) sanraj25 <[EMAIL PROTECTED]> wrote:

> I want to store two independent datas in solr index. so I decided to
> create two index. But that's not possible. so i go for multicore
> concept in solr. can u give me step by step procedure to create
> multicore in solr

Hi,

without specific questions, i doubt myself or others can give you any
other information than the documentation, which can be found at:
http://wiki.apache.org/solr/CoreAdmin

Please make sure you are using (a recent version of) 1.3.

B

_
{Beto|Norberto|Numard} Meijome

Your reasoning is excellent -- it's only your basic assumptions that are
wrong.

I speak for myself, not my employer. Contents may be hot. Slippery when
wet. Reading disclaimers makes you go blind. Writing them is worse. You
have been Warned.
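To make the CoreAdmin pointer above concrete, a two-core solr.xml in the
solr home directory looks roughly like this (sketch based on the Solr 1.3
multicore example; core names and directories are placeholders for your
own):

```xml
<!-- $SOLR_HOME/solr.xml: declare two cores, each with its own
     instanceDir containing a conf/ dir with schema.xml and
     solrconfig.xml -->
<solr persistent="true">
  <cores adminPath="/admin/cores">
    <core name="core0" instanceDir="core0"/>
    <core name="core1" instanceDir="core1"/>
  </cores>
</solr>
```

Each core is then addressed under its own URL path, e.g.
http://localhost:8983/solr/core0/select and .../solr/core1/select, which
gives you the two independent indexes being asked about.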
Re: Regarding Indexing
I want to store two independent datas in solr index. so I decided to
create two index. But that's not possible. so i go for multicore concept
in solr. can u give me step by step procedure to create multicore in solr

regards,
Santhanaraj R

Norberto Meijome-2 wrote:
>
> On Fri, 29 Aug 2008 00:31:13 -0700 (PDT)
> sanraj25 <[EMAIL PROTECTED]> wrote:
>
>> But still i cant maintain two index.
>> please help me how to create two cores in solr
>
> What specific problem do you have ?
> B
>
> _
> {Beto|Norberto|Numard} Meijome
>
> "Always listen to experts. They'll tell you what can't be done, and
> why. Then do it."
> Robert A. Heinlein
>
> I speak for myself, not my employer. Contents may be hot. Slippery when
> wet. Reading disclaimers makes you go blind. Writing them is worse. You
> have been Warned.

--
View this message in context:
http://www.nabble.com/Regarding-Indexing-tp19215093p19216746.html
Sent from the Solr - User mailing list archive at Nabble.com.
Re: Regarding Indexing
On Fri, 29 Aug 2008 00:31:13 -0700 (PDT) sanraj25 <[EMAIL PROTECTED]> wrote:

> But still i cant maintain two index.
> please help me how to create two cores in solr

What specific problem do you have ?

B

_
{Beto|Norberto|Numard} Meijome

"Always listen to experts. They'll tell you what can't be done, and why.
Then do it."
Robert A. Heinlein

I speak for myself, not my employer. Contents may be hot. Slippery when
wet. Reading disclaimers makes you go blind. Writing them is worse. You
have been Warned.
Regarding Indexing
Hi,

To maintain two separate data sets in solr, I tried to create two indexes
with the help of the http://wiki.apache.org/solr/CoreAdmin instructions.
Following the document I put solr.xml in the solr home directory. But
still i cant maintain two indexes.

please help me how to create two cores in solr

regards,
Santhanaraj R

--
View this message in context:
http://www.nabble.com/Regarding-Indexing-tp19215093p19215093.html
Sent from the Solr - User mailing list archive at Nabble.com.