Re: deploy solr in Eclipse IDE

2009-10-18 Thread Amit Nithian
I think you may have better luck setting up Eclipse, Subclipse, etc. and hooking
off of trunk rather than having to re-create the Eclipse project every time
a nightly build comes out.
I simply have an Eclipse project tied to trunk, and every so often I'll do an
SVN update when I want/need the latest code.

hope that helps some!
Amit

On Thu, Oct 15, 2009 at 2:31 AM, Brian Carmalt b...@contact.de wrote:

 Hello,

 I start Solr with Jetty using the following code. If the classpath and
 src paths are set correctly in Eclipse and you pass solr.home to the
 VM on startup, you just have to start this class and you can debug Solr
 in Eclipse.

 code
 import org.mortbay.jetty.Server;
 import org.mortbay.jetty.webapp.WebAppContext;

 public class JettyStarter {

     public static void main(String[] args) {
         try {
             // Jetty 6 lets you pass the listen port straight to the
             // constructor; 8983 is Solr's conventional default port.
             Server server = new Server(8983);

             WebAppContext solr = new WebAppContext();
             solr.setContextPath("/solr");
             solr.setWar("path to solr directory or war");
             server.addHandler(solr);
             server.setStopAtShutdown(true);
             server.start();
         } catch (Exception e) {
             e.printStackTrace();
         }
     }
 }

 /code
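As Brian notes, solr.home must be handed to the VM on startup; in an Eclipse run configuration this goes under "VM arguments". The full system property name is solr.solr.home, and the path below is a placeholder:

```
-Dsolr.solr.home=/path/to/solr/home
```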


 Am Dienstag, den 13.10.2009, 16:43 -0700 schrieb Pradeep Pujari:
  Hi All,
 
  I am trying to install the Solr nightly build into the Eclipse IDE and am facing a
 lot of issues while importing the zip file. The build path, libs, and various
 source files are scattered. It took me a lot of time to configure and make it
 run.
 
  What development environments are being used, and is there a smooth way of
 importing the daily nightly build into Eclipse?
 
  Please help.
 
  Thanks,
  Pradeep.
 
 




Hudson build is back to normal: Solr-trunk #959

2009-10-18 Thread Apache Hudson Server
See http://hudson.zones.apache.org/hudson/job/Solr-trunk/959/changes




Re: deploy solr in Eclipse IDE

2009-10-18 Thread Pradeep Pujari
Hi Amit,
This is what I am looking for. Do you know the URL for trunk?

Thanks,
Pradeep.

--- On Sun, 10/18/09, Amit Nithian anith...@gmail.com wrote:

 From: Amit Nithian anith...@gmail.com
 Subject: Re: deploy solr in Eclipse IDE
 To: solr-dev@lucene.apache.org
 Date: Sunday, October 18, 2009, 12:55 AM
 I think you may have better luck setting up Eclipse, Subclipse etc and hook
 off of trunk rather than having to re-create the eclipse project every time
 a nightly build comes out.
 I simply have an eclipse project tied to trunk and every so often I'll do an
 SVN update when I want/need the latest code.



Re: [jira] Commented: (SOLR-1513) Use Google Collections in ConcurrentLRUCache

2009-10-18 Thread Lance Norskog
-1 for weak references in caching.

This makes memory management less deterministic (predictable), and at
peak it can cause cache thrashing. In other words, the worst case gets
even worse. When designing a system I want predictability and I
want to control the worst case, because system meltdowns are caused by
the worst case. Having thousands of small weak references does the
opposite.

On Sat, Oct 17, 2009 at 2:00 AM, Noble Paul (JIRA) j...@apache.org wrote:

    [ 
 https://issues.apache.org/jira/browse/SOLR-1513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12766864#action_12766864
  ]

 Noble Paul commented on SOLR-1513:
 --

 bq. Google Collections is already checked in as a dependency of Carrot 
 clustering.

 In that case we need to move it to core.

 Jason, we do not need to remove the original option. We can probably add an 
 extra parameter, say softRef=true or something. That way we are not 
 screwing up anything, and the perf benefits can be studied separately.
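The extra parameter Noble suggests would presumably surface as a cache attribute in solrconfig.xml, along the lines of the sketch below. This is hypothetical: softRef is the proposed option, not an existing one, and the other attributes are the usual FastLRUCache settings:

```xml
<!-- Hypothetical: softRef is the proposed flag, not a real Solr option -->
<filterCache class="solr.FastLRUCache"
             size="512"
             initialSize="128"
             autowarmCount="0"
             softRef="true"/>
```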


 Use Google Collections in ConcurrentLRUCache
 

                 Key: SOLR-1513
                 URL: https://issues.apache.org/jira/browse/SOLR-1513
             Project: Solr
          Issue Type: Improvement
          Components: search
    Affects Versions: 1.4
            Reporter: Jason Rutherglen
            Priority: Minor
             Fix For: 1.5

         Attachments: google-collect-snapshot.jar, SOLR-1513.patch


 ConcurrentHashMap is used in ConcurrentLRUCache.  The Google Collections 
 concurrent map implementation allows for soft values, which are great for 
 caches that potentially exceed the allocated heap.  Though I suppose Solr 
 caches usually don't use too much RAM?
 http://code.google.com/p/google-collections/

 --
 This message is automatically generated by JIRA.
 -
 You can reply to this email to add a comment to the issue online.





-- 
Lance Norskog
goks...@gmail.com


[jira] Created: (SOLR-1516) DocumentList and Document QueryResponseWriter

2009-10-18 Thread Chris A. Mattmann (JIRA)
DocumentList and Document QueryResponseWriter
-

 Key: SOLR-1516
 URL: https://issues.apache.org/jira/browse/SOLR-1516
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.3
 Environment: My MacBook Pro laptop.
Reporter: Chris A. Mattmann
Priority: Minor
 Fix For: 1.4


I tried to implement a custom QueryResponseWriter the other day and was amazed 
at the level of unmarshalling and weeding through objects that was necessary 
just to format the output o.a.l.Document list. As a user, I wanted to be able 
to implement either of two functions:

* process a document at a time, and format it (for speed/efficiency)
* process all the documents at once, and format them (in case an aggregate 
calculation is necessary for the output)

So, I've decided to contribute two simple classes that I think are sufficiently 
generic and reusable. The first is o.a.s.request.DocumentResponseWriter; it 
handles the first bullet above. The second is 
o.a.s.request.DocumentListResponseWriter. Both are abstract base classes and 
require the user to implement either an #emitDoc function (in the case of 
bullet 1) or an #emitDocList function (in the case of bullet 2). Both classes 
provide an #emitHeader and #emitFooter function pair that handles formatting and 
output before and after the Document list is processed.
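The shape described above can be sketched roughly as follows. This is not the attached patch: the method names come from the description, while the signatures, and the use of plain Maps in place of Solr's document and request/response objects, are simplifications for illustration.

```java
import java.io.IOException;
import java.io.Writer;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the DocumentListResponseWriter idea. A real
// implementation would sit behind Solr's QueryResponseWriter interface.
abstract class DocumentListWriterSketch {

    // Template method: fixed header/footer framing, pluggable body.
    public final void write(Writer out, List<Map<String, Object>> docs)
            throws IOException {
        emitHeader(out, docs.size());
        emitDocList(out, docs);
        emitFooter(out);
    }

    protected abstract void emitHeader(Writer out, int numFound)
            throws IOException;

    protected abstract void emitDocList(Writer out, List<Map<String, Object>> docs)
            throws IOException;

    protected abstract void emitFooter(Writer out) throws IOException;
}

// A minimal concrete formatter: one line per document.
class PlainTextDocListWriter extends DocumentListWriterSketch {
    @Override
    protected void emitHeader(Writer out, int numFound) throws IOException {
        out.write("numFound=" + numFound + "\n");
    }

    @Override
    protected void emitDocList(Writer out, List<Map<String, Object>> docs)
            throws IOException {
        for (Map<String, Object> doc : docs) {
            out.write(doc.toString() + "\n");
        }
    }

    @Override
    protected void emitFooter(Writer out) throws IOException {
        out.write("end\n");
    }
}
```

A subclass only decides how each document (or the whole list) is rendered; the framing and iteration stay in the base class.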



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1516) DocumentList and Document QueryResponseWriter

2009-10-18 Thread Chris A. Mattmann (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Chris A. Mattmann updated SOLR-1516:


Attachment: SOLR-1516.Mattmann.101809.patch.txt

 DocumentList and Document QueryResponseWriter
 -

 Key: SOLR-1516
 URL: https://issues.apache.org/jira/browse/SOLR-1516
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.3
 Environment: My MacBook Pro laptop.
Reporter: Chris A. Mattmann
Priority: Minor
 Fix For: 1.4

 Attachments: SOLR-1516.Mattmann.101809.patch.txt



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1506) Search multiple cores using MultiReader

2009-10-18 Thread Jason Rutherglen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1506?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Rutherglen updated SOLR-1506:
---

Attachment: SOLR-1506.patch

Fixes a bug and adds Apache headers.

 Search multiple cores using MultiReader
 ---

 Key: SOLR-1506
 URL: https://issues.apache.org/jira/browse/SOLR-1506
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Trivial
 Fix For: 1.5

 Attachments: SOLR-1506.patch, SOLR-1506.patch


 I need to search over multiple cores, and SOLR-1477 is more
 complicated than expected, so here we'll create a MultiReader
 over the cores to allow searching on them.
 Maybe in the future we can add parallel searching; however,
 SOLR-1477, if it gets completed, provides that out of the box.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1477) Search on multi-tier cores

2009-10-18 Thread Jason Rutherglen (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1477?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Jason Rutherglen updated SOLR-1477:
---

Priority: Minor  (was: Trivial)
 Summary: Search on multi-tier cores  (was: Search on local cores)

 Search on multi-tier cores
 --

 Key: SOLR-1477
 URL: https://issues.apache.org/jira/browse/SOLR-1477
 Project: Solr
  Issue Type: Improvement
Affects Versions: 1.4
Reporter: Jason Rutherglen
Priority: Minor
 Fix For: 1.5

 Attachments: SOLR-1477.patch, SOLR-1477.patch, SOLR-1477.patch, 
 SOLR-1477.patch, SOLR-1477.patch


 Search on cores in the container, using distributed search.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: [jira] Commented: (SOLR-1513) Use Google Collections in ConcurrentLRUCache

2009-10-18 Thread Jason Rutherglen
Lance,

Do you mean soft references?

On Sun, Oct 18, 2009 at 3:59 PM, Lance Norskog goks...@gmail.com wrote:
 -1 for weak references in caching.

 This makes memory management less deterministic (predictable) and at
 peak can cause cache-thrashing. In other words, the worst case gets
 even more worse. When designing a system I want predictability and I
 want to control the worst case, because system meltdowns are caused by
 the worst case. Having thousands of small weak references does the
 opposite.




Re: [jira] Commented: (SOLR-1513) Use Google Collections in ConcurrentLRUCache

2009-10-18 Thread Noble Paul നോബിള്‍ नोब्ळ्
On Mon, Oct 19, 2009 at 4:29 AM, Lance Norskog goks...@gmail.com wrote:
 -1 for weak references in caching.

 This makes memory management less deterministic (predictable) and at
 peak can cause cache-thrashing. In other words, the worst case gets
 even more worse. When designing a system I want predictability and I
 want to control the worst case, because system meltdowns are caused by
 the worst case. Having thousands of small weak references does the
 opposite.
Cache thrashing is really not that bad (as opposed to the system crashing
with an OOM). The documentation of SoftReference specifically mentions
caches as one of its applications:
http://java.sun.com/j2se/1.4.2/docs/api/java/lang/ref/SoftReference.html
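For reference, the soft-value idea under discussion can be sketched with plain java.lang.ref.SoftReference. This is a toy illustration, not the Google Collections implementation and not Solr code:

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Toy soft-value cache: the GC may reclaim values under memory pressure
// instead of letting the cache push the heap into an OutOfMemoryError.
// Not thread-safe, and stale SoftReference entries are never pruned; the
// Google Collections concurrent map handles both concerns for you.
class SoftValueCache<K, V> {
    private final Map<K, SoftReference<V>> map = new HashMap<>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        // null if the key is absent or the value was already collected
        return ref == null ? null : ref.get();
    }
}
```

The trade-off both sides are debating is visible here: get() can return null at any time the GC decides to reclaim a value, which is what makes the behavior less deterministic.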





-- 
-
Noble Paul | Principal Engineer| AOL | http://aol.com


[jira] Updated: (SOLR-1516) DocumentList and Document QueryResponseWriter

2009-10-18 Thread Shalin Shekhar Mangar (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1516?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Shalin Shekhar Mangar updated SOLR-1516:


Fix Version/s: (was: 1.4)
   1.5

Moving to 1.5. We are not accepting new features for 1.4 anymore.

 DocumentList and Document QueryResponseWriter
 -

 Key: SOLR-1516
 URL: https://issues.apache.org/jira/browse/SOLR-1516
 Project: Solr
  Issue Type: New Feature
  Components: search
Affects Versions: 1.3
 Environment: My MacBook Pro laptop.
Reporter: Chris A. Mattmann
Priority: Minor
 Fix For: 1.5

 Attachments: SOLR-1516.Mattmann.101809.patch.txt



-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.