Re: Solr nightly build failure

2010-03-23 Thread Bill Au
The carrot2 download links are broken.  I have filed a bug with them:

http://issues.carrot2.org/browse/CARROT-653

Bill

On Tue, Mar 23, 2010 at 4:09 AM, solr-dev@lucene.apache.org wrote:


 init-forrest-entities:
[mkdir] Created dir: /tmp/apache-solr-nightly/build
[mkdir] Created dir: /tmp/apache-solr-nightly/build/web

 compile-solrj:
[mkdir] Created dir: /tmp/apache-solr-nightly/build/solrj
[javac] Compiling 88 source files to
 /tmp/apache-solr-nightly/build/solrj
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

 compile:
[mkdir] Created dir: /tmp/apache-solr-nightly/build/solr
[javac] Compiling 434 source files to
 /tmp/apache-solr-nightly/build/solr
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

 compileTests:
[mkdir] Created dir: /tmp/apache-solr-nightly/build/tests
[javac] Compiling 209 source files to
 /tmp/apache-solr-nightly/build/tests
[javac] Note: Some input files use or override a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.
[javac] Note: Some input files use unchecked or unsafe operations.
[javac] Note: Recompile with -Xlint:unchecked for details.

 dist-contrib:

 init:
[mkdir] Created dir:
 /tmp/apache-solr-nightly/contrib/clustering/build/classes
[mkdir] Created dir:
 /tmp/apache-solr-nightly/contrib/clustering/lib/downloads
[mkdir] Created dir: /tmp/apache-solr-nightly/build/docs/api

 init-forrest-entities:

 compile-solrj:

 compile:
[javac] Compiling 1 source file to /tmp/apache-solr-nightly/build/solr
[javac] Note:
 /tmp/apache-solr-nightly/src/java/org/apache/solr/search/DocSetHitCollector.java
 uses or overrides a deprecated API.
[javac] Note: Recompile with -Xlint:deprecation for details.

 make-manifest:
[mkdir] Created dir: /tmp/apache-solr-nightly/build/META-INF

 proxy.setup:

 check-files:

 get-colt:
  [get] Getting:
 http://repo1.maven.org/maven2/colt/colt/1.2.0/colt-1.2.0.jar
  [get] To:
 /tmp/apache-solr-nightly/contrib/clustering/lib/downloads/colt-1.2.0.jar

 get-pcj:
  [get] Getting: http://repo1.maven.org/maven2/pcj/pcj/1.2/pcj-1.2.jar
  [get] To:
 /tmp/apache-solr-nightly/contrib/clustering/lib/downloads/pcj-1.2.jar

 get-nni:
  [get] Getting:
 http://download.carrot2.org/maven2/org/carrot2/nni/1.0.0/nni-1.0.0.jar
  [get] To:
 /tmp/apache-solr-nightly/contrib/clustering/lib/downloads/nni-1.0.0.jar
  [get] Error getting
 http://download.carrot2.org/maven2/org/carrot2/nni/1.0.0/nni-1.0.0.jar to
 /tmp/apache-solr-nightly/contrib/clustering/lib/downloads/nni-1.0.0.jar

 BUILD FAILED
 /tmp/apache-solr-nightly/common-build.xml:361: The following error occurred
 while executing this line:
 /tmp/apache-solr-nightly/common-build.xml:219: The following error occurred
 while executing this line:
 /tmp/apache-solr-nightly/contrib/clustering/build.xml:84:
 java.net.NoRouteToHostException: No route to host

 Total time: 4 minutes 3 seconds





Re: rough outline of where Solr's going

2010-03-16 Thread Bill Au
+1 on moving to Java 6 since Java 5 has been EOL'ed.

Bill

On Tue, Mar 16, 2010 at 12:00 PM, Yonik Seeley yo...@apache.org wrote:

 One more addition:
  - Consider a different wiki... our current style will serve us poorly,
 especially across major version bumps.  We need versioning.  Another
 option would be moving more documentation onto the website, where
 it would be versioned.  Getting something easy to edit/change would be
 the key there; we don't have that currently.

 -Yonik


 On Tue, Mar 16, 2010 at 10:06 AM, Yonik Seeley yo...@apache.org wrote:
  another minor addition:
   - move to JUnit 4 for new tests... and some old tests might be
  migrated (for speed reasons)
 
  I already have a SolrTestCaseJ4 that extends LuceneTestCase4J that
  avoids spinning up a solr core for each test method... but I need to
  be able to reference LuceneTestCase4J from the lucene sources (i.e. it
  works in the IDE, but not on the command line right now).
 
  -Yonik
 
 
 
 
  On Tue, Mar 16, 2010 at 10:00 AM, Yonik Seeley yo...@apache.org wrote:
  Here is a very rough list of what makes sense to me:
  - since lucene is on a new major version, the next solr release
  containing it should have a new major version number
   - this does not preclude further releases on 1.x
   - for simplicity, and the single dev model, we should just sync
  with lucene's version numbers... i.e. the next major Solr version would be 3.1
  - branches/solr would become the new trunk, with a shared trunk with
  lucene in some structure (see other thread)
  - solr cloud branch gets merged in
  - we move to Java6 (Java5 has already been EOL'd by Sun unless you pay
  money... and we need Java6 for zookeeper, scripting)
  - remove deprecations (finally!), and perhaps some additional cleanups
  that we've wanted to do
 
  -Yonik
 
 



[jira] Updated: (SOLR-1734) Add pid file to snappuller to skip script overruns, and recover from failure

2010-02-04 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-1734:
--

Attachment: SOLR-1734-2.patch

The original patch has a slight race condition within the time window between 
testing for the presence of the lock file and creating it.  Since snappuller is a 
bash script, the revised patch uses the noclobber option to prevent that. 
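The noclobber technique described here can be sketched roughly as follows; the lock-file path and messages are illustrative, not taken from the actual snappuller patch:

```shell
#!/bin/bash
# Hypothetical sketch of atomic lock-file creation with noclobber;
# the lock-file path is illustrative, not the real snappuller location.
lockfile="${TMPDIR:-/tmp}/snappuller-demo-$$.lock"

# With noclobber in effect, the ">" redirection fails if the file already
# exists, so testing for the lock and creating it happen in one atomic step,
# closing the window between a separate existence check and a separate create.
if ( set -o noclobber; echo "$$" > "$lockfile" ) 2>/dev/null; then
  echo "lock acquired by pid $$"
  # ... work that must not overlap runs here ...
  rm -f "$lockfile"
else
  echo "another instance holds the lock; exiting"
fi
```

Because the redirection itself fails when the file exists, no separate existence test is needed; `set -C` is the equivalent POSIX short form.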

 Add pid file to snappuller to skip script overruns, and recover from failure
 

 Key: SOLR-1734
 URL: https://issues.apache.org/jira/browse/SOLR-1734
 Project: Solr
  Issue Type: Improvement
  Components: replication (scripts)
Affects Versions: 1.4
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor
 Attachments: SOLR-1734-2.patch, SOLR-1734.patch


 The pid file will allow snappuller to be run as fast as possible without 
 overruns. Also it will recover from a last failed run should an older 
 snappuller process no longer be running.  The same has already been done to 
 snapinstaller in SOLR-990.  Overlapping snappuller could cause replication 
 traffic to saturate the network if a large Solr index is being replicated to 
 a large number of clients.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1734) Add pid file to snappuller to skip script overruns, and recover from failure

2010-02-04 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-1734:
--

Attachment: (was: SOLR-1734-2.patch)

 Add pid file to snappuller to skip script overruns, and recover from failure
 

 Key: SOLR-1734
 URL: https://issues.apache.org/jira/browse/SOLR-1734
 Project: Solr
  Issue Type: Improvement
  Components: replication (scripts)
Affects Versions: 1.4
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor
 Attachments: SOLR-1734-2.patch, SOLR-1734.patch


 The pid file will allow snappuller to be run as fast as possible without 
 overruns. Also it will recover from a last failed run should an older 
 snappuller process no longer be running.  The same has already been done to 
 snapinstaller in SOLR-990.  Overlapping snappuller could cause replication 
 traffic to saturate the network if a large Solr index is being replicated to 
 a large number of clients.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1734) Add pid file to snappuller to skip script overruns, and recover from failure

2010-02-04 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-1734:
--

Attachment: SOLR-1734-2.patch

 Add pid file to snappuller to skip script overruns, and recover from failure
 

 Key: SOLR-1734
 URL: https://issues.apache.org/jira/browse/SOLR-1734
 Project: Solr
  Issue Type: Improvement
  Components: replication (scripts)
Affects Versions: 1.4
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor
 Attachments: SOLR-1734-2.patch, SOLR-1734.patch


 The pid file will allow snappuller to be run as fast as possible without 
 overruns. Also it will recover from a last failed run should an older 
 snappuller process no longer be running.  The same has already been done to 
 snapinstaller in SOLR-990.  Overlapping snappuller could cause replication 
 traffic to saturate the network if a large Solr index is being replicated to 
 a large number of clients.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-1755) race condition in snappuller

2010-02-04 Thread Bill Au (JIRA)
race condition in snappuller


 Key: SOLR-1755
 URL: https://issues.apache.org/jira/browse/SOLR-1755
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor


There is a slight race condition in snapinstaller within the time window 
between testing for the presence of the lock file and creating it.  When the race 
condition is met there can be more than one instance of snapinstaller running.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1755) race condition in snappuller

2010-02-04 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1755?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-1755:
--

Attachment: SOLR-1755.patch

Since snapinstaller is a bash script, the attached patch uses the noclobber 
option to prevent the race condition.

 race condition in snappuller
 

 Key: SOLR-1755
 URL: https://issues.apache.org/jira/browse/SOLR-1755
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor
 Attachments: SOLR-1755.patch


 There is a slight race condition in snapinstaller within the time window 
 between testing for the presence of the lock file and creating it.  When the race 
 condition is met there can be more than one instance of snapinstaller running.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-1734) Add pid file to snappuller to skip script overruns, and recover from failure

2010-01-26 Thread Bill Au (JIRA)
Add pid file to snappuller to skip script overruns, and recover from failure


 Key: SOLR-1734
 URL: https://issues.apache.org/jira/browse/SOLR-1734
 Project: Solr
  Issue Type: Improvement
  Components: replication (scripts)
Affects Versions: 1.4
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor


The pid file will allow snappuller to be run as fast as possible without 
overruns. Also it will recover from a last failed run should an older 
snappuller process no longer be running.  The same has already been done to 
snapinstaller in SOLR-990.  Overlapping snappuller could cause replication 
traffic to saturate the network if a large Solr index is being replicated to a 
large number of clients.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1734) Add pid file to snappuller to skip script overruns, and recover from failure

2010-01-26 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1734?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-1734:
--

Attachment: SOLR-1734.patch

I am reusing the code from SOLR-990, which added the same feature to 
snapinstaller.  I have added a -f command line argument to force 
snappuller to run even if one is already running.  That will be useful 
when network capacity is not an issue.
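The pid-file logic with the -f override might look roughly like this sketch (the path, messages, and option handling are assumptions, not the attached patch):

```shell
#!/bin/bash
# Illustrative pid-file guard with a -f override; names are hypothetical.
pidfile="${TMPDIR:-/tmp}/snappuller-pid-demo-$$.pid"
force=0
while getopts "f" opt; do
  [ "$opt" = "f" ] && force=1
done

if [ "$force" -eq 0 ] && [ -f "$pidfile" ]; then
  oldpid=$(cat "$pidfile")
  # Recover from an earlier failed run: skip only if that pid is still alive.
  if kill -0 "$oldpid" 2>/dev/null; then
    echo "snappuller (pid $oldpid) still running; skipping"
    exit 0
  fi
fi

echo "$$" > "$pidfile"
echo "running as pid $$"
# ... the actual snapshot pull would happen here ...
rm -f "$pidfile"
```

Probing the recorded pid with `kill -0` is what lets a run recover from an earlier failure: a stale pid file whose process has died no longer blocks new runs.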

 Add pid file to snappuller to skip script overruns, and recover from failure
 

 Key: SOLR-1734
 URL: https://issues.apache.org/jira/browse/SOLR-1734
 Project: Solr
  Issue Type: Improvement
  Components: replication (scripts)
Affects Versions: 1.4
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor
 Attachments: SOLR-1734.patch


 The pid file will allow snappuller to be run as fast as possible without 
 overruns. Also it will recover from a last failed run should an older 
 snappuller process no longer be running.  The same has already been done to 
 snapinstaller in SOLR-990.  Overlapping snappuller could cause replication 
 traffic to saturate the network if a large Solr index is being replicated to 
 a large number of clients.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: LucidWorks for Solr Reference Guide on front page?

2010-01-12 Thread Bill Au
+1.  We should explicitly point out that certain sections (like
installation) apply to LucidWorks and not Apache Solr.

Bill

On Tue, Jan 12, 2010 at 8:32 AM, Ryan McKinley ryan...@gmail.com wrote:


 On Jan 12, 2010, at 5:45 AM, Erik Hatcher wrote:

  Hey gang,

 I'd like to put an image and link to our LucidWorks for Solr Certified
 Distro Reference Guide on the Solr front page, and wondering if there were
 any objections.

 To toss out my only objection with my Apache hat on (yes, I'm really
 wearing it!), the reference guide isn't strictly about *Apache* Solr, it's
 about our distro of it.  So there is just a little bit more than in the
 distro (like kstem, tomcat).  But then again, the Packt book* has more than
 what's in Solr 1.4 too (localsolr, field collapsing).

 Here's a link to our reference guide, that includes the image I'd use too:
 
 http://www.lucidimagination.com/Downloads/LucidWorks-for-Solr/Reference-Guide
 

 If there are no objections, I'll add it sometime this week.


 +1, more doc is better than less!

 ryan



Re: handling of Lucene's ParseException inside QueryComponent

2009-11-23 Thread Bill Au
I dug deeper and discovered that the exception message is being added to the
HTTP response line by SolrDispatchFilter.  So that is where the fix should
be made.  I will open a Jira and attach a patch.

Bill

On Fri, Nov 20, 2009 at 5:34 PM, Bill Au bill.w...@gmail.com wrote:

 I just noticed that the message of Lucene's ParseException contains the
 user's input that Lucene is failing to parse.  The user input is not
 sanitized in any way.  My appserver is showing the exception message in both
 the body and the HTTP status line of the response.  So even if I set up
 custom error pages the user input is still being sent unsanitized in the
 response.  I don't know whether other appservers behave this way or not.
 I don't think I can sanitize the user input before sending it to Solr/Lucene
 since the content of my index contains special characters.

 I am thinking that we can change the behavior of QueryComponent.  Since
 Solr is a webapp, I don't think it is unreasonable to have Solr be
 responsible for sanitizing exception messages.  This is the current
 QueryComponent code:

 } catch (ParseException e) {
   throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, e);
 }

 Instead of wrapping the ParseException in the SolrException, we can simply
 sanitize the message of the ParseException and use that to create the
 SolrException.
 I can submit a patch for this.

 Any comments/suggestions?

 Bill



[jira] Created: (SOLR-1594) SolrDispatchFilter needs to sanitize exception message

2009-11-23 Thread Bill Au (JIRA)
SolrDispatchFilter needs to sanitize exception message
--

 Key: SOLR-1594
 URL: https://issues.apache.org/jira/browse/SOLR-1594
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4
Reporter: Bill Au
Assignee: Bill Au
 Fix For: 1.5
 Attachments: solr-1594.patch

SolrDispatchFilter needs to sanitize exception messages before using them in 
the response.  I will attach a patch shortly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-1594) SolrDispatchFilter needs to sanitize exception message

2009-11-23 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-1594:
--

Attachment: solr-1594.patch

 SolrDispatchFilter needs to sanitize exception message
 --

 Key: SOLR-1594
 URL: https://issues.apache.org/jira/browse/SOLR-1594
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4
Reporter: Bill Au
Assignee: Bill Au
 Fix For: 1.5

 Attachments: solr-1594.patch


 SolrDispatchFilter needs to sanitize exception messages before using them in 
 the response.  I will attach a patch shortly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: handling of Lucene's ParseException inside QueryComponent

2009-11-23 Thread Bill Au
https://issues.apache.org/jira/browse/SOLR-1594

Bill

On Mon, Nov 23, 2009 at 11:45 AM, Bill Au bill.w...@gmail.com wrote:

 I dug deeper and discovered that the exception message is being added to
 the HTTP response line by SolrDispatchFilter.  So that is where the fix
 should be made.  I will open a Jira and attach a patch.

 Bill


 On Fri, Nov 20, 2009 at 5:34 PM, Bill Au bill.w...@gmail.com wrote:

 I just noticed that the message of Lucene's ParseException contains the
 user's input that Lucene is failing to parse.  The user input is not
 sanitized in any way.  My appserver is showing the exception message in both
 the body and the HTTP status line of the response.  So even if I set up
 custom error pages the user input is still being sent unsanitized in the
 response.  I don't know whether other appservers behave this way or not.
 I don't think I can sanitize the user input before sending it to Solr/Lucene
 since the content of my index contains special characters.

 I am thinking that we can change the behavior of QueryComponent.  Since
 Solr is a webapp, I don't think it is unreasonable to have Solr be
 responsible for sanitizing exception messages.  This is the current
 QueryComponent code:

 } catch (ParseException e) {
   throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, e);
 }

 Instead of wrapping the ParseException in the SolrException, we can simply
 sanitize the message of the ParseException and use that to create the
 SolrException.
 I can submit a patch for this.

 Any comments/suggestions?

 Bill





[jira] Commented: (SOLR-1594) SolrDispatchFilter needs to sanitize exception message

2009-11-23 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12781496#action_12781496
 ] 

Bill Au commented on SOLR-1594:
---

Try running this query:

solr/select/?q=title:<script>alert(xss)</script>

 SolrDispatchFilter needs to sanitize exception message
 --

 Key: SOLR-1594
 URL: https://issues.apache.org/jira/browse/SOLR-1594
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4
Reporter: Bill Au
Assignee: Bill Au
 Fix For: 1.5

 Attachments: solr-1594.patch


 SolrDispatchFilter needs to sanitize exception messages before using them in 
 the response.  I will attach a patch shortly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1594) SolrDispatchFilter needs to sanitize exception message

2009-11-23 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12781507#action_12781507
 ] 

Bill Au commented on SOLR-1594:
---

Jetty is sanitizing both the HTTP response line and the response body so it is 
OK.  I know of at least one appserver that is not doing that.

 SolrDispatchFilter needs to sanitize exception message
 --

 Key: SOLR-1594
 URL: https://issues.apache.org/jira/browse/SOLR-1594
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4
Reporter: Bill Au
Assignee: Bill Au
 Fix For: 1.5

 Attachments: solr-1594.patch


 SolrDispatchFilter needs to sanitize exception messages before using them in 
 the response.  I will attach a patch shortly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1594) SolrDispatchFilter needs to sanitize exception message

2009-11-23 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12781522#action_12781522
 ] 

Bill Au commented on SOLR-1594:
---

I just tried it and Jetty does double-escape:

org.apache.lucene.queryParser.ParseException: Cannot parse 
'&amp;lt;script&amp;gt;alert(xss)&amp;lt;/script&amp;gt;': Lexical error at 
line 1, column 31.  Encountered: &amp;lt;EOF&amp;gt; after : 
\)&amp;lt;/script&amp;gt;

So should we leave it up to the appserver to do the right thing, or should Solr 
be more proactive?  To me double-escaping is a lesser evil than being 
vulnerable to an XSS attack.

 SolrDispatchFilter needs to sanitize exception message
 --

 Key: SOLR-1594
 URL: https://issues.apache.org/jira/browse/SOLR-1594
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4
Reporter: Bill Au
Assignee: Bill Au
 Fix For: 1.5

 Attachments: solr-1594.patch


 SolrDispatchFilter needs to sanitize exception messages before using them in 
 the response.  I will attach a patch shortly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-1594) SolrDispatchFilter needs to sanitize exception message

2009-11-23 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1594?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au resolved SOLR-1594.
---

Resolution: Invalid

This same problem had been reported against and fixed in Tomcat.  I have 
reported it to the vendor of my appserver and they are working on a fix.  
Marking as invalid.

 SolrDispatchFilter needs to sanitize exception message
 --

 Key: SOLR-1594
 URL: https://issues.apache.org/jira/browse/SOLR-1594
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4
Reporter: Bill Au
Assignee: Bill Au
 Fix For: 1.5

 Attachments: solr-1594.patch


 SolrDispatchFilter needs to sanitize exception messages before using them in 
 the response.  I will attach a patch shortly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



handling of Lucene's ParseException inside QueryComponent

2009-11-20 Thread Bill Au
I just noticed that the message of Lucene's ParseException contains the
user's input that Lucene is failing to parse.  The user input is not
sanitized in any way.  My appserver is showing the exception message in both
the body and the HTTP status line of the response.  So even if I set up
custom error pages the user input is still being sent unsanitized in the
response.  I don't know whether other appservers behave this way or not.
I don't think I can sanitize the user input before sending it to Solr/Lucene
since the content of my index contains special characters.

I am thinking that we can change the behavior of QueryComponent.  Since Solr
is a webapp, I don't think it is unreasonable to have Solr be responsible
for sanitizing exception messages.  This is the current QueryComponent code:

} catch (ParseException e) {
  throw new SolrException(SolrException.ErrorCode.BAD_REQUEST, e);
}

Instead of wrapping the ParseException in the SolrException, we can simply
sanitize the message of the ParseException and use that to create the
SolrException.
I can submit a patch for this.
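As a rough illustration of the escaping involved (a standalone shell sketch for demonstration; the proposed fix itself would live in Solr's Java code, and this sed-based escaper is an assumption, not code from any patch):

```shell
# Minimal sketch of HTML-escaping a parse-failure message before echoing it
# back in a response; the sed-based escaper is illustrative only.
msg="Cannot parse '<script>alert(\"xss\")</script>'"
sanitized=$(printf '%s' "$msg" \
  | sed -e 's/&/\&amp;/g' -e 's/</\&lt;/g' -e 's/>/\&gt;/g')
echo "$sanitized"
```

Note that `&` must be escaped first so that the `&lt;` and `&gt;` produced by the later substitutions are not themselves re-escaped.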

Any comments/suggestions?

Bill


Re: Solr 1.5 or 2.0?

2009-11-19 Thread Bill Au
Since Solr is dependent on Lucene, I agree that there should be a major
version number bump in Solr whenever there is one in Lucene:

Solr 2.x with Lucene 3.x

On Thu, Nov 19, 2009 at 11:11 AM, Mark Miller markrmil...@gmail.com wrote:

 Yonik Seeley wrote:
  What should the next version of Solr be?
 
  Options:
  - have a Solr 1.5 with a lucene 2.9.x
  - have a Solr 1.5 with a lucene 3.x, with weaker back compat given all
  of the removed lucene deprecations from 2.9-3.0
  - have a Solr 2.0 with a lucene 3.x
 
  -Yonik
  http://www.lucidimagination.com
 
 +1 for 2.0 with 3.x - not going to be easy keeping devs from taking
 advantage of new Lucene features for another whole release period. And
 1.5 with 3.x doesn't make a lot of sense if it's going to be weaker back
 compat - 2.0 makes sense in that case.

 --
 - Mark

 http://www.lucidimagination.com






[jira] Commented: (SOLR-1545) add support for sort to MoreLikeThis

2009-11-18 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1545?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12779652#action_12779652
 ] 

Bill Au commented on SOLR-1545:
---

Has anyone had a chance to review/test this patch?  I have been testing it with 
my app and have not found any problem.

 add support for sort to MoreLikeThis
 

 Key: SOLR-1545
 URL: https://issues.apache.org/jira/browse/SOLR-1545
 Project: Solr
  Issue Type: Improvement
  Components: search
Affects Versions: 1.4
Reporter: Bill Au
Priority: Minor
 Fix For: 1.5

 Attachments: solr-1545.patch


 Add support for sort to MoreLikeThis.  I will attach a patch with more info 
 shortly.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: [VOTE] 1.4 new RC up

2009-11-09 Thread Bill Au
+1 for releasing the latest release candidate as 1.4.

Bill

On Mon, Nov 9, 2009 at 11:15 AM, Yonik Seeley yo...@lucidimagination.comwrote:

 On Mon, Nov 9, 2009 at 11:07 AM, Ryan McKinley ryan...@gmail.com wrote:
  I assume we will have a 1.4.1 release when lucene 3.0 comes out.
  Hopefully
  that is soon.

 Lucene 3.0 won't really have any new features - it's really just meant
 to remove deprecations.
 It could take a decent amount of work to migrate all of Solr to
 non-deprecated APIs, so it could potentially be a while before we're
 on a Lucene 3.x release.

 -Yonik
 http://www.lucidimagination.com



Re: [VOTE] Release Solr 1.4.0

2009-11-02 Thread Bill Au
+1

Tested on Redhat EL5 with SUN Java 1.6.0_16 and Resin 3.0.23.  All my custom
Solr plugins (TokenFilter, RequestHandler, SearchComponent, QParserPlugin,
QParser, and UpdateRequestProcessor) work as expected without any problem.

Nice job everyone :-)

Bill

On Mon, Nov 2, 2009 at 10:32 AM, Eric Pugh
ep...@opensourceconnections.comwrote:

 A happy non binding +1!



 On Nov 2, 2009, at 10:30 AM, Grant Ingersoll wrote:

  +1

 On Oct 30, 2009, at 9:40 AM, Grant Ingersoll wrote:

 OK, take 3:
 http://people.apache.org/~gsingers/solr/1.4.0/

 On Oct 30, 2009, at 8:10 AM, Grant Ingersoll wrote:

  Got it.  Will upload shortly.

 On Oct 29, 2009, at 8:33 PM, Yonik Seeley wrote:

  On Thu, Oct 29, 2009 at 7:36 PM, Yonik Seeley
 yo...@lucidimagination.com wrote:

 Lucene 2.9.1 respin 3 vote has started... I'm downloading now and will
 test + check in.


 Done.  You're up Grant!

 -Yonik
 http://www.lucidimagination.com








 -
 Eric Pugh | Principal | OpenSource Connections, LLC | 434.466.1467 |
 http://www.opensourceconnections.com
 Co-Author: Solr 1.4 Enterprise Search Server available from
 http://www.packtpub.com/solr-1-4-enterprise-search-server
 Free/Busy: http://tinyurl.com/eric-cal











Re: Another RC

2009-10-25 Thread Bill Au
So far so good on Resin 3.0.x (that's not the latest version of Resin).

Bill

On Sun, Oct 25, 2009 at 6:07 PM, Ryan McKinley ryan...@gmail.com wrote:

 I've been testing with jetty 7.0.0.v20091005, and everything works good so
 far..



 On Oct 25, 2009, at 3:59 PM, Yonik Seeley wrote:

  If all goes well in lucene-land 2.9.1 should start a vote on Monday
 sometime...

 I've recently tested the latest stable Jetty (6.1.21) ... so we can
 avoid some duplication, has anyone tested with the latest tomcat,
 resin, or other popular servlet containers?

 -Yonik
 http://www.lucidimagination.com


 On Mon, Oct 19, 2009 at 5:57 PM, Yonik Seeley
 yo...@lucidimagination.com wrote:

 OK, so let's do the following: as soon as the Lucene 2.9.1 official RC
 is put up (the one that will be voted on), I'll update Solr and we can
 do our vote at the same time, saving 3 or 4 days... this release has
 really been held up long enough :-)

 We can re-evaluate what to do if for some reason the Lucene vote
 doesn't pass (i.e., we won't blindly release Solr if the lucene vote
 fails).

 Hopefully everyone has looked at the latest Solr distributions for any
 potential showstoppers that would cause them to vote -1 during the
 official vote.

 -Yonik
 http://www.lucidimagination.com

 On Mon, Oct 19, 2009 at 1:35 PM, Walter Underwood wun...@wunderwood.org
 wrote:

 Please wait for an official release of Lucene. It makes things SO much
 easier when you need to dig into the Lucene code.

 It is well worth a week delay.

 wunder

 On Oct 19, 2009, at 10:27 AM, Yonik Seeley wrote:

  On Mon, Oct 19, 2009 at 10:59 AM, Grant Ingersoll gsing...@apache.org
 
 wrote:


 Are we ready for a release?


 +1

 I don't think we need to wait for Lucene 2.9.1 - we have all the fixes
 in our version, and there's little point in pushing things off yet
 another week.

 Seems like the next RC should be a *real* one (i.e. no RC label in the
 version, immediately call a VOTE).

 -Yonik
 http://www.lucidimagination.com

   I got busy at work and haven't been able to
 address things as much, but it seems like things are progressing.

 Shall I generate another RC or are we waiting for Lucene 2.9.1?  If we go
 w/ the 2.9.1-dev, then we just need to restore the Maven stuff for them.
  Hopefully, that stuff was just commented out and not completely removed
 so as to make it a little easier to restore.

 -Grant









Re: [jira] Commented: (SOLR-1513) Use Google Collections in ConcurrentLRUCache

2009-10-20 Thread Bill Au
+1 for making soft references an available option via configuration, keeping
the current behavior as the default.

Bill

2009/10/20 Noble Paul നോബിള്‍ नोब्ळ् noble.p...@corp.aol.com

 On Tue, Oct 20, 2009 at 6:07 PM, Mark Miller markrmil...@gmail.com
 wrote:

   I'm +1 obviously ;) No one is talking about making it the default. And I
   think it's well known that soft value caches can be a valid choice -
   that's why google has one in their collections here ;) It's a nice way to
   let your cache grow and shrink based on the available RAM. It's not
   always the right choice, but sure is a nice option. And it doesn't have
   much to do with Lucene's FieldCaches. The main reason for a soft value
   cache is not to avoid OOM. Set your cache sizes correctly for that. And
   even if it was to avoid OOM, who cares if something else causes more of
   them? That's like not fixing a bug in a piece of code because another
   piece of code has more bugs. Anyway, their purpose is to allow the cache
   to size itself depending on the available free RAM, IMO.
 
 +1

 
  Noble Paul നോബിള്‍ नोब्ळ् wrote:
   So , is everyone now in favor of this feature? Who has a -1 on this?
 and
   what is the concern?
  
   On Tue, Oct 20, 2009 at 3:56 PM, Mark Miller markrmil...@gmail.com
  wrote:
  
  
   On Oct 20, 2009, at 12:12 AM, Shalin Shekhar Mangar 
   shalinman...@gmail.com wrote:
  
 I don't think the debate is about weak references vs. soft references.
  
   There appears to be confusion between the two here no matter what the
   debate - soft references are for caching, weak references are not so
  much.
   Getting it right is important.
  
I
  
   guess the point that Lance is making is that using such a technique
  will
   make application performance less predictable. There's also a good
  chance
   that a soft reference based cache will cause cache thrashing and will
  hide
   OOMs caused by inadequate cache sizes. So basically we trade an OOM
 for
   more
   CPU usage (due to re-computation of results).
  
  
   That's the whole point. You're not hiding anything. I don't follow you.
  
  
  
  
   Personally, I think giving an option is fine. What if the user does
 not
   have
   enough RAM and he is willing to pay the price? Right now, there is no
  way
   he
   can do that at all. However, the most frequent reason behind OOMs is
  not
   having enough RAM to create the field caches and not Solr caches, so
  I'm
   not
   sure how important this is.
  
  
   How important is any feature? You don't have a use for it, so it's not
   important to you - someone else does so it is important to them. Soft
  value
   caches can be useful.
  
  
  
  
   On Tue, Oct 20, 2009 at 8:41 AM, Mark Miller markrmil...@gmail.com
   wrote:
  
There is a difference - weak references are not very good for
  caches
  
   -
   soft references (soft values here) are good for caches in most jvms.
  They
   can be very nice. Weak refs are eagerly reclaimed - it's suggested
  that
   impls should not eagerly reclaim soft refs.
  
   - Mark
  
   http://www.lucidimagination.com (mobile)
  
  
   On Oct 19, 2009, at 8:22 PM, Lance Norskog goks...@gmail.com
 wrote:
  
   Soft references then. Weak pointers is an older term. (They're
   weak because some bully can steal their candy.)
  
   On Sun, Oct 18, 2009 at 8:37 PM, Jason Rutherglen
   jason.rutherg...@gmail.com wrote:
  
Lance,
  
   Do you mean soft references?
  
   On Sun, Oct 18, 2009 at 3:59 PM, Lance Norskog goks...@gmail.com
 
   wrote:
  
-1 for weak references in caching.
  
   This makes memory management less deterministic (predictable) and
  at
   peak can cause cache-thrashing. In other words, the worst case
 gets
   even worse. When designing a system I want predictability
 and
  I
   want to control the worst case, because system meltdowns are
 caused
  by
   the worst case. Having thousands of small weak references does
 the
   opposite.
  
   On Sat, Oct 17, 2009 at 2:00 AM, Noble Paul (JIRA) 
  j...@apache.org
   wrote:
  
  
  
[
  
  
 
 https://issues.apache.org/jira/browse/SOLR-1513?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12766864#action_12766864
   ]
  
   Noble Paul commented on SOLR-1513:
   --
  
   bq.Google Collections is already checked in as a dependency of
  Carrot
   clustering.
  
   in that case we need to move it to core.
  
   Jason . We do not need to remove the original option. We can
  probably
   add an extra parameter, say softRef=true or something. That way, we
   are
   not screwing up anything and perf benefits can be studied
  separately.
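
  A configuration knob like the one described here might appear in
  solrconfig.xml roughly as below; the softRef attribute is hypothetical,
  illustrating the proposal rather than an existing Solr option:

  ```xml
  <!-- hypothetical softRef attribute: hold cache values through
       SoftReferences instead of strong references -->
  <filterCache class="solr.FastLRUCache"
               size="512"
               initialSize="512"
               softRef="true"/>
  ```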
  
  
   Use Google Collections in ConcurrentLRUCache
  
  
   
  
Key: SOLR-1513
URL:
  https://issues.apache.org/jira/browse/SOLR-1513
Project: Solr
 Issue Type: Improvement
 Components: search
Affects Versions: 1.4
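
The soft-vs-weak distinction argued in this thread is the crux: weak referents
can vanish at the very next GC, while soft referents are kept until memory
runs low, which is what makes them usable for caching. A minimal sketch in
plain Java follows -- the SoftValueCache class is illustrative only, not
Solr's actual ConcurrentLRUCache API:

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a map whose values are held through SoftReferences,
// so the JVM may reclaim them, but only under memory pressure (unlike
// weak references, which may be cleared at any GC).
public class SoftValueCache<K, V> {
    private final Map<K, SoftReference<V>> map =
            new HashMap<K, SoftReference<V>>();

    public void put(K key, V value) {
        map.put(key, new SoftReference<V>(value));
    }

    public V get(K key) {
        SoftReference<V> ref = map.get(key);
        return ref == null ? null : ref.get(); // null once reclaimed
    }
}
```

A real implementation would also purge entries whose referents have been
cleared (e.g. via a ReferenceQueue); Google Collections' MapMaker with soft
values handles that bookkeeping.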
  

[jira] Commented: (SOLR-1294) SolrJS/Javascript client fails in IE8!

2009-10-06 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12762604#action_12762604
 ] 

Bill Au commented on SOLR-1294:
---

jscalendar.js is LGPL so I am not sure if it can be included into Solr.

The latest patch does not apply cleanly to the trunk.  After I manually fixed 
things, the reuters example now works for IE8.  But I am still getting one 
error for IE7:

Line: 61
Error: Expected Identifier, string or number

But if I ignore that error, the auto suggestion and searches do seem to work 
correctly.  It is an improvement since those were not working before.

 SolrJS/Javascript client fails in IE8!
 --

 Key: SOLR-1294
 URL: https://issues.apache.org/jira/browse/SOLR-1294
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4
Reporter: Eric Pugh
Assignee: Ryan McKinley
 Fix For: 1.4

 Attachments: jscalendar.tar, SOLR-1294-full.patch, 
 SOLR-1294-IE8.patch, SOLR-1294.patch, solrjs-ie8-html-syntax-error.patch


 SolrJS seems to fail with 'jQuery.solrjs' is null or not an object errors 
 under IE8.  I am continuing to test if this occurs in IE 6 and 7 as well.  
 This happens on both the Sample online site at 
 http://solrjs.solrstuff.org/test/reuters/ as well as the 
 /trunk/contrib/javascript library.   Seems to be a show stopper from the 
 standpoint of really using this library!
 I have posted a screenshot of the error at 
 http://img.skitch.com/20090717-jejm71u6ghf2dpn3mwrkarigwm.png
 The error is just a whole bunch of repeated messages in the vein of:
 Message: 'jQuery.solrjs' is null or not an object
 Line: 24
 Char: 1
 Code: 0
 URI: file:///C:/dev/projects/lib/solr/contrib/javascript/src/core/QueryItem.js
 Message: 'jQuery.solrjs' is null or not an object
 Line: 37
 Char: 1
 Code: 0
 URI: file:///C:/dev/projects/lib/solr/contrib/javascript/src/core/Manager.js
 Message: 'jQuery.solrjs' is null or not an object
 Line: 24
 Char: 1
 Code: 0
 URI: 
 file:///C:/dev/projects/lib/solr/contrib/javascript/src/core/AbstractSelectionView.js
 Message: 'jQuery.solrjs' is null or not an object
 Line: 27
 Char: 1
 Code: 0
 URI: 
 file:///C:/dev/projects/lib/solr/contrib/javascript/src/core/AbstractWidget.js

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1485) PayloadTermQuery support

2009-10-03 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12761931#action_12761931
 ] 

Bill Au commented on SOLR-1485:
---

I am +0 on including/excluding this from 1.4.  FYI, Solr 1.4 already has a 
DelimitedPayloadTokenFilterFactory which uses the DelimitedPayloadTokenFIlter 
in Lucene.  If we include this, I think we should also include a Similarity 
class for payload, either as part of this JIRA or a separate one.

There is also a similar JIRA on query support:

https://issues.apache.org/jira/browse/SOLR-1337

 PayloadTermQuery support
 

 Key: SOLR-1485
 URL: https://issues.apache.org/jira/browse/SOLR-1485
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Erik Hatcher
 Attachments: PayloadTermQueryPlugin.java


 Solr currently has no support for Lucene's PayloadTermQuery, yet it has 
 support for indexing payloads. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (SOLR-1481) phps writer ignores omitHeader parameter

2009-10-02 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au reassigned SOLR-1481:
-

Assignee: Bill Au

 phps writer ignores omitHeader parameter
 

 Key: SOLR-1481
 URL: https://issues.apache.org/jira/browse/SOLR-1481
 Project: Solr
  Issue Type: Bug
  Components: search
Reporter: Koji Sekiguchi
Assignee: Bill Au
Priority: Trivial
 Fix For: 1.4

 Attachments: SOLR-1481.patch


 My co-worker found this one. I'm expecting a patch will be attached soon by 
 him. :)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1481) phps writer ignores omitHeader parameter

2009-10-02 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1481?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12761592#action_12761592
 ] 

Bill Au commented on SOLR-1481:
---

I can take this one.

 phps writer ignores omitHeader parameter
 

 Key: SOLR-1481
 URL: https://issues.apache.org/jira/browse/SOLR-1481
 Project: Solr
  Issue Type: Bug
  Components: search
Reporter: Koji Sekiguchi
Assignee: Bill Au
Priority: Trivial
 Fix For: 1.4

 Attachments: SOLR-1481.patch


 My co-worker found this one. I'm expecting a patch will be attached soon by 
 him. :)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-1481) phps writer ignores omitHeader parameter

2009-10-02 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-1481?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au resolved SOLR-1481.
---

Resolution: Fixed

The patch looks good.  I have committed it:

Sending        CHANGES.txt
Sending        src/java/org/apache/solr/request/PHPSerializedResponseWriter.java
Transmitting file data ..
Committed revision 821076.


Thanks, Jun.

 phps writer ignores omitHeader parameter
 

 Key: SOLR-1481
 URL: https://issues.apache.org/jira/browse/SOLR-1481
 Project: Solr
  Issue Type: Bug
  Components: search
Reporter: Koji Sekiguchi
Assignee: Bill Au
Priority: Trivial
 Fix For: 1.4

 Attachments: SOLR-1481.patch


 My co-worker found this one. I'm expecting a patch will be attached soon by 
 him. :)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1482) Solr master and slave freeze after query

2009-10-02 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1482?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12761654#action_12761654
 ] 

Bill Au commented on SOLR-1482:
---

You probably want to take a JVM thread dump (kill -3) while the JVM is hung to 
find out what's going on.

Is your webapp being reloaded?  You can check the appserver log file to see 
if that's happening.  One common way of running out of PermGen space is a 
classloader leak, which occurs when a webapp is reloaded.
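
On the thread-dump suggestion: kill -3 sends SIGQUIT, which makes the JVM
print every thread's stack to stdout (the JDK's jstack tool does the same
from outside the process). The same information is reachable in-process
through the standard Thread.getAllStackTraces() API; the ThreadDumper class
below is only an illustrative sketch:

```java
import java.util.Map;

// Illustrative: format all live threads' stacks, roughly like the
// dump a JVM emits on SIGQUIT (kill -3).
public class ThreadDumper {
    public static String dump() {
        StringBuilder sb = new StringBuilder();
        for (Map.Entry<Thread, StackTraceElement[]> e
                : Thread.getAllStackTraces().entrySet()) {
            sb.append('"').append(e.getKey().getName()).append("\"\n");
            for (StackTraceElement frame : e.getValue()) {
                sb.append("    at ").append(frame).append('\n');
            }
        }
        return sb.toString();
    }
}
```

For a hung server you still want the external kill -3 / jstack route, since
an in-process dumper cannot help when the threads that would invoke it are
themselves deadlocked.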

 Solr master and slave freeze after query
 

 Key: SOLR-1482
 URL: https://issues.apache.org/jira/browse/SOLR-1482
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4
 Environment: Nightly 9/28/09.
 14 individual instances per server, using JNDI.
 replicateAfter commit, 5 min interval polling.
 All caches are currently commented out, on both slave and master.
 Lots of ongoing commits - large chunks of data, each accompanied by a commit. 
 This is to guarantee that anything we think is now in Solr remains there in 
 case the server crashes.
Reporter: Artem Russakovskii
Priority: Critical

 We're having issues with the deployment of 2 master-slave setups.
 One of the master-slave setups is OK (so far) but on the other both the 
 master and the slave keep freezing, but only after I send a query to them. 
 And by freezing I mean indefinite hanging, with almost no output to log, no 
 errors, nothing. It's as if there's some sort of a deadlock. The hanging 
 servers need to be killed with -9, otherwise they keep hanging.
 The query I send queries all instances at the same time using the ?shards= 
 syntax.
 On the slave, the logs just stop - nothing shows up anymore after the query 
 is issued. On the master, they're a bit more descriptive. This information 
 seeps through very-very slowly, as you can see from the timestamps:
 {quote}
 SEVERE: java.lang.OutOfMemoryError: PermGen space
 Oct 1, 2009 2:16:00 PM org.apache.solr.common.SolrException log
 SEVERE: java.lang.OutOfMemoryError: PermGen space
 Oct 1, 2009 2:19:37 PM org.apache.catalina.connector.CoyoteAdapter service
 SEVERE: An exception or error occurred in the container during the request 
 processing
 java.lang.OutOfMemoryError: PermGen space
 Oct 1, 2009 2:19:37 PM org.apache.coyote.http11.Http11Processor process
 SEVERE: Error processing request
 java.lang.OutOfMemoryError: PermGen space
 Oct 1, 2009 2:19:39 PM org.apache.catalina.connector.CoyoteAdapter service
 SEVERE: An exception or error occurred in the container during the request 
 processing
 java.lang.OutOfMemoryError: PermGen space
 Exception in thread ContainerBackException in thread pool-29-threadOct 1, 
 2009 2:21:47 PM org.apache.catalina.connector.CoyoteAdapter service
 SEVERE: An exception or error occurred in the container during the request 
 processing
 java.lang.OutOfMemoryError: PermGen space
 Oct 1, 2009 2:21:47 PM 
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler process
 SEVERE: Error reading request, ignored
 java.lang.OutOfMemoryError: PermGen space
 Oct 1, 2009 2:21:47 PM 
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler process
 SEVERE: Error reading request, ignored
 java.lang.OutOfMemoryError: PermGen space
 -22 java.lang.OutOfMemoryError: PermGen space
 Oct 1, 2009 2:21:47 PM 
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler process
 SEVERE: Error reading request, ignored
 java.lang.OutOfMemoryError: PermGen space
 Exception in thread http-8080-42 Oct 1, 2009 2:21:47 PM 
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler process
 SEVERE: Error reading request, ignored
 java.lang.OutOfMemoryError: PermGen space
 Oct 1, 2009 2:21:47 PM 
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler process
 SEVERE: Error reading request, ignored
 java.lang.OutOfMemoryError: PermGen space
 Oct 1, 2009 2:21:47 PM 
 org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler process
 SEVERE: Error reading request, ignored
 java.lang.OutOfMemoryError: PermGen space
 Exception in thread http-8080-26 Exception in thread http-8080-32 
 Exception in thread http-8080-25 Exception in thread http-8080-22 
 Exception in thread http-8080-15 Exception in thread http-8080-45 
 Exception in thread http-8080-13 Exception in thread http-8080-48 
 Exception in thread http-8080-7 Exception in thread http-8080-38 
 Exception in thread http-8080-39 Exception in thread http-8080-28 
 Exception in thread http-8080-1 Exception in thread http-8080-2 Exception 
 in thread http-8080-12 Exception in thread http-8080-44 Exception in 
 thread http-8080-47 Exception in thread http-8080-29 Exception in thread 
 http-8080-33 Exception in thread http-8080-27 Exception in thread 
 http-8080-36 Exception in thread http-8080-113 Exception in thread 
 http-8080-112

[jira] Commented: (SOLR-1485) PayloadTermQuery support

2009-10-02 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12761687#action_12761687
 ] 

Bill Au commented on SOLR-1485:
---

Eric, have you started on this?  I recently wrote a QParserPlugin that supports 
PayloadTermQuery.  It is very bare-bones but could be a good starting point.  I 
can attach my code here to get things started.

 PayloadTermQuery support
 

 Key: SOLR-1485
 URL: https://issues.apache.org/jira/browse/SOLR-1485
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Erik Hatcher
 Fix For: 1.4

 Attachments: PayloadTermQueryPlugin.java


 Solr currently has no support for Lucene's PayloadTermQuery, yet it has 
 support for indexing payloads. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1485) PayloadTermQuery support

2009-10-02 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12761688#action_12761688
 ] 

Bill Au commented on SOLR-1485:
---

Never mind.  I just saw your update.  Your code looks good.

 PayloadTermQuery support
 

 Key: SOLR-1485
 URL: https://issues.apache.org/jira/browse/SOLR-1485
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Erik Hatcher
 Fix For: 1.4

 Attachments: PayloadTermQueryPlugin.java


 Solr currently has no support for Lucene's PayloadTermQuery, yet it has 
 support for indexing payloads. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-1485) PayloadTermQuery support

2009-10-02 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1485?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12761696#action_12761696
 ] 

Bill Au commented on SOLR-1485:
---

Eric, do you think we should support default field and default operator in the 
QParser used?

 PayloadTermQuery support
 

 Key: SOLR-1485
 URL: https://issues.apache.org/jira/browse/SOLR-1485
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Erik Hatcher
 Fix For: 1.4

 Attachments: PayloadTermQueryPlugin.java


 Solr currently has no support for Lucene's PayloadTermQuery, yet it has 
 support for indexing payloads. 

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: 8 for 1.4

2009-09-29 Thread Bill Au
I will try out the patch and test it and report back here.  Stay tuned...

Bill

On Tue, Sep 29, 2009 at 6:36 AM, Grant Ingersoll gsing...@apache.orgwrote:


 On Sep 28, 2009, at 10:09 PM, Bill Au wrote:

  Dropping it completely from 1.4 seems a little too drastic to me, since
 the
 problem is with IE8 only.


 Yeah, but the last patch had doubts about IE7, too.  Bill, is there any
 chance you could try out the patch and test it and apply it?

 I'd like to see it released too, as then it will attract more attention,
 otherwise, we can remove it from just this release.


  Bill

 On Mon, Sep 28, 2009 at 9:47 PM, Ryan McKinley ryan...@gmail.com wrote:

  can we leave it in svn, but drop it from the release?  logistically,
 what is the best way to do this?  Make a branch now, remove it from
 /trunk, after release copy it from the branch back into /trunk?

 That seems like the best way to kick the can down the road.  I agree
 an off-the-shelf apache license jquery client is great.

 ryan


 On Mon, Sep 28, 2009 at 9:13 PM, Grant Ingersoll gsing...@apache.org
 wrote:

 Moving to GPL doesn't seem like a good solution to me, but I don't know

 what

 else to propose.  Why don't we just hold it from this release, but keep

 it

 in trunk and encourage the Drupal guys and others to submit their

 changes?

 Perhaps by then Matthias or you or someone else will have stepped up.

 On Sep 28, 2009, at 7:27 PM, Ryan McKinley wrote:

  I just discussed this off-line with Matthias.  It does not look like
 he has the time to give this much attention now.  (nor do I)

 We agreed that the best steps forward are to:
 1. Support the Drupal guys GPL port
 2. Archive the solrjs code to solrstuff.org
 3. Yank solrjs from apache svn (and 1.4 release)
 4. Add links to the drupal code (GPL) and the solrjs archive (Apache)

 Does this sound reasonable to everybody?

 ryan


 On Mon, Sep 28, 2009 at 1:27 PM, Grant Ingersoll gsing...@apache.org
 wrote:


 Forwarded with permission from Peter Wolanin on a private thread.

 Begin forwarded message:

  From: Peter Wolanin peter.wola...@acquia.com
 Date: September 26, 2009 9:43:23 AM EDT
 To: Grant Ingersoll gsing...@apache.org

 Subject: Re: 8 for 1.4

 I talked to the guys reworking the JS library for Drupal at Drupalcon
 - they are also having to fork potentially around license as much as
 anything else, since they'd like to distribute via drupal.org, which
 means they were hoping to get the original author to re-license the
 code to them as GPL.

 -Peter

 On Fri, Sep 25, 2009 at 4:45 PM, Grant Ingersoll 
 gsing...@apache.org


  wrote:


 Argh, this was meant for solr-dev.

 Begin forwarded message:

  From: Grant Ingersoll gsing...@apache.org
 Date: September 25, 2009 1:34:32 PM EDT
 To: solr-u...@lucene.apache.org
 Subject: 8 for 1.4
 Reply-To: solr-u...@lucene.apache.org

 Y'all,

 We're down to 8 open issues:




 https://issues.apache.org/jira/secure/BrowseVersion.jspa?id=12310230&versionId=12313351&showOpenIssuesOnly=true


 2 are packaging related, one is dependent on the official 2.9

 release

 (so
 should be taken care of today or tomorrow I suspect) and then we

 have

 a
 few
 others.

 The only two somewhat major ones are S-1458, S-1294 (more on this
 in

 a

 mo') and S-1449.

 On S-1294, the SolrJS patch, I yet again have concerns about even
 including this, given the lack of activity (from Matthias, the
 original
 author and others) and the fact that some in the Drupal community

 have

 already forked this to fix the various bugs in it instead of just
 submitting
 patches.  While I really like the idea of this library (jQuery is
 awesome),
 I have yet to see interest in the community to maintain it (unless

 you

 count
 someone forking it and fixing the bugs in the fork as maintenance)

 and

 I'll
 be upfront in admitting I have neither the time nor the patience to
 debug
 Javascript across the gazillions of browsers out there (I don't
 even
 have IE
 on my machine unless you count firing up a VM w/ XP on it) in the
 wild.
 Given what I know of most of the other committers here, I suspect
 that
 is
 true for others too.  At a minimum, I think S-1294 should be pushed

 to

 1.5.
 Next up, I think we consider pulling SolrJS from the release, but
 keeping
 it in trunk and officially releasing it with either 1.5 or 1.4.1,
 assuming
 its gotten some love in the meantime.  If by then it has no love, I
 vote
 we
 remove it and let the fork maintain it and point people there.

 -Grant







 --
 Peter M. Wolanin, Ph.D.
 Momentum Specialist,  Acquia. Inc.
 peter.wola...@acquia.com


 --
 Grant Ingersoll
 http://www.lucidimagination.com/

 Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids)

 using

 Solr/Lucene:
 http://www.lucidimagination.com/search



 --
 Grant Ingersoll
 http://www.lucidimagination.com/

 Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids) using
 Solr/Lucene:
 http

[jira] Commented: (SOLR-1294) SolrJS/Javascript client fails in IE8!

2009-09-29 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1294?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12760637#action_12760637
 ] 

Bill Au commented on SOLR-1294:
---

I just tried the patch from 8/14 with the reuters example in trunk but it does 
not work for IE8.

Eric, http://www.newswise.com/search does work for both IE7 and IE8.  Do you 
think you can come up with a patch this week?  If not, I think we should 
postpone this bug to 1.5.

 SolrJS/Javascript client fails in IE8!
 --

 Key: SOLR-1294
 URL: https://issues.apache.org/jira/browse/SOLR-1294
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.4
Reporter: Eric Pugh
Assignee: Ryan McKinley
 Fix For: 1.4

 Attachments: SOLR-1294-IE8.patch, SOLR-1294.patch, 
 solrjs-ie8-html-syntax-error.patch


 SolrJS seems to fail with 'jQuery.solrjs' is null or not an object errors 
 under IE8.  I am continuing to test if this occurs in IE 6 and 7 as well.  
 This happens on both the Sample online site at 
 http://solrjs.solrstuff.org/test/reuters/ as well as the 
 /trunk/contrib/javascript library.   Seems to be a show stopper from the 
 standpoint of really using this library!
 I have posted a screenshot of the error at 
 http://img.skitch.com/20090717-jejm71u6ghf2dpn3mwrkarigwm.png
 The error is just a whole bunch of repeated messages in the vein of:
 Message: 'jQuery.solrjs' is null or not an object
 Line: 24
 Char: 1
 Code: 0
 URI: file:///C:/dev/projects/lib/solr/contrib/javascript/src/core/QueryItem.js
 Message: 'jQuery.solrjs' is null or not an object
 Line: 37
 Char: 1
 Code: 0
 URI: file:///C:/dev/projects/lib/solr/contrib/javascript/src/core/Manager.js
 Message: 'jQuery.solrjs' is null or not an object
 Line: 24
 Char: 1
 Code: 0
 URI: 
 file:///C:/dev/projects/lib/solr/contrib/javascript/src/core/AbstractSelectionView.js
 Message: 'jQuery.solrjs' is null or not an object
 Line: 27
 Char: 1
 Code: 0
 URI: 
 file:///C:/dev/projects/lib/solr/contrib/javascript/src/core/AbstractWidget.js

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: 8 for 1.4

2009-09-29 Thread Bill Au
I just did some testing with the reuters example in the trunk.

The code in the trunk now does not work in either IE7 or IE8.  I tried the
patch from 8/14 but that does not work for IE8 (or IE7).  However, Eric
Pugh has indicated in Jira that he has code that works for both IE7 and
IE8.  I have verified that his site http://www.newswise.com/search does work
for both.  I have asked Eric in the Jira if he can come up with a patch this
week.  I think that's our best bet, but I don't think this bug should block
the release of 1.4.  So if there is no patch this week, I think SolrJS
should be removed from 1.4.

Bill

On Tue, Sep 29, 2009 at 10:33 AM, Bill Au bill.w...@gmail.com wrote:

 I will try out the patch and test it and report back here.  Stay tuned...

 Bill


 On Tue, Sep 29, 2009 at 6:36 AM, Grant Ingersoll gsing...@apache.orgwrote:


 On Sep 28, 2009, at 10:09 PM, Bill Au wrote:

  Dropping it completely from 1.4 seems a little too drastic to me, since
 the
 problem is with IE8 only.


 Yeah, but the last patch had doubts about IE7, too.  Bill, is there any
 chance you could try out the patch and test it and apply it?

 I'd like to see it released too, as then it will attract more attention,
 otherwise, we can remove it from just this release.


  Bill

 On Mon, Sep 28, 2009 at 9:47 PM, Ryan McKinley ryan...@gmail.com
 wrote:

  can we leave it in svn, but drop it from the release?  logistically,
 what is the best way to do this?  Make a branch now, remove it from
 /trunk, after release copy it from the branch back into /trunk?

 That seems like the best way to kick the can down the road.  I agree
 an off-the-shelf apache license jquery client is great.

 ryan


 On Mon, Sep 28, 2009 at 9:13 PM, Grant Ingersoll gsing...@apache.org
 wrote:

 Moving to GPL doesn't seem like a good solution to me, but I don't know

 what

 else to propose.  Why don't we just hold it from this release, but keep

 it

 in trunk and encourage the Drupal guys and others to submit their

 changes?

 Perhaps by then Matthias or you or someone else will have stepped up.

 On Sep 28, 2009, at 7:27 PM, Ryan McKinley wrote:

  I just discussed this off-line with Matthias.  It does not look like
 he has the time to give this much attention now.  (nor do I)

 We agreed that the best steps forward are to:
 1. Support the Drupal guys GPL port
 2. Archive the solrjs code to solrstuff.org
 3. Yank solrjs from apache svn (and 1.4 release)
 4. Add links to the drupal code (GPL) and the solrjs archive (Apache)

 Does this sound reasonable to everybody?

 ryan


 On Mon, Sep 28, 2009 at 1:27 PM, Grant Ingersoll gsing...@apache.org
 
 wrote:


 Forwarded with permission from Peter Wolanin on a private thread.

 Begin forwarded message:

  From: Peter Wolanin peter.wola...@acquia.com
 Date: September 26, 2009 9:43:23 AM EDT
 To: Grant Ingersoll gsing...@apache.org

 Subject: Re: 8 for 1.4

 I talked to the guys reworking the JS library for Drupal at
 Drupalcon
 - they are also having to fork potentially around license as much as
 anything else, since they'd like to distribute via drupal.org,
 which
 means they were hoping to get the original author to re-license the
 code to them as GPL.

 -Peter

 On Fri, Sep 25, 2009 at 4:45 PM, Grant Ingersoll 
 gsing...@apache.org


  wrote:


 Argh, this was meant for solr-dev.

 Begin forwarded message:

  From: Grant Ingersoll gsing...@apache.org
 Date: September 25, 2009 1:34:32 PM EDT
 To: solr-u...@lucene.apache.org
 Subject: 8 for 1.4
 Reply-To: solr-u...@lucene.apache.org

 Y'all,

 We're down to 8 open issues:




 https://issues.apache.org/jira/secure/BrowseVersion.jspa?id=12310230&versionId=12313351&showOpenIssuesOnly=true


 2 are packaging related, one is dependent on the official 2.9

 release

  (so
 should be taken care of today or tomorrow I suspect) and then we

 have

  a
 few
 others.

 The only two somewhat major ones are S-1458, S-1294 (more on this
 in

 a

  mo') and S-1449.

 On S-1294, the SolrJS patch, I yet again have concerns about even
 including this, given the lack of activity (from Matthias, the
 original
 author and others) and the fact that some in the Drupal community

 have

  already forked this to fix the various bugs in it instead of just
 submitting
 patches.  While I really like the idea of this library (jQuery is
 awesome),
 I have yet to see interest in the community to maintain it (unless

 you

  count
 someone forking it and fixing the bugs in the fork as maintenance)

 and

  I'll
 be upfront in admitting I have neither the time nor the patience
 to
 debug
 Javascript across the gazillions of browsers out there (I don't
 even
 have IE
 on my machine unless you count firing up a VM w/ XP on it) in the
 wild.
 Given what I know of most of the other committers here, I suspect
 that
 is
 true for others too.  At a minimum, I think S-1294 should be
 pushed

 to

  1.5.
 Next up, I think we consider pulling SolrJS from the release, but
 keeping
 it in trunk

Re: 8 for 1.4

2009-09-28 Thread Bill Au
Sounds very reasonable to me.  I hate to see the release of 1.4 being held
up by Javascript browser compatibility issues.
Bill

On Mon, Sep 28, 2009 at 7:27 PM, Ryan McKinley ryan...@gmail.com wrote:

 I just discussed this off-line with Matthias.  It does not look like
 he has the time to give this much attention now.  (nor do I)

 We agreed that the best steps forward are to:
 1. Support the Drupal guys GPL port
 2. Archive the solrjs code to solrstuff.org
 3. Yank solrjs from apache svn (and 1.4 release)
 4. Add links to the drupal code (GPL) and the solrjs archive (Apache)

 Does this sound reasonable to everybody?

 ryan


 On Mon, Sep 28, 2009 at 1:27 PM, Grant Ingersoll gsing...@apache.org
 wrote:
  Forwarded with permission from Peter Wolanin on a private thread.
 
  Begin forwarded message:
 
  From: Peter Wolanin peter.wola...@acquia.com
  Date: September 26, 2009 9:43:23 AM EDT
  To: Grant Ingersoll gsing...@apache.org
 
  Subject: Re: 8 for 1.4
 
  I talked to the guys reworking the JS library for Drupal at Drupalcon
  - they are also having to fork potentially around license as much as
  anything else, since they'd like to distribute via drupal.org, which
  means they were hoping to get the original author to re-license the
  code to them as GPL.
 
  -Peter
 
  On Fri, Sep 25, 2009 at 4:45 PM, Grant Ingersoll gsing...@apache.org
  wrote:
 
  Argh, this was meant for solr-dev.
 
  Begin forwarded message:
 
  From: Grant Ingersoll gsing...@apache.org
  Date: September 25, 2009 1:34:32 PM EDT
  To: solr-u...@lucene.apache.org
  Subject: 8 for 1.4
  Reply-To: solr-u...@lucene.apache.org
 
  Y'all,
 
  We're down to 8 open issues:
 
 
  https://issues.apache.org/jira/secure/BrowseVersion.jspa?id=12310230&versionId=12313351&showOpenIssuesOnly=true
 
  2 are packaging related, one is dependent on the official 2.9 release (so
  should be taken care of today or tomorrow, I suspect), and then we have a
  few others.

  The only two somewhat major ones are S-1458, S-1294 (more on this in a
  mo') and S-1449.

  On S-1294, the SolrJS patch, I yet again have concerns about even
  including this, given the lack of activity (from Matthias, the original
  author, and others) and the fact that some in the Drupal community have
  already forked this to fix the various bugs in it instead of just
  submitting patches.  While I really like the idea of this library (jQuery
  is awesome), I have yet to see interest in the community to maintain it
  (unless you count someone forking it and fixing the bugs in the fork as
  maintenance), and I'll be upfront in admitting I have neither the time
  nor the patience to debug Javascript across the gazillions of browsers
  out there in the wild (I don't even have IE on my machine unless you
  count firing up a VM w/ XP on it).  Given what I know of most of the
  other committers here, I suspect that is true for others too.  At a
  minimum, I think S-1294 should be pushed to 1.5.  Next up, I think we
  should consider pulling SolrJS from the release, but keeping it in trunk
  and officially releasing it with either 1.5 or 1.4.1, assuming it's
  gotten some love in the meantime.  If by then it has no love, I vote we
  remove it and let the fork maintain it and point people there.
 
  -Grant
 
 
 
 
 
 
  --
  Peter M. Wolanin, Ph.D.
  Momentum Specialist,  Acquia. Inc.
  peter.wola...@acquia.com
 
  --
  Grant Ingersoll
  http://www.lucidimagination.com/
 
  Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids) using
  Solr/Lucene:
  http://www.lucidimagination.com/search
 
 



Re: 8 for 1.4

2009-09-28 Thread Bill Au
Dropping it completely from 1.4 seems a little too drastic to me, since the
problem is with IE8 only.
Bill

On Mon, Sep 28, 2009 at 9:47 PM, Ryan McKinley ryan...@gmail.com wrote:

 can we leave it in svn, but drop it from the release?  logistically,
 what is the best way to do this?  Make a branch now, remove it from
 /trunk, after release copy it from the branch back into /trunk?

 That seems like the best way to kick the can down the road.  I agree
 an off-the-shelf Apache-licensed jQuery client is great.

 ryan


 On Mon, Sep 28, 2009 at 9:13 PM, Grant Ingersoll gsing...@apache.org
 wrote:
  Moving to GPL doesn't seem like a good solution to me, but I don't know
  what else to propose.  Why don't we just hold it from this release, but
  keep it in trunk and encourage the Drupal guys and others to submit their
  changes?  Perhaps by then Matthias or you or someone else will have
  stepped up.
 
  On Sep 28, 2009, at 7:27 PM, Ryan McKinley wrote:
 
  I just discussed this off-line with Matthias.  It does not look like
  he has the time to give this much attention now.  (nor do I)
 
  We agreed that the best steps forward are to:
  1. Support the Drupal guys' GPL port
  2. Archive the solrjs code to solrstuff.org
  3. Yank solrjs from apache svn (and 1.4 release)
  4. Add links to the drupal code (GPL) and the solrjs archive (Apache)
 
  Does this sound reasonable to everybody?
 
  ryan
 
 
  On Mon, Sep 28, 2009 at 1:27 PM, Grant Ingersoll gsing...@apache.org
  wrote:
 
  Forwarded with permission from Peter Wolanin on a private thread.
 
  Begin forwarded message:
 
  From: Peter Wolanin peter.wola...@acquia.com
  Date: September 26, 2009 9:43:23 AM EDT
  To: Grant Ingersoll gsing...@apache.org
 
  Subject: Re: 8 for 1.4
 
  I talked to the guys reworking the JS library for Drupal at Drupalcon
  - they are also having to fork, potentially around licensing as much as
  anything else, since they'd like to distribute via drupal.org, which
  means they were hoping to get the original author to re-license the
  code to them as GPL.
 
  -Peter
 
  On Fri, Sep 25, 2009 at 4:45 PM, Grant Ingersoll gsing...@apache.org
 
  wrote:
 
  Argh, this was meant for solr-dev.
 
  Begin forwarded message:
 
  From: Grant Ingersoll gsing...@apache.org
  Date: September 25, 2009 1:34:32 PM EDT
  To: solr-u...@lucene.apache.org
  Subject: 8 for 1.4
  Reply-To: solr-u...@lucene.apache.org
 
  Y'all,
 
  We're down to 8 open issues:
 
 
 
 https://issues.apache.org/jira/secure/BrowseVersion.jspa?id=12310230&versionId=12313351&showOpenIssuesOnly=true
 
  2 are packaging related, one is dependent on the official 2.9 release (so
  should be taken care of today or tomorrow, I suspect), and then we have a
  few others.

  On S-1294, the SolrJS patch, I yet again have concerns about even
  including this, given the lack of activity (from Matthias, the original
  author, and others) and the fact that some in the Drupal community have
  already forked this to fix the various bugs in it instead of just
  submitting patches.  While I really like the idea of this library (jQuery
  is awesome), I have yet to see interest in the community to maintain it
  (unless you count someone forking it and fixing the bugs in the fork as
  maintenance), and I'll be upfront in admitting I have neither the time
  nor the patience to debug Javascript across the gazillions of browsers
  out there in the wild (I don't even have IE on my machine unless you
  count firing up a VM w/ XP on it).  Given what I know of most of the
  other committers here, I suspect that is true for others too.  At a
  minimum, I think S-1294 should be pushed to 1.5.  Next up, I think we
  should consider pulling SolrJS from the release, but keeping it in trunk
  and officially releasing it with either 1.5 or 1.4.1, assuming it's
  gotten some love in the meantime.  If by then it has no love, I vote we
  remove it and let the fork maintain it and point people there.
 
  -Grant
 
 
 
 
 
 
 
 
 
  --
  Grant Ingersoll
  http://www.lucidimagination.com/
 
  Search the Lucene ecosystem (Lucene/Solr/Nutch/Mahout/Tika/Droids) using
  Solr/Lucene:
  http://www.lucidimagination.com/search
 
 



[jira] Commented: (SOLR-1203) We should add an example of setting the update.processor for a given RequestHandler

2009-08-29 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-1203?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12749189#action_12749189
 ] 

Bill Au commented on SOLR-1203:
---

I had actually come across an example in the Wiki when I was looking for 
information on update processor:

http://wiki.apache.org/solr/Deduplication?highlight=%28updateprocessor%29#head-177c1b1e490e1192f41d9ab0e037b05e1567a35d



 We should add an example of setting the update.processor for a given 
 RequestHandler
 ---

 Key: SOLR-1203
 URL: https://issues.apache.org/jira/browse/SOLR-1203
 Project: Solr
  Issue Type: Improvement
Reporter: Mark Miller
Assignee: Mark Miller
Priority: Minor
 Fix For: 1.4

 Attachments: SOLR-1203.patch


 a commented out example that points to the commented out example update chain 
 or just as good: a comment above the current update chain example explaining 
 how to attach it to a handler.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Lucene RC2

2009-08-29 Thread Bill Au
Yonik, are you in the process of trying it out or upgrading Solr, or
both?

Bill

On Fri, Aug 28, 2009 at 3:32 PM, Yonik Seeley yo...@lucidimagination.com wrote:

 On Fri, Aug 28, 2009 at 2:54 PM, Grant Ingersollgsing...@apache.org
 wrote:
  Anyone tried out the new Lucene RC2 in Solr yet?  Should we upgrade to
 it?

 I'm in the process of doing so.

 -Yonik
 http://www.lucidimagination.com



Re: remove replication scripts

2009-06-12 Thread Bill Au
+1 for removing from example since they only work on certain OSes.  That
will improve the out-of-the-box experience.

I would like to keep them in src to keep this option open to those running
on OSes that are supported since the scripts have been proven to work well.

Bill

On Fri, Jun 12, 2009 at 11:07 AM, Yonik Seeley
yo...@lucidimagination.com wrote:

 Feels like we should remove the replication scripts from example given
 that we have a new way of doing replication (we could keep in src...
 no biggie).

 Should we keep any of the scripts in bin?  not sure... but something
 like backup should be changed to use the new snapshooter since that
 would work for windows also.  Feels like commit and friends would be
 better to just add to the admin at some point anyway - people who
 really want a command line option can use curl directly.  I'm leaning
 toward removing them all from example at this point... thoughts?

 -Yonik
 http://www.lucidimagination.com
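Yonik's point that people can use curl directly boils down to one HTTP POST against the XML update handler. A sketch, using the example server's default URL (an assumption) and printing the command rather than running it, since no Solr server is assumed to be up:

```shell
# The bin/commit script is essentially one HTTP POST; build the command
# here and print it instead of executing it, since no Solr server is
# assumed running.  URL is the example server's default; adjust as needed.
solr_url="http://localhost:8983/solr/update"
commit_cmd="curl $solr_url -H 'Content-Type: text/xml' --data-binary '<commit/>'"
echo "$commit_cmd"
```

The same pattern covers optimize and delete by swapping the XML body.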



Re: Welcome new Solr committers Mark Miller and Noble Paul

2009-05-01 Thread Bill Au
Mark and Noble, welcome aboard.

Bill

On Fri, May 1, 2009 at 6:48 AM, Mark Miller markrmil...@gmail.com wrote:

 Thanks all for the welcome.

 Little intro:

 I grew up in VT and now I live in Greenwich, CT, about a stone's throw from
 Port Chester, NY. Or as my Grandmother would say, just a spell away. I've been
 a Lucene committer for a while, and it's great to also be involved with the
 Solr project. I work for a commercial Lucene/Solr-stack company called Lucid
 Imagination, and frankly, it's a blast to get to do so much involving my
 favorite open source projects. Looking forward to many more years of
 Lucene/Solr development.

 Congrats Noble!

 --
 - Mark

 http://www.lucidimagination.com






[jira] Updated: (SOLR-346) need to improve snapinstaller to ignore non-snapshots in data directory

2009-03-04 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-346:
-

Attachment: solr-346-2.patch

attaching new patch to fix snapinstaller

 need to improve snapinstaller to ignore non-snapshots in data directory
 ---

 Key: SOLR-346
 URL: https://issues.apache.org/jira/browse/SOLR-346
 Project: Solr
  Issue Type: Improvement
  Components: replication (scripts)
Affects Versions: 1.2, 1.3
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor
 Fix For: 1.4

 Attachments: solr-346-2.patch, solr-346.patch


 http://www.mail-archive.com/solr-u...@lucene.apache.org/msg05734.html
  latest snapshot /opt/solr/data/temp-snapshot.20070816120113 already
  installed
 A directory in the Solr data directory is causing snapinstaller to fail.  
 Snapinstaller should be improved to ignore as much non-snapshot content as possible. 
  It can use a regular expression to look for snapshot.dd where d 
 is a digit.

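The regular-expression approach the issue suggests can be sketched in shell; the directory names and /tmp path are illustrative demo values, and the committed patch used a perl regular expression rather than this grep -E form:

```shell
# A loose substring match treats "temp-snapshot.20070816120113" as a
# snapshot.  Anchoring on names that are exactly "snapshot." followed by
# digits ignores such non-snapshot directories.  Demo paths only:
data_dir=/tmp/solr346_demo
mkdir -p "$data_dir/temp-snapshot.20070816120113" \
         "$data_dir/snapshot.20070816115500"
ls "$data_dir" | grep -E '^snapshot\.[0-9]+$' | sort -r | head -1
# -> snapshot.20070816115500  (the temp- directory is skipped)
```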



[jira] Resolved: (SOLR-346) need to improve snapinstaller to ignore non-snapshots in data directory

2009-03-04 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au resolved SOLR-346.
--

Resolution: Fixed

I went ahead and committed the code change:

Sending        snapinstaller
Transmitting file data .
Committed revision 750048.


 need to improve snapinstaller to ignore non-snapshots in data directory
 ---

 Key: SOLR-346
 URL: https://issues.apache.org/jira/browse/SOLR-346
 Project: Solr
  Issue Type: Improvement
  Components: replication (scripts)
Affects Versions: 1.2, 1.3
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor
 Fix For: 1.4

 Attachments: solr-346-2.patch, solr-346.patch


 http://www.mail-archive.com/solr-u...@lucene.apache.org/msg05734.html
  latest snapshot /opt/solr/data/temp-snapshot.20070816120113 already
  installed
 A directory in the Solr data directory is causing snapinstaller to fail.  
 Snapinstaller should be improved to ignore as much non-snapshot content as possible. 
  It can use a regular expression to look for snapshot.dd where d 
 is a digit.




[jira] Commented: (SOLR-990) Add pid file to snapinstaller to skip script overruns, and recover from faliure

2009-01-27 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-990?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12667726#action_12667726
 ] 

Bill Au commented on SOLR-990:
--

The pid file should be moved under the logs directory.  I don't think it is a 
good idea to keep the pid file under /tmp.  

 Add pid file to snapinstaller to skip script overruns, and recover from 
 faliure
 ---

 Key: SOLR-990
 URL: https://issues.apache.org/jira/browse/SOLR-990
 Project: Solr
  Issue Type: Improvement
  Components: replication (scripts)
Reporter: Dan Rosher
Priority: Minor
 Attachments: SOLR-990.patch, SOLR-990.patch, SOLR-990.patch


 The pid file will allow snapinstaller to be run as fast as possible without 
 overruns.  It will also recover from a previous failed run when the older 
 snapinstaller process is no longer running. 
 Avoiding overruns means that snapinstaller can be run as fast as possible, 
 but without suffering from the performance issue described here:
 http://wiki.apache.org/solr/SolrPerformanceFactors#head-fc7f22035c493431d58c5404ab22aef0ee1b9909
  
 This means that one can do the following
 */1 * * * * /bin/snappuller/bin/snapinstaller
 Even with a 'properly tuned' setup, there can be times where snapinstaller 
 can suffer from overruns due to a lack of resources, or an unoptimized index 
 using more resources etc.
 Currently the pid file lives in /tmp ... perhaps it should be in the logs dir?

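The guard described above can be sketched as a minimal pidfile check; this is an illustrative sketch with demo paths, not the actual SOLR-990 patch:

```shell
# Minimal pidfile guard.  A live pid in the file means a run is still in
# progress; a stale pid left by a crashed run fails the kill -0 probe and
# is ignored, which is the "recover from failure" behavior the issue wants.
pidfile=/tmp/snapinstaller_demo.pid
if [ -f "$pidfile" ] && kill -0 "$(cat "$pidfile")" 2>/dev/null; then
    echo "overrun: previous snapinstaller still running, skipping"
else
    echo $$ > "$pidfile"          # record our pid for the next cron tick
    # ... pull and install the snapshot here ...
    rm -f "$pidfile"              # a clean exit removes the guard
    echo "install done"
fi
```

With this in place the cron entry can fire every minute without overlapping installs.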



[jira] Closed: (SOLR-897) abo/abc/backupcleaner/snapcleaner, if removinmg by number of snapshots/backups, gets Argument list too long

2008-12-18 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au closed SOLR-897.


Resolution: Fixed

Patch committed:

Sending        CHANGES.txt
Sending        src/scripts/abc
Sending        src/scripts/abo
Sending        src/scripts/backupcleaner
Sending        src/scripts/snapcleaner
Transmitting file data .
Committed revision 727722.


 abo/abc/backupcleaner/snapcleaner, if removinmg by number of 
 snapshots/backups, gets Argument list too long
 ---

 Key: SOLR-897
 URL: https://issues.apache.org/jira/browse/SOLR-897
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.3
Reporter: Dan Rosher
Assignee: Bill Au
 Fix For: 1.4

 Attachments: solr-897-2.patch, SOLR-897.patch


 ls -cd ${data_dir}/snapshot.* returns Argument list too long, use find instead

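The committed fix replaces the glob with find; the idea can be sketched as follows, with demo directory names rather than the real script contents:

```shell
# The glob expands every snapshot name onto ls's command line, which can
# exceed the kernel's argument-size limit when many snapshots accumulate.
# find receives only the directory as an argument, so it scales.
data_dir=/tmp/solr897_demo
mkdir -p "$data_dir/snapshot.20081216120000" "$data_dir/snapshot.20081216120100"
# safe equivalent of: ls -cd ${data_dir}/snapshot.*
find "$data_dir" -maxdepth 1 -type d -name 'snapshot.*' | sort -r | head -1
```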



[jira] Updated: (SOLR-897) abo/abc/backupcleaner/snapcleaner, if removinmg by number of snapshots/backups, gets Argument list too long

2008-12-16 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-897:
-

Summary: abo/abc/backupcleaner/snapcleaner, if removinmg by number of 
snapshots/backups, gets Argument list too long  (was: snapcleaner, if removinmg 
by number of snapshots, gets Argument list too long)

 abo/abc/backupcleaner/snapcleaner, if removinmg by number of 
 snapshots/backups, gets Argument list too long
 ---

 Key: SOLR-897
 URL: https://issues.apache.org/jira/browse/SOLR-897
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.3
Reporter: Dan Rosher
Assignee: Bill Au
 Fix For: 1.4

 Attachments: SOLR-897.patch


 ls -cd ${data_dir}/snapshot.* returns Argument list too long, use find instead




[jira] Commented: (SOLR-897) snapcleaner, if removinmg by number of snapshots, gets Argument list too long

2008-12-16 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12657071#action_12657071
 ] 

Bill Au commented on SOLR-897:
--

The following scripts also have the same problem: abc, abo, backupcleaner

 snapcleaner, if removinmg by number of snapshots, gets Argument list too long
 -

 Key: SOLR-897
 URL: https://issues.apache.org/jira/browse/SOLR-897
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.3
Reporter: Dan Rosher
Assignee: Bill Au
 Fix For: 1.4

 Attachments: SOLR-897.patch


 ls -cd ${data_dir}/snapshot.* returns Argument list too long, use find instead




[jira] Updated: (SOLR-897) abo/abc/backupcleaner/snapcleaner, if removinmg by number of snapshots/backups, gets Argument list too long

2008-12-16 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-897:
-

Attachment: solr-897.patch2

Expanded the patch to include fixes for all four scripts with the same problem.

 abo/abc/backupcleaner/snapcleaner, if removinmg by number of 
 snapshots/backups, gets Argument list too long
 ---

 Key: SOLR-897
 URL: https://issues.apache.org/jira/browse/SOLR-897
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.3
Reporter: Dan Rosher
Assignee: Bill Au
 Fix For: 1.4

 Attachments: SOLR-897.patch


 ls -cd ${data_dir}/snapshot.* returns Argument list too long, use find instead




[jira] Updated: (SOLR-897) abo/abc/backupcleaner/snapcleaner, if removinmg by number of snapshots/backups, gets Argument list too long

2008-12-16 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-897:
-

Attachment: (was: solr-897.patch2)

 abo/abc/backupcleaner/snapcleaner, if removinmg by number of 
 snapshots/backups, gets Argument list too long
 ---

 Key: SOLR-897
 URL: https://issues.apache.org/jira/browse/SOLR-897
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.3
Reporter: Dan Rosher
Assignee: Bill Au
 Fix For: 1.4

 Attachments: SOLR-897.patch


 ls -cd ${data_dir}/snapshot.* returns Argument list too long, use find instead




[jira] Updated: (SOLR-897) abo/abc/backupcleaner/snapcleaner, if removinmg by number of snapshots/backups, gets Argument list too long

2008-12-16 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-897:
-

Attachment: solr-897-2.patch

Reattaching the expanded patch with fixes for all four scripts with the same problem.

 abo/abc/backupcleaner/snapcleaner, if removinmg by number of 
 snapshots/backups, gets Argument list too long
 ---

 Key: SOLR-897
 URL: https://issues.apache.org/jira/browse/SOLR-897
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.3
Reporter: Dan Rosher
Assignee: Bill Au
 Fix For: 1.4

 Attachments: SOLR-897.patch


 ls -cd ${data_dir}/snapshot.* returns Argument list too long, use find instead




[jira] Updated: (SOLR-897) abo/abc/backupcleaner/snapcleaner, if removinmg by number of snapshots/backups, gets Argument list too long

2008-12-16 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-897:
-

Attachment: (was: solr-897-2.patch)

 abo/abc/backupcleaner/snapcleaner, if removinmg by number of 
 snapshots/backups, gets Argument list too long
 ---

 Key: SOLR-897
 URL: https://issues.apache.org/jira/browse/SOLR-897
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.3
Reporter: Dan Rosher
Assignee: Bill Au
 Fix For: 1.4

 Attachments: SOLR-897.patch


 ls -cd ${data_dir}/snapshot.* returns Argument list too long, use find instead




[jira] Updated: (SOLR-897) abo/abc/backupcleaner/snapcleaner, if removinmg by number of snapshots/backups, gets Argument list too long

2008-12-16 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-897:
-

Attachment: solr-897-2.patch

Reattaching the expanded patch for all four scripts with the same problem, this 
time with the correct license.

 abo/abc/backupcleaner/snapcleaner, if removinmg by number of 
 snapshots/backups, gets Argument list too long
 ---

 Key: SOLR-897
 URL: https://issues.apache.org/jira/browse/SOLR-897
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.3
Reporter: Dan Rosher
Assignee: Bill Au
 Fix For: 1.4

 Attachments: solr-897-2.patch, SOLR-897.patch


 ls -cd ${data_dir}/snapshot.* returns Argument list too long, use find instead




[jira] Commented: (SOLR-897) snapcleaner, if removinmg by number of snapshots, gets Argument list too long

2008-12-14 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-897?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12656542#action_12656542
 ] 

Bill Au commented on SOLR-897:
--

I am going to see if other replication-related scripts have this problem and 
fix them once and for all.

 snapcleaner, if removinmg by number of snapshots, gets Argument list too long
 -

 Key: SOLR-897
 URL: https://issues.apache.org/jira/browse/SOLR-897
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.3
Reporter: Dan Rosher
Assignee: Bill Au
 Fix For: 1.4

 Attachments: SOLR-897.patch


 ls -cd ${data_dir}/snapshot.* returns Argument list too long, use find instead




[jira] Assigned: (SOLR-897) snapcleaner, if removinmg by number of snapshots, gets Argument list too long

2008-12-14 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-897?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au reassigned SOLR-897:


Assignee: Bill Au

 snapcleaner, if removinmg by number of snapshots, gets Argument list too long
 -

 Key: SOLR-897
 URL: https://issues.apache.org/jira/browse/SOLR-897
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.3
Reporter: Dan Rosher
Assignee: Bill Au
 Fix For: 1.4

 Attachments: SOLR-897.patch


 ls -cd ${data_dir}/snapshot.* returns Argument list too long, use find instead




Re: [VOTE] LOGO

2008-12-11 Thread Bill Au
https://issues.apache.org/jira/secure/attachment/12394070/sslogo-solr-finder2.0.png
https://issues.apache.org/jira/secure/attachment/12394264/apache_solr_a_red.jpg
https://issues.apache.org/jira/secure/attachment/12394266/apache_solr_b_red.jpg
https://issues.apache.org/jira/secure/attachment/12394268/apache_solr_c_red.jpg

Bill


On Thu, Dec 11, 2008 at 12:01 AM, Ryan McKinley [EMAIL PROTECTED] wrote:


 On Dec 10, 2008, at 10:12 PM, Otis Gospodnetic wrote:



 https://issues.apache.org/jira/secure/attachment/12394475/solr2_maho-vote.png
   is this one ok?


 Seems totally fine by me.




[jira] Resolved: (SOLR-346) need to improve snapinstaller to ignore non-snapshots in data directory

2008-11-20 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au resolved SOLR-346.
--

Resolution: Fixed

Patch committed:

Sending        CHANGES.txt
Sending        src/scripts/snapinstaller
Transmitting file data ..
Committed revision 719232.


 need to improve snapinstaller to ignore non-snapshots in data directory
 ---

 Key: SOLR-346
 URL: https://issues.apache.org/jira/browse/SOLR-346
 Project: Solr
  Issue Type: Improvement
  Components: replication (scripts)
Affects Versions: 1.2, 1.3
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor
 Fix For: 1.3.1

 Attachments: solr-346.patch


 http://www.mail-archive.com/[EMAIL PROTECTED]/msg05734.html
  latest snapshot /opt/solr/data/temp-snapshot.20070816120113 already
  installed
 A directory in the Solr data directory is causing snapinstaller to fail.  
 Snapinstaller should be improved to ignore as much non-snapshot content as possible. 
  It can use a regular expression to look for snapshot.dd where d 
 is a digit.




[jira] Resolved: (SOLR-830) snappuller picks bad snapshot name

2008-11-20 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au resolved SOLR-830.
--

Resolution: Fixed

Patch committed:

Sending        CHANGES.txt
Sending        src/scripts/snappuller
Transmitting file data ..
Committed revision 719233.


 snappuller picks bad snapshot name
 --

 Key: SOLR-830
 URL: https://issues.apache.org/jira/browse/SOLR-830
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.2, 1.3
Reporter: Hoss Man
Assignee: Bill Au
 Fix For: 1.3.1

 Attachments: solr-830.patch


 as mentioned on the mailing list...
 http://www.nabble.com/FileNotFoundException-on-slave-after-replication---script-bug--to20111313.html#a20111313
 {noformat}
 We're seeing strange behavior on one of our slave nodes after replication. 
 When the new searcher is created we see FileNotFoundExceptions in the log
 and the index is strangely invalid/corrupted.
 We may have identified the root cause but wanted to run it by the community. 
 We figure there is a bug in the snappuller shell script, line 181:
 snap_name=`ssh -o StrictHostKeyChecking=no ${master_host} ls
 ${master_data_dir}|grep 'snapshot\.'|grep -v wip|sort -r|head -1` 
 This line determines the directory name of the latest snapshot to download
 to the slave from the master.  Problem with this line is that it grab the
 temporary work directory of a snapshot in progress.  Those temporary
 directories are prefixed with  temp and as far as I can tell should never
 get pulled from the master so its easy to disambiguate.  It seems that this
 temp directory, if it exists will be the newest one so if present it will be
 the one replicated: FAIL.
 {noformat}

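The failure mode described in the report, and the effect of a stricter match, can be sketched as follows; the directory names and paths are demo values, and the committed solr-830.patch used a perl regular expression, shown here in grep -E form for illustration:

```shell
# Reproduces the bug: the original pipeline's substring match lets an
# in-progress "temp-snapshot.*" work directory sort first and get pulled.
master_data_dir=/tmp/solr830_demo
mkdir -p "$master_data_dir/snapshot.20081117093000" \
         "$master_data_dir/temp-snapshot.20081117094500"
ls "$master_data_dir" | grep 'snapshot\.' | grep -v wip | sort -r | head -1
# -> temp-snapshot.20081117094500  (wrong: the temp directory wins)

# A match anchored to names that *start* with "snapshot." followed only by
# digits picks the real latest snapshot:
ls "$master_data_dir" | grep -E '^snapshot\.[0-9]+$' | sort -r | head -1
# -> snapshot.20081117093000
```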



[jira] Commented: (SOLR-830) snappuller picks bad snapshot name

2008-11-17 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12648260#action_12648260
 ] 

Bill Au commented on SOLR-830:
--

It works on Mac OS X.

 snappuller picks bad snapshot name
 --

 Key: SOLR-830
 URL: https://issues.apache.org/jira/browse/SOLR-830
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.2, 1.3
Reporter: Hoss Man
Assignee: Bill Au
 Fix For: 1.3.1

 Attachments: solr-830.patch


 as mentioned on the mailing list...
 http://www.nabble.com/FileNotFoundException-on-slave-after-replication---script-bug--to20111313.html#a20111313
 {noformat}
 We're seeing strange behavior on one of our slave nodes after replication. 
 When the new searcher is created we see FileNotFoundExceptions in the log
 and the index is strangely invalid/corrupted.
 We may have identified the root cause but wanted to run it by the community. 
 We figure there is a bug in the snappuller shell script, line 181:
 snap_name=`ssh -o StrictHostKeyChecking=no ${master_host} ls
 ${master_data_dir}|grep 'snapshot\.'|grep -v wip|sort -r|head -1` 
 This line determines the directory name of the latest snapshot to download
 to the slave from the master.  Problem with this line is that it grab the
 temporary work directory of a snapshot in progress.  Those temporary
 directories are prefixed with  temp and as far as I can tell should never
 get pulled from the master so its easy to disambiguate.  It seems that this
 temp directory, if it exists will be the newest one so if present it will be
 the one replicated: FAIL.
 {noformat}




[jira] Updated: (SOLR-830) snappuller picks bad snapshot name

2008-11-17 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-830:
-

Attachment: solr-830.patch

patch to use perl regular expression to improve accuracy in finding latest 
snapshot.

 snappuller picks bad snapshot name
 --

 Key: SOLR-830
 URL: https://issues.apache.org/jira/browse/SOLR-830
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.2, 1.3
Reporter: Hoss Man
Assignee: Bill Au
 Fix For: 1.3.1

 Attachments: solr-830.patch


 as mentioned on the mailing list...
 http://www.nabble.com/FileNotFoundException-on-slave-after-replication---script-bug--to20111313.html#a20111313
 {noformat}
 We're seeing strange behavior on one of our slave nodes after replication. 
 When the new searcher is created we see FileNotFoundExceptions in the log
 and the index is strangely invalid/corrupted.
 We may have identified the root cause but wanted to run it by the community. 
 We figure there is a bug in the snappuller shell script, line 181:
 snap_name=`ssh -o StrictHostKeyChecking=no ${master_host} ls
 ${master_data_dir}|grep 'snapshot\.'|grep -v wip|sort -r|head -1` 
 This line determines the directory name of the latest snapshot to download
 to the slave from the master.  Problem with this line is that it grab the
 temporary work directory of a snapshot in progress.  Those temporary
 directories are prefixed with  temp and as far as I can tell should never
 get pulled from the master so its easy to disambiguate.  It seems that this
 temp directory, if it exists will be the newest one so if present it will be
 the one replicated: FAIL.
 {noformat}




[jira] Updated: (SOLR-346) need to improve snapinstaller to ignore non-snapshots in data directory

2008-11-17 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-346:
-

Attachment: solr-346.patch

patch to use perl regular expression to improve accuracy in finding latest 
snapshot.

 need to improve snapinstaller to ignore non-snapshots in data directory
 ---

 Key: SOLR-346
 URL: https://issues.apache.org/jira/browse/SOLR-346
 Project: Solr
  Issue Type: Improvement
  Components: replication (scripts)
Affects Versions: 1.2, 1.3
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor
 Fix For: 1.3.1

 Attachments: solr-346.patch


 http://www.mail-archive.com/[EMAIL PROTECTED]/msg05734.html
  latest snapshot /opt/solr/data/temp-snapshot.20070816120113 already
  installed
 A directory in the Solr data directory is causing snapinstaller to fail.  
 Snapinstaller should be improved to ignore as much non-snapshot content as possible. 
  It can use a regular expression to look for snapshot.dd where d 
 is a digit.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-857) Memory Leak during the indexing of large xml files

2008-11-16 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12648070#action_12648070
 ] 

Bill Au commented on SOLR-857:
--

HashMap$Entry is on top of your list there.  I would look for a large HashMap 
in the heap dump.

 Memory Leak during the indexing of large xml files
 --

 Key: SOLR-857
 URL: https://issues.apache.org/jira/browse/SOLR-857
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3
 Environment: Verified on Ubuntu 8.0.4 (1.7GB RAM, 2.4GHz dual core) 
 and Windows XP (2GB RAM, 2GHz pentium) both with a Java5 SDK
Reporter: Ruben Jimenez
 Attachments: OQ_SOLR_1.xml.zip, schema.xml, solr256MBHeap.jpg


 While indexing a set of SOLR xml files that contain 5000 document adds within 
 them and are about 30MB each, SOLR 1.3 seems to continually use more and more 
 memory until the heap is exhausted, while the same files are indexed without 
 issue with SOLR 1.2.
 Steps used to reproduce.
 1 - Download SOLR 1.3
 2 - Modify example schema.xml to match fields required
 3 - Start the example server with the following command: java -Xms512m -Xmx1024m 
  -XX:MaxPermSize=128m -jar start.jar
 4 - Index the files as follows: java -Xmx128m -jar 
  .../examples/exampledocs/post.jar *.xml
 The directory with the xml files contains about 100 files of about 30MB 
  each.  While indexing, after about the 25th file SOLR 1.3 runs out of memory, 
  while SOLR 1.2 is able to index the entire set of files without any problems.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-857) Memory Leak during the indexing of large xml files

2008-11-14 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12647684#action_12647684
 ] 

Bill Au commented on SOLR-857:
--

Have you tried starting your JVM with -XX:+HeapDumpOnOutOfMemoryError and then 
looking at the heap dump?

 Memory Leak during the indexing of large xml files
 --

 Key: SOLR-857
 URL: https://issues.apache.org/jira/browse/SOLR-857
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3
 Environment: Verified on Ubuntu 8.0.4 (1.7GB RAM, 2.4GHz dual core) 
 and Windows XP (2GB RAM, 2GHz pentium) both with a Java5 SDK
Reporter: Ruben Jimenez
 Attachments: OQ_SOLR_1.xml.zip, schema.xml


 While indexing a set of SOLR xml files that contain 5000 document adds within 
 them and are about 30MB each, SOLR 1.3 seems to continually use more and more 
 memory until the heap is exhausted, while the same files are indexed without 
 issue with SOLR 1.2.
 Steps used to reproduce.
 1 - Download SOLR 1.3
 2 - Modify example schema.xml to match fields required
 3 - Start the example server with the following command: java -Xms512m -Xmx1024m 
  -XX:MaxPermSize=128m -jar start.jar
 4 - Index the files as follows: java -Xmx128m -jar 
  .../examples/exampledocs/post.jar *.xml
 The directory with the xml files contains about 100 files of about 30MB 
  each.  While indexing, after about the 25th file SOLR 1.3 runs out of memory, 
  while SOLR 1.2 is able to index the entire set of files without any problems.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Issue Comment Edited: (SOLR-857) Memory Leak during the indexing of large xml files

2008-11-14 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-857?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12647684#action_12647684
 ] 

billa edited comment on SOLR-857 at 11/14/08 10:24 AM:
-

Have you tried starting your JVM with -XX:+HeapDumpOnOutOfMemoryError and then 
looking at the heap dump?

  was (Author: billa):
Have you try starting your JVM with -XX:-HeapDumpOnOutOfMemoryError and 
then looking at the heap dump?
  
 Memory Leak during the indexing of large xml files
 --

 Key: SOLR-857
 URL: https://issues.apache.org/jira/browse/SOLR-857
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3
 Environment: Verified on Ubuntu 8.0.4 (1.7GB RAM, 2.4GHz dual core) 
 and Windows XP (2GB RAM, 2GHz pentium) both with a Java5 SDK
Reporter: Ruben Jimenez
 Attachments: OQ_SOLR_1.xml.zip, schema.xml


 While indexing a set of SOLR xml files that contain 5000 document adds within 
 them and are about 30MB each, SOLR 1.3 seems to continually use more and more 
 memory until the heap is exhausted, while the same files are indexed without 
 issue with SOLR 1.2.
 Steps used to reproduce.
 1 - Download SOLR 1.3
 2 - Modify example schema.xml to match fields required
 3 - Start the example server with the following command: java -Xms512m -Xmx1024m 
  -XX:MaxPermSize=128m -jar start.jar
 4 - Index the files as follows: java -Xmx128m -jar 
  .../examples/exampledocs/post.jar *.xml
 The directory with the xml files contains about 100 files of about 30MB 
  each.  While indexing, after about the 25th file SOLR 1.3 runs out of memory, 
  while SOLR 1.2 is able to index the entire set of files without any problems.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-830) snappuller picks bad snapshot name

2008-11-13 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12647331#action_12647331
 ] 

Bill Au commented on SOLR-830:
--

Steve, thanks for the perl code.  I needed to get rid of the \ before the $ 
in order to get it to work for me:

perl -e 'chdir q/${master_data_dir}/; print ((sort grep 
{/^snapshot[.][1-9][0-9]{13}$/} <*>)[-1])'


I have tested this on Linux and FreeBSD.  I will test on Mac OS X tonight.  It 
would be good if someone could do a quick test on Solaris.  You really don't 
need a full-blown Solr installation to test it.  Just create some dummy 
directories with names like:

snapshot.00080527124131
snapshot.20080527124131
snapshot.20080527124131-wip
snapshot.20080527140518
snapshot.20080527140610
snapshot.20081113113700
snapshot.2080527124131
temp-snapshot.20080527124131

and then run the perl command to make sure the right one is returned.  With the 
data set above, you should get:

snapshot.20081113113700

 snappuller picks bad snapshot name
 --

 Key: SOLR-830
 URL: https://issues.apache.org/jira/browse/SOLR-830
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.2, 1.3
Reporter: Hoss Man
Assignee: Bill Au
 Fix For: 1.3.1


 as mentioned on the mailing list...
 http://www.nabble.com/FileNotFoundException-on-slave-after-replication---script-bug--to20111313.html#a20111313
 {noformat}
 We're seeing strange behavior on one of our slave nodes after replication. 
 When the new searcher is created we see FileNotFoundExceptions in the log
 and the index is strangely invalid/corrupted.
 We may have identified the root cause but wanted to run it by the community. 
 We figure there is a bug in the snappuller shell script, line 181:
 snap_name=`ssh -o StrictHostKeyChecking=no ${master_host} ls
 ${master_data_dir}|grep 'snapshot\.'|grep -v wip|sort -r|head -1` 
 This line determines the directory name of the latest snapshot to download
 to the slave from the master.  The problem with this line is that it can grab 
 the temporary work directory of a snapshot in progress.  Those temporary 
 directories are prefixed with temp and, as far as I can tell, should never 
 get pulled from the master, so it's easy to disambiguate.  It seems that this 
 temp directory, if it exists, will be the newest one, so if present it will be 
 the one replicated: FAIL.
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-830) snappuller picks bad snapshot name

2008-11-13 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12647384#action_12647384
 ] 

Bill Au commented on SOLR-830:
--

Steven, thanks for testing on Solaris.  It looks like on Linux and FreeBSD the 
'\' in front of '$' escapes the special meaning of '$', so it was trying to 
match against a literal '$' after all the digits (i.e. 
snapshot.20080527124131$).  Unless this does not work on Mac OS X, I will go 
with the perl version without the '\' before the '$'.  I will attach a patch 
here after I test on my Mac tonight.

 snappuller picks bad snapshot name
 --

 Key: SOLR-830
 URL: https://issues.apache.org/jira/browse/SOLR-830
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.2, 1.3
Reporter: Hoss Man
Assignee: Bill Au
 Fix For: 1.3.1


 as mentioned on the mailing list...
 http://www.nabble.com/FileNotFoundException-on-slave-after-replication---script-bug--to20111313.html#a20111313
 {noformat}
 We're seeing strange behavior on one of our slave nodes after replication. 
 When the new searcher is created we see FileNotFoundExceptions in the log
 and the index is strangely invalid/corrupted.
 We may have identified the root cause but wanted to run it by the community. 
 We figure there is a bug in the snappuller shell script, line 181:
 snap_name=`ssh -o StrictHostKeyChecking=no ${master_host} ls
 ${master_data_dir}|grep 'snapshot\.'|grep -v wip|sort -r|head -1` 
 This line determines the directory name of the latest snapshot to download
 to the slave from the master.  The problem with this line is that it can grab 
 the temporary work directory of a snapshot in progress.  Those temporary 
 directories are prefixed with temp and, as far as I can tell, should never 
 get pulled from the master, so it's easy to disambiguate.  It seems that this 
 temp directory, if it exists, will be the newest one, so if present it will be 
 the one replicated: FAIL.
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-830) snappuller picks bad snapshot name

2008-10-30 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-830?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12644004#action_12644004
 ] 

Bill Au commented on SOLR-830:
--


I think we should use a regular expression and match against the naming 
convention of the snapshot (snapshot.yyyymmddHHMMSS).  Here is what I propose:

snap_name=`ssh -o StrictHostKeyChecking=no ${master_host} ls
${master_data_dir}|egrep '^snapshot\.[123456789][0-9]{13}$'|sort -r|head
-1`
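Run locally (without the ssh hop), the proposed filter behaves like this; a
sketch using a throwaway directory in place of ${master_data_dir}:

```shell
# Stand-in for the master data dir, populated with good and bad names.
rm -rf /tmp/snapdemo && mkdir -p /tmp/snapdemo && cd /tmp/snapdemo
mkdir snapshot.20080527124131 snapshot.20080527124131-wip \
      temp-snapshot.20080527124131 snapshot.20081113113700

# The egrep pattern anchors the whole name: snapshot. plus exactly 14 digits,
# so in-progress (-wip) and temp- directories can never be picked up, even
# when they happen to sort as the newest entry.
snap_name=$(ls | egrep '^snapshot\.[123456789][0-9]{13}$' | sort -r | head -1)
echo "$snap_name"
# prints: snapshot.20081113113700
```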

 snappuller picks bad snapshot name
 --

 Key: SOLR-830
 URL: https://issues.apache.org/jira/browse/SOLR-830
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.2, 1.3
Reporter: Hoss Man

 as mentioned on the mailing list...
 http://www.nabble.com/FileNotFoundException-on-slave-after-replication---script-bug--to20111313.html#a20111313
 {noformat}
 We're seeing strange behavior on one of our slave nodes after replication. 
 When the new searcher is created we see FileNotFoundExceptions in the log
 and the index is strangely invalid/corrupted.
 We may have identified the root cause but wanted to run it by the community. 
 We figure there is a bug in the snappuller shell script, line 181:
 snap_name=`ssh -o StrictHostKeyChecking=no ${master_host} ls
 ${master_data_dir}|grep 'snapshot\.'|grep -v wip|sort -r|head -1` 
 This line determines the directory name of the latest snapshot to download
 to the slave from the master.  The problem with this line is that it can grab 
 the temporary work directory of a snapshot in progress.  Those temporary 
 directories are prefixed with temp and, as far as I can tell, should never 
 get pulled from the master, so it's easy to disambiguate.  It seems that this 
 temp directory, if it exists, will be the newest one, so if present it will be 
 the one replicated: FAIL.
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (SOLR-830) snappuller picks bad snapshot name

2008-10-30 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-830?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au reassigned SOLR-830:


Assignee: Bill Au

 snappuller picks bad snapshot name
 --

 Key: SOLR-830
 URL: https://issues.apache.org/jira/browse/SOLR-830
 Project: Solr
  Issue Type: Bug
  Components: replication (scripts)
Affects Versions: 1.2, 1.3
Reporter: Hoss Man
Assignee: Bill Au

 as mentioned on the mailing list...
 http://www.nabble.com/FileNotFoundException-on-slave-after-replication---script-bug--to20111313.html#a20111313
 {noformat}
 We're seeing strange behavior on one of our slave nodes after replication. 
 When the new searcher is created we see FileNotFoundExceptions in the log
 and the index is strangely invalid/corrupted.
 We may have identified the root cause but wanted to run it by the community. 
 We figure there is a bug in the snappuller shell script, line 181:
 snap_name=`ssh -o StrictHostKeyChecking=no ${master_host} ls
 ${master_data_dir}|grep 'snapshot\.'|grep -v wip|sort -r|head -1` 
 This line determines the directory name of the latest snapshot to download
 to the slave from the master.  The problem with this line is that it can grab 
 the temporary work directory of a snapshot in progress.  Those temporary 
 directories are prefixed with temp and, as far as I can tell, should never 
 get pulled from the master, so it's easy to disambiguate.  It seems that this 
 temp directory, if it exists, will be the newest one, so if present it will be 
 the one replicated: FAIL.
 {noformat}

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-346) need to improve snapinstaller to ignore non-snapshots in data directory

2008-10-30 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-346?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12644007#action_12644007
 ] 

Bill Au commented on SOLR-346:
--

snappuller has the same problem.  See SOLR-830 
(https://issues.apache.org/jira/browse/SOLR-830) for details.  The proposed 
solution there should work here also.


 need to improve snapinstaller to ignore non-snapshots in data directory
 ---

 Key: SOLR-346
 URL: https://issues.apache.org/jira/browse/SOLR-346
 Project: Solr
  Issue Type: Improvement
  Components: replication (scripts)
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor

 http://www.mail-archive.com/[EMAIL PROTECTED]/msg05734.html
  latest snapshot /opt/solr/data/temp-snapshot.20070816120113 already
  installed
 A directory in the Solr data directory is causing snapinstaller to fail.  
 Snapinstaller should be improved to ignore as many non-snapshots as possible. 
  It can use a regular expression to look for names of the form 
  snapshot.yyyymmddHHMMSS, where every character after the dot is a digit.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-346) need to improve snapinstaller to ignore non-snapshots in data directory

2008-10-30 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-346?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-346:
-

Affects Version/s: 1.2
   1.3

 need to improve snapinstaller to ignore non-snapshots in data directory
 ---

 Key: SOLR-346
 URL: https://issues.apache.org/jira/browse/SOLR-346
 Project: Solr
  Issue Type: Improvement
  Components: replication (scripts)
Affects Versions: 1.2, 1.3
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor

 http://www.mail-archive.com/[EMAIL PROTECTED]/msg05734.html
  latest snapshot /opt/solr/data/temp-snapshot.20070816120113 already
  installed
 A directory in the Solr data directory is causing snapinstaller to fail.  
 Snapinstaller should be improved to ignore as many non-snapshots as possible. 
  It can use a regular expression to look for names of the form 
  snapshot.yyyymmddHHMMSS, where every character after the dot is a digit.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-624) patch: Don't take snapshot if there are no differences

2008-07-10 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-624?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12612458#action_12612458
 ] 

Bill Au commented on SOLR-624:
--

Yes, the '*' will cause problems if there are many snapshots.  The snappuller 
script has a shell command to determine the latest snapshot on the master.  We 
can use the same command in snapshooter.

 patch: Don't take snapshot if there are no differences
 --

 Key: SOLR-624
 URL: https://issues.apache.org/jira/browse/SOLR-624
 Project: Solr
  Issue Type: Improvement
  Components: replication
Affects Versions: 1.3
Reporter: Richard Trey Hyde
 Fix For: 1.3

 Attachments: solr.check.patch


 This is similar in concept to a change I made several years ago in Solar.   
 Cronned snapshooters can quickly generate a lot of snaps which will then be 
 unnecessarily distributed to the slaves if there haven't been any changes in 
 that period.
 Adds a check argument to make sure there were changes to the index before 
 taking the snap.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (SOLR-590) Limitation in pgrep on Linux platform breaks script-utils fixUser

2008-06-04 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au reassigned SOLR-590:


Assignee: Bill Au

 Limitation in pgrep on Linux platform breaks script-utils fixUser 
 --

 Key: SOLR-590
 URL: https://issues.apache.org/jira/browse/SOLR-590
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.2
 Environment: Linux 2.6.18-53.1.14.el5 #1 SMP Wed Mar 5 11:37:38 EST 
 2008 x86_64 x86_64 x86_64 GNU/Linux
 procps-3.2.7-8.1.el5
Reporter: Hannes Schmidt
Assignee: Bill Au
Priority: Minor
 Attachments: fixUser.patch


 The fixUser function in script-utils uses two methods to determine the 
 username of the parent process (oldwhoami). If the first method fails for 
 certain reasons it will fallback to the second method. For most people the 
 first method will succeed but I know that in my particular installation the 
 first method fails so I need the second method to succeed. Unfortunately, 
 that fallback method doesn't work because it uses pgrep to look up the current 
 script's name and on my Linux 2.6.18 platform pgrep is limited to 15 
 characters. The names of many scripts in the SOLR distribution are longer 
 than that, causing pgrep to return nothing and the subsequent ps invocation 
 to fail with an error:
 ERROR: List of process IDs must follow -p.
 You can easily reproduce that behaviour with
 /app/solr/solr/bin/snappuller-enable < /dev/null
 The redirection of stdin from /dev/null causes fixUser to fallback to the 
 second method but there are other, more realistic scenarios in which the 
 fallback happens, like
 ssh [EMAIL PROTECTED] /app/solr/solr/bin/snappuller-enable
 The fix is to use the -f option which causes pgrep to compare the full path 
 of the executable. Interestingly, that method is not subject to the 15 
 character length limit. The limit is not actually enforced by jetty but 
 rather by the procfs file system of the linux kernel. If you look at 
 /proc/*/stat you will notice that the second column is limited to 15 
 characters.
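The 15-character limit described above is easy to observe directly; a sketch
assuming a Linux box with procfs (the copied binary name is made up for the
demo):

```shell
# Give a copy of sleep a name longer than 15 characters and run it.
cp "$(command -v sleep)" /tmp/a-really-long-process-name
/tmp/a-really-long-process-name 30 &
pid=$!

# The kernel truncates the process name (comm) to 15 characters, which is why
# plain pgrep (matching on comm) misses long script names, while pgrep -f
# (matching on the full command line) still finds them.
comm=$(cat /proc/$pid/comm)
echo "$comm"        # a-really-long-p
echo "${#comm}"     # 15
kill "$pid"
```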

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-590) Limitation in pgrep on Linux platform breaks script-utils fixUser

2008-06-04 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-590?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au resolved SOLR-590.
--

Resolution: Fixed

Patch committed and CHANGES.txt updated.  Thanks, Hannes.

Sending        CHANGES.txt
Sending        src/scripts/scripts-util
Transmitting file data ..
Committed revision 663089.


 Limitation in pgrep on Linux platform breaks script-utils fixUser 
 --

 Key: SOLR-590
 URL: https://issues.apache.org/jira/browse/SOLR-590
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.2
 Environment: Linux 2.6.18-53.1.14.el5 #1 SMP Wed Mar 5 11:37:38 EST 
 2008 x86_64 x86_64 x86_64 GNU/Linux
 procps-3.2.7-8.1.el5
Reporter: Hannes Schmidt
Assignee: Bill Au
Priority: Minor
 Attachments: fixUser.patch


 The fixUser function in script-utils uses two methods to determine the 
 username of the parent process (oldwhoami). If the first method fails for 
 certain reasons it will fallback to the second method. For most people the 
 first method will succeed but I know that in my particular installation the 
 first method fails so I need the second method to succeed. Unfortunately, 
 that fallback method doesn't work because it uses pgrep to look up the current 
 script's name and on my Linux 2.6.18 platform pgrep is limited to 15 
 characters. The names of many scripts in the SOLR distribution are longer 
 than that, causing pgrep to return nothing and the subsequent ps invocation 
 to fail with an error:
 ERROR: List of process IDs must follow -p.
 You can easily reproduce that behaviour with
 /app/solr/solr/bin/snappuller-enable < /dev/null
 The redirection of stdin from /dev/null causes fixUser to fallback to the 
 second method but there are other, more realistic scenarios in which the 
 fallback happens, like
 ssh [EMAIL PROTECTED] /app/solr/solr/bin/snappuller-enable
 The fix is to use the -f option which causes pgrep to compare the full path 
 of the executable. Interestingly, that method is not subject to the 15 
 character length limit. The limit is not actually enforced by jetty but 
 rather by the procfs file system of the linux kernel. If you look at 
 /proc/*/stat you will notice that the second column is limited to 15 
 characters.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-139) Support updateable/modifiable documents

2008-05-31 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12601408#action_12601408
 ] 

Bill Au commented on SOLR-139:
--

I noticed that this bug is no longer included in the 1.3 release.  Are there 
any outstanding issues if all the fields are stored?  Requiring that all fields 
are stored for a document to be update-able seems reasonable to me.  This 
feature will simplify things for Solr users who are doing a query to get all 
the fields followed by an add when they only want to update a very small 
number of fields.

 Support updateable/modifiable documents
 ---

 Key: SOLR-139
 URL: https://issues.apache.org/jira/browse/SOLR-139
 Project: Solr
  Issue Type: New Feature
  Components: update
Reporter: Ryan McKinley
Assignee: Ryan McKinley
 Attachments: Eriks-ModifiableDocument.patch, 
 Eriks-ModifiableDocument.patch, Eriks-ModifiableDocument.patch, 
 Eriks-ModifiableDocument.patch, Eriks-ModifiableDocument.patch, 
 Eriks-ModifiableDocument.patch, getStoredFields.patch, getStoredFields.patch, 
 getStoredFields.patch, getStoredFields.patch, getStoredFields.patch, 
 SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, 
 SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, 
 SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, 
 SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, 
 SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, 
 SOLR-139-IndexDocumentCommand.patch, SOLR-139-ModifyInputDocuments.patch, 
 SOLR-139-ModifyInputDocuments.patch, SOLR-139-ModifyInputDocuments.patch, 
 SOLR-139-ModifyInputDocuments.patch, SOLR-139-XmlUpdater.patch, 
 SOLR-269+139-ModifiableDocumentUpdateProcessor.patch


 It would be nice to be able to update some fields on a document without 
 having to insert the entire document.
 Given the way lucene is structured, (for now) one can only modify stored 
 fields.
 While we are at it, we can support incrementing an existing value - I think 
 this only makes sense for numbers.
 for background, see:
 http://www.nabble.com/loading-many-documents-by-ID-tf3145666.html#a8722293

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Add SolrCore.getSolrCore() to SolrServlet.doGet to work arround Resin bug?

2008-05-28 Thread Bill Au
I agree with Otis and prefer the cleaner approach.

By the way, I have filed a bug with Caucho for this particular Resin bug:

http://bugs.caucho.com/view.php?id=2706

We have a support contract with them so this will be fixed in the next
release.

Bill

On Tue, May 27, 2008 at 6:33 PM, Otis Gospodnetic 
[EMAIL PROTECTED] wrote:

 I think I'd prefer to have that single core instance and a slower first
 request instead of doing extra initialization work and then letting extra
 instances linger... seems cleaner.

 Otis
 --
 Sematext -- http://sematext.com/ -- Lucene - Solr - Nutch


 - Original Message 
  From: Chris Hostetter [EMAIL PROTECTED]
  To: Solr Dev solr-dev@lucene.apache.org
  Sent: Tuesday, May 27, 2008 6:27:18 PM
  Subject: Add SolrCore.getSolrCore() to SolrServlet.doGet to work arround
 Resin bug?
 
 
  Over a year ago, Ryan noticed that Resin wasn't correctly loading the
  SolrDispatchFilter prior to the SolrServlet in violation of the Servlet
  Spec...
 
 https://issues.apache.org/jira/browse/SOLR-166?focusedCommentId=12474310#action_12474310
 
  ...at the time, the only downside of this was a weird error which was
   easily dealt with using some more robust error handling.  However,
  with the addition of the multicore code to SolrDispatchFilter, the result
  now is that...
 
1) SolrServlet.init() calls SolrCore.getSolrCore()
   1.1) SolrCore.getSolrCore() sees no singleton, so it
constructs a new instance and sets the singleton
   1.2) SolrServlet stores the result in a member variable core
2) SolrDispatchFilter.init calls new SolrCore(...)
   2.1) the constructor for SolrCore sets the singleton
(per previous discussion about how two best support legacy
 uses
of the singleton in a multicore world: it's always the newest
core)
    3) SolrServlet.doGet uses its private core, which is now different from
       what SolrDispatchFilter is using.
 
  Meanwhile, the legacy SolrUpdateServlet winds up getting the current
  singleton for every request.
 
  ...it seems like a simple fix to try and make things more correct
  (regardless of what order things are loaded in) would be to either...
 
  a) remove the getSolrCore() call in SolrServlet.init() and replace the
  core member variable with a new call to getSolrCore() at the start of
  doGet().
 
  OR
 
  b) Leave the getSolrCore() call in SolrServlet.init(), but still replace
  the core member variable with a new call to getSolrCore() at the start
  of doGet().
 
  Option (a) would ensure that only one core ever exists, regardless of
  which order the various classes are initialized in, as long as the Filter
  was initialized before the first request to SolrServlet -- but that first
  request might be slower for legacy users of SolrServlet.  Option (b) would
  guarantee that at least one core was initialized before the first request so
  it would be just as fast for legacy users of SolrServlet, at the expense
  of initializing (and then ignoring) an extra SolrCore.
 
  Either approach would ensure that SolrServlet was at least consistent with
  SolrUpdateServlet (and SolrDispatchFilter) in always using the current
  singleton core.
 
 
  thoughts?
 
 
 
 
 
 
  -Hoss




Re: Welcome, Koji

2008-05-07 Thread Bill Au
Welcome aboard, Koji.

Bill

On Tue, May 6, 2008 at 6:56 PM, Koji Sekiguchi [EMAIL PROTECTED] wrote:

 Hi Erik and everyone!

 I'm looking forward to working with you. :)

 Cheers,

 Koji


 Erik Hatcher wrote:

  A warm welcome to our newest Solr committer, Koji Sekiguchi!  He's been
  providing solid patches and improvements to Solr and the Ruby
  (solr-ruby/Flare) integration for a while now.
 
 Erik
 
 
 



Re: replication should include the schema also

2008-04-25 Thread Bill Au
Synchronizing solrconfig.xml is definitely not a good idea.  Typically the
master has a post commit/optimize hook to execute
snapshooter.

Solr's replication is meant for replicating the Lucene index.  One can argue
that the schema is part of the Solr index, so it should be included in the
replication.  But I don't think it should be used for software installation.
I don't think we should use it to distribute configuration files, just like
we shouldn't use it to distribute updates to the Solr binaries.

Bill

On Fri, Apr 25, 2008 at 4:48 AM, Guillaume Smet [EMAIL PROTECTED]
wrote:

 On Fri, Apr 25, 2008 at 6:05 AM, Noble Paul നോബിള്‍ नोब्ळ्
 [EMAIL PROTECTED] wrote:
  Synchronizing solrconfig is not a very desired behavior. Typically the
   solrconfigs of master and slaves tend to differ. For instance we may
   disable the UpdateHandler in slaves and there may be tuning done in
   master to optimize indexing etc etc. The index data is not dependent
   on the config itself.

 +1 for not synchronizing the solrconfig.xml itself.

 But perhaps we could have a solrconfig.slave.xml which could be
 synchronized with slaves' solrconfig.xml if present?

 --
 Guillaume



[jira] Assigned: (SOLR-531) rsyncd-start and snappuller should exit with different exit code when disabled

2008-04-07 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au reassigned SOLR-531:


Assignee: Bill Au

 rsyncd-start and snappuller should exit with different exit code when disabled
 --

 Key: SOLR-531
 URL: https://issues.apache.org/jira/browse/SOLR-531
 Project: Solr
  Issue Type: Improvement
  Components: replication
Affects Versions: 1.3
Reporter: Thomas Peuss
Assignee: Bill Au
Priority: Trivial
 Attachments: SOLR-531.patch


 When the rsyncd-start and snappuller scripts get executed the scripts check 
 if they are enabled or not. If they are disabled they exit with 1. The 
 scripts exit with 1 on error as well, so upstream scripts cannot tell whether 
 an error really occurred or whether they are only disabled.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-531) rsyncd-start and snappuller should exit with different exit code when disabled

2008-04-07 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-531?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au resolved SOLR-531.
--

Resolution: Fixed

Patch committed:

Sending        CHANGES.txt
Sending        src/scripts/rsyncd-start
Sending        src/scripts/snappuller
Transmitting file data ...
Committed revision 645608.


Thanks, Thomas.

 rsyncd-start and snappuller should exit with different exit code when disabled
 --

 Key: SOLR-531
 URL: https://issues.apache.org/jira/browse/SOLR-531
 Project: Solr
  Issue Type: Improvement
  Components: replication
Affects Versions: 1.3
Reporter: Thomas Peuss
Assignee: Bill Au
Priority: Trivial
 Attachments: SOLR-531.patch


 When the rsyncd-start and snappuller scripts are executed, they check 
 whether they are enabled. If they are disabled they exit with 1, but they 
 also exit with 1 on error, so upstream scripts cannot tell whether a real 
 error occurred or the scripts are merely disabled.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Assigned: (SOLR-535) Small typo in schema.jsp (Tokenzied -> Tokenized)

2008-04-07 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au reassigned SOLR-535:


Assignee: Bill Au

 Small typo in schema.jsp (Tokenzied -> Tokenized)
 -

 Key: SOLR-535
 URL: https://issues.apache.org/jira/browse/SOLR-535
 Project: Solr
  Issue Type: Bug
  Components: web gui
Reporter: Thomas Peuss
Assignee: Bill Au
Priority: Trivial
   Original Estimate: 0.02h
  Remaining Estimate: 0.02h

 Small typo in schema.jsp:
 Tokenzied -> Tokenized (line 274)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-535) Small typo in schema.jsp (Tokenzied -> Tokenized)

2008-04-07 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-535?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au resolved SOLR-535.
--

Resolution: Fixed

Typo fixed:
Sending        CHANGES.txt
Sending        src/webapp/web/admin/schema.jsp
Transmitting file data ..
Committed revision 645614.


Thanks, Thomas.

 Small typo in schema.jsp (Tokenzied -> Tokenized)
 -

 Key: SOLR-535
 URL: https://issues.apache.org/jira/browse/SOLR-535
 Project: Solr
  Issue Type: Bug
  Components: web gui
Reporter: Thomas Peuss
Assignee: Bill Au
Priority: Trivial
   Original Estimate: 0.02h
  Remaining Estimate: 0.02h

 Small typo in schema.jsp:
 Tokenzied -> Tokenized (line 274)

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-531) rsyncd-start and snappuller should exit with different exit code when disabled

2008-04-03 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-531?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12585100#action_12585100
 ] 

Bill Au commented on SOLR-531:
--

The patch looks good to me.  I will apply it if no one objects.

 rsyncd-start and snappuller should exit with different exit code when disabled
 --

 Key: SOLR-531
 URL: https://issues.apache.org/jira/browse/SOLR-531
 Project: Solr
  Issue Type: Improvement
  Components: replication
Affects Versions: 1.3
Reporter: Thomas Peuss
Priority: Trivial
 Attachments: SOLR-531.patch


 When the rsyncd-start and snappuller scripts are executed, they check 
 whether they are enabled. If they are disabled they exit with 1, but they 
 also exit with 1 on error, so upstream scripts cannot tell whether a real 
 error occurred or the scripts are merely disabled.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-524) snappuller has limitation w/r/t/ handling multiple web apps

2008-04-01 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-524?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12584150#action_12584150
 ] 

Bill Au commented on SOLR-524:
--

In the command line in question:

rsync -Wa${verbose}${compress} --delete ${sizeonly} \
${stats} rsync://${master_host}:${rsyncd_port}/solr/${name}/ \
${data_dir}/${name}-wip

The string solr IS NOT the webapp name.  It is the name used by rsyncd to map 
to a file system path.

Here is the content of rsyncd.conf, which is generated by rsyncd-start 
dynamically:
 rsyncd.conf file 

uid = $(whoami)
gid = $(whoami)
use chroot = no
list = no
pid file = ${solr_root}/logs/rsyncd.pid
log file = ${solr_root}/logs/rsyncd.log
[solr]
path = ${data_dir}
comment = Solr



 snappuller has limitation w/r/t/ handling multiple web apps
 ---

 Key: SOLR-524
 URL: https://issues.apache.org/jira/browse/SOLR-524
 Project: Solr
  Issue Type: Improvement
  Components: replication
Affects Versions: 1.2
 Environment: Linux (CentOS release 5 (Final))
 Java JDK 6
Reporter: Ezra Epstein
Priority: Minor
   Original Estimate: 0.5h
  Remaining Estimate: 0.5h

 The snappuller has a limitation which makes it hard to use for replicating 
 the indices for multiple webapps.  In particular, by changing:
 # rsync over files that have changed
 rsync -Wa${verbose}${compress} --delete ${sizeonly} \
 ${stats} rsync://${master_host}:${rsyncd_port}/solr/${name}/ \
 ${data_dir}/${name}-wip
 to: 
 # rsync over files that have changed
 rsync -Wa${verbose}${compress} --delete ${sizeonly} \
 ${stats} rsync://${master_host}:${rsyncd_port}/${rsync_module_path}/${name}/ \
 ${data_dir}/${name}-wip
 and adding an rsync_module_path variable to scripts.conf, plus giving it a 
 default value of solr before the 'unset' commands at the top of the 
 snappuller script, I've worked around the issue.  Still, it seems better to 
 not hard-code the module name ([solr]) and also to allow some flexibility in 
 the location of the data files under that module.  This is req'd for multiple 
 webapps since they won't share a data folder.
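The workaround described above can be sketched as follows (a hedged illustration; the variable names follow the report, the concrete values are made up):

```shell
#!/bin/sh
# Default the rsync module path when scripts.conf did not set it, then
# build the source URL from it instead of hard-coding "solr".
master_host=master.example.com      # illustrative values, not real hosts
rsyncd_port=18983
name=snapshot.20080401
rsync_module_path=${rsync_module_path:-solr}

src="rsync://${master_host}:${rsyncd_port}/${rsync_module_path}/${name}/"
echo "$src"
```

Each webapp can then point `rsync_module_path` at its own rsyncd module, so the data directories need not be shared.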

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: restructure webapp code so we can easily test .jsp

2008-03-16 Thread Bill Au
+1
I like putting index.jsp, admin, and WEB-INF all in the same directory.

Bill

On Sun, Mar 16, 2008 at 9:34 AM, Yonik Seeley [EMAIL PROTECTED] wrote:

 On Thu, Mar 13, 2008 at 1:48 PM, Ryan McKinley [EMAIL PROTECTED] wrote:
   The thing I'm trying to work with is how to get the jsp files to easily
   load into jetty for testing/debugging

 +1

 resources is only there because IntelliJ put it there long ago.

 -Yonik



Re: Solr Perl Interface

2008-02-18 Thread Bill Au
Thanks for the contribution, Tim.
You probably want to add this information to the Solr Wiki also:

http://wiki.apache.org/solr/IntegratingSolr

Bill

On Feb 15, 2008 4:47 PM, Timothy Garafola [EMAIL PROTECTED] wrote:

 I recently released a simple perl wrapper module to CPAN which supplies
 methods in perl for posting adds, deletes, commits, and optimizes to a
 solr server.  Originally I had written some simple processes to handle
 this when porting a large collection of documents from FAST into SOLR.
 Mark Backman and Yousef Ourabi suggested that I take this a step further
 and submit something to the open source community.  So now it's
 available for download via
 http://search.cpan.org/author/GARAFOLA/Solr-0.03/lib/Solr.pm.



 As time permits, I'm adding to this.  I also welcome contributions and
 hope to release the first updated version in a week or so.  Yousef has
 stated interest in extending it to supply querying functionality.



 One of the things I have a need for is the ability to update by query;
 something similar in functionality to Solr's delete_by_query.  Is there
 anything like this already in Solr?  I'm playing with doing this in
 perl by first issuing a query, parsing the returned xml into a perl
 data structure, updating element values and/or extending it with dynamic
 fields in the data structure, then reposting the docs returned in the
 initial query.  Can anyone tell me if I'm reinventing a wheel here or
 suggest an alternative approach?
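For reference, the delete-by-query message mentioned above is just a small XML fragment posted to /update; a shell helper to build one might look like this (illustrative sketch, not part of the Perl module):

```shell
#!/bin/sh
# Build a Solr <delete> update message for a given query string.
make_delete_by_query() {
  printf '<delete><query>%s</query></delete>' "$1"
}

make_delete_by_query 'id:42'
# POST the result to http://host:port/solr/update to execute it.
```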



 Thanks,

 Tim Garafola


Re: Default Logging: OFF

2008-02-14 Thread Bill Au
Those levels come from java.util.logging.Level:

http://java.sun.com/javase/6/docs/api/java/util/logging/Level.html


I disagree that the default should be off.  Logging doesn't impact
performance that much unless it is set to something like FINEST.
I don't think you want to ignore SEVERE and/or WARNING messages.

Bill


On Thu, Feb 14, 2008 at 12:42 AM, Fuad Efendi [EMAIL PROTECTED] wrote:

 Would be nice to set default logging OFF.
 - improved performance
 - no need to open the admin console after each (possibly automated) SOLR
 restart
 - no need to alter files outside SOLR (security...)

 What I see from the Admin Console is a small subset of the standard Java
 logging API:
 [ALL] [CONFIG] [FINE] [FINER] [FINEST] [INFO] [OFF] [SEVERE] [WARNING]

 - we are using a SINGLE(!) logger for all classes and packages (at least via
 the Admin Console).

 Isn't it better to have defaults and a parameter in solrconfig.xml?



  http://wiki.apache.org/solr/FAQ?highlight=%28logging%29#head-ffe035452f21ffdb4e4658c2f8f6553bd6ca
 
  Solr uses JDK standard logging so you should be able to
  change its log level
  as part of the configuration of your container.
  The admin GUI can also be used to change the logging level
  once Solr is up
  and running.
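As a sketch of the container-level approach the FAQ describes (the file path and the WARNING level here are assumptions, not Solr defaults):

```shell
#!/bin/sh
# Write a minimal JDK logging config that sets a single root level,
# then point the JVM at it via java.util.logging.config.file.
cat > /tmp/solr-logging.properties <<'EOF'
# One root level for every logger, like the admin-console setting
.level = WARNING
EOF

# Container start (shown, not run here), e.g.:
#   java -Djava.util.logging.config.file=/tmp/solr-logging.properties -jar start.jar
cat /tmp/solr-logging.properties
```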




Re: Default Logging: OFF

2008-02-13 Thread Bill Au
http://wiki.apache.org/solr/FAQ?highlight=%28logging%29#head-ffe035452f21ffdb4e4658c2f8f6553bd6ca

Solr uses JDK standard logging so you should be able to change its log level
as part of the configuration of your container.
The admin GUI can also be used to change the logging level once Solr is up
and running.

Bill

On Feb 12, 2008 11:54 PM, Fuad Efendi [EMAIL PROTECTED] wrote:


 Hi,
 How to set *default* SOLR logging level to OFF? My logs quickly become
 very
 huge...




[jira] Commented: (SOLR-463) script commit failed

2008-01-21 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-463?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanelfocusedCommentId=12561205#action_12561205
 ] 

Bill Au commented on SOLR-463:
--

This looks like a duplicate of solr-426.  Can you either apply the patch 
contained in solr-426 or try the latest version of the commit script?

 script commit failed
 

 Key: SOLR-463
 URL: https://issues.apache.org/jira/browse/SOLR-463
 Project: Solr
  Issue Type: Bug
Affects Versions: 1.3
 Environment: Linux, Ubuntu
Reporter: Eric
 Fix For: 1.3


  commit.log 
 -
 2008/01/22 09:33:03 started by root
 2008/01/22 09:33:03 command: ./solr/bin/commit
 2008/01/22 09:33:03 commit request to Solr at 
 http://localhost:8080/solr/update 
 failed:
 2008/01/22 09:33:03 
 <?xml version="1.0" encoding="UTF-8"?><response><lst 
 name="responseHeader"><int name="status">0</int><int 
 name="QTime">12</int></lst><str name="WARNING">This response format is 
 experimental. It is likely to change in the future.</str></response>
 2008/01/22 09:33:03 failed (elapsed time: 0 sec)
 --
 When I execute the snapinstaller or commit scripts in solr/bin, I get the 
 above commit log, and snapinstaller reports snapshot installed but Solr 
 server has not opened a new Searcher; in fact Solr has already opened a new 
 searcher and the updated content can be searched. The OS here is Ubuntu. 
 But under Fedora, I get the same XML content in commit.log without the 
 failed message.
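The mismatch reported above can be reproduced in miniature: the script decides success or failure by inspecting the response body, and a check that is too strict flags a healthy status-0 response as failed. A hedged sketch (the grep pattern is illustrative, not the actual commit script's):

```shell
#!/bin/sh
# A status-0 Solr response like the one shown in the commit.log above:
rs='<?xml version="1.0" encoding="UTF-8"?><response><lst name="responseHeader"><int name="status">0</int><int name="QTime">12</int></lst></response>'

# Check the status field itself rather than pattern-matching the whole body:
if echo "$rs" | grep -q '<int name="status">0</int>'; then
  echo "commit succeeded"
else
  echo "commit failed"
fi
```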

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: Solr CWIKI ready for experimenting

2007-12-23 Thread Bill Au
Hoss, I have recreated my Confluence account.  My account name is billa.
 Please add me to the appropriate groups.  Thanks.

Happy holidays.

Bill

On Dec 23, 2007 3:19 AM, Chris Hostetter [EMAIL PROTECTED] wrote:


 See this thread for background...
 http://www.nabble.com/Confluence-wiki-vs-MoinMoin-to14207960.html

 The new Solr Confluence wiki is ready for experimenting ... i played with
 it just enough to confirm that i can create pages and manage users ... the
 permissions should currently be set up so that anyone who makes an account
 can comment on pages, but only people in the asf-cla group (or one of the
 numerous solr-* groups that seem to have been created automatically by the
 Confluence wiki software) can add or edit pages.

http://cwiki.apache.org/SOLRxSITE/

 the only problem is: the groups all have to be managed manually -- they
 aren't auto generated from ASF unix groups or anything like that
 (basically it's done the same as jira, which is kind of nice since it
 lets you use a different email if you want)

 So, where that leaves us is that i (or someone else in the
 confluence-admin group, but no one else in that group is a Lucene'r so
 let's not bug them) need to add people to groups *after* they/you create a
  Confluence account...

 http://cwiki.apache.org/confluence/signup.action

 If you are a Solr committer and/or have a CLA on file with the ASF and
  want to help with Solr documentation, please reply to this thread when you
 make an account (with the account name please), and i'll add you to the
 appropriate groups.


 -Hoss




[jira] Resolved: (SOLR-393) contentType is set twice with conflicting values in raw-schema.jsp

2007-11-06 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au resolved SOLR-393.
--

Resolution: Fixed

patch committed.  Also updated CHANGES.txt.

 contentType is set twice with conflicting values in raw-schema.jsp
 --

 Key: SOLR-393
 URL: https://issues.apache.org/jira/browse/SOLR-393
 Project: Solr
  Issue Type: Bug
  Components: web gui
Reporter: Bill Au
 Attachments: solr-393.patch


 Content type is set twice in raw-schema.jsp with conflicting values, causing 
 a compiling error in Resin:
 <%@ page contentType="text/html; charset=utf-8" pageEncoding="UTF-8"%>
 <%--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
  http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
 --%>
 <%@ page import="org.apache.solr.core.SolrCore,
  org.apache.solr.schema.IndexSchema"%>
 <%@ page import="java.io.InputStreamReader"%>
 <%@ page import="java.io.Reader"%>
 <%@ page contentType="text/plain;charset=UTF-8" language="java" %>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



Re: heading.jsp not used. Can/Should it be removed?

2007-10-26 Thread Bill Au
Thanks, Erik.  I brought this up because I was having compilation issues with
it too.

Bill

On 10/25/07, Erik Hatcher [EMAIL PROTECTED] wrote:

 Done.   That one had just bitten me with compilation issues while
 setting up a new IDE project with IntelliJ 7.

 Erik


 On Oct 25, 2007, at 11:04 AM, Ryan McKinley wrote:

  Bill Au wrote:
  Subject says it all.
 
  +1




[jira] Created: (SOLR-393) contentType is set twice with conflicting values in raw-schema.jsp

2007-10-25 Thread Bill Au (JIRA)
contentType is set twice with conflicting values in raw-schema.jsp
--

 Key: SOLR-393
 URL: https://issues.apache.org/jira/browse/SOLR-393
 Project: Solr
  Issue Type: Bug
  Components: web gui
Reporter: Bill Au


Content type is set twice in raw-schema.jsp with conflicting values, causing a 
compiling error in Resin:

<%@ page contentType="text/html; charset=utf-8" pageEncoding="UTF-8"%>
<%--
 Licensed to the Apache Software Foundation (ASF) under one or more
 contributor license agreements.  See the NOTICE file distributed with
 this work for additional information regarding copyright ownership.
 The ASF licenses this file to You under the Apache License, Version 2.0
 (the "License"); you may not use this file except in compliance with
 the License.  You may obtain a copy of the License at

 http://www.apache.org/licenses/LICENSE-2.0

 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
--%>
<%@ page import="org.apache.solr.core.SolrCore,
 org.apache.solr.schema.IndexSchema"%>
<%@ page import="java.io.InputStreamReader"%>
<%@ page import="java.io.Reader"%>
<%@ page contentType="text/plain;charset=UTF-8" language="java" %>




-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Updated: (SOLR-393) contentType is set twice with conflicting values in raw-schema.jsp

2007-10-25 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-393?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au updated SOLR-393:
-

Attachment: solr-393.patch

The attached patch removes the duplicate contentType setting, keeping the last one:

<%@ page contentType="text/plain;charset=UTF-8" language="java" %>


 contentType is set twice with conflicting values in raw-schema.jsp
 --

 Key: SOLR-393
 URL: https://issues.apache.org/jira/browse/SOLR-393
 Project: Solr
  Issue Type: Bug
  Components: web gui
Reporter: Bill Au
 Attachments: solr-393.patch


 Content type is set twice in raw-schema.jsp with conflicting values, causing 
 a compiling error in Resin:
 <%@ page contentType="text/html; charset=utf-8" pageEncoding="UTF-8"%>
 <%--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at
  http://www.apache.org/licenses/LICENSE-2.0
  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
 --%>
 <%@ page import="org.apache.solr.core.SolrCore,
  org.apache.solr.schema.IndexSchema"%>
 <%@ page import="java.io.InputStreamReader"%>
 <%@ page import="java.io.Reader"%>
 <%@ page contentType="text/plain;charset=UTF-8" language="java" %>

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



heading.jsp not used. Can/Should it be removed?

2007-10-25 Thread Bill Au
Subject says it all.


Bill


[jira] Commented: (SOLR-365) improper handling of user login greater than 8 characters

2007-09-27 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-365?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12530711
 ] 

Bill Au commented on SOLR-365:
--

Paul, what OS are you running?  The who command on Red Hat Linux does not seem 
to have this problem.

 improper handling of user login greater than 8 characters
 -

 Key: SOLR-365
 URL: https://issues.apache.org/jira/browse/SOLR-365
 Project: Solr
  Issue Type: Bug
  Components: replication
 Environment: linux and probably other unix operating systems
Reporter: Paul Sundling
Priority: Minor

 To reproduce, create a user account whose login is more than 8 characters 
 long.  Then try to run a command like snappuller: even though the config is 
 set up properly, it attempts to do a sudo.  The reason is that 2 different 
 methods are used to determine the user, one of which truncates logins to 8 
 characters.
 While user logins used to be limited to 8 characters, this may not be the 
 case on modern UNIX.  
 Here is a snippet I get by adding the -x debug flag to bash.  Note how 
 oldwhoami is a truncated version (psundlin) of the full login (psundling).
 + fixUser
 + [[ -z psundling ]]
 ++ whoami
 + [[ psundling != psundling ]]
 ++ who -m
 ++ cut '-d ' -f1
 ++ sed '-es/^.*!//'
 + oldwhoami=psundlin
 + [[ psundlin == '' ]]
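The truncation seen in the trace above can be checked directly; a hypothetical helper (not from the Solr scripts) that treats an 8-character form as the same login:

```shell
#!/bin/sh
# same_user: succeed when two login names match exactly, or when the second
# is an 8-character truncation of the first (as old `who -m` output can be).
same_user() {
  [ "$1" = "$2" ] && return 0
  [ "$(printf '%.8s' "$1")" = "$2" ] && return 0
  return 1
}

same_user psundling psundlin && echo "treated as the same user"
```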

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Closed: (SOLR-353) default values of UpdateRequest not supported by Solr

2007-09-12 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-353?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au closed SOLR-353.


   Resolution: Fixed
Fix Version/s: 1.3

Patch committed (Committed revision 574920).

 default values of UpdateRequest not supported by Solr
 -

 Key: SOLR-353
 URL: https://issues.apache.org/jira/browse/SOLR-353
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 1.3
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor
 Fix For: 1.3

 Attachments: solr-353.patch


 The default values of allowDups, overwriteCommitted, and overwritePending are 
 all false in UpdateRequest.  This combination is not supported in Solr.  The 
 default should be changed to overwrite = true:
 allowDups = false;
 overwriteCommitted = true;
 overwritePending = true;

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-352) UpdateRequest is duplicating commit and optimize requests

2007-09-11 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-352?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12526463
 ] 

Bill Au commented on SOLR-352:
--

Thanks Ryan.  I got it to work but I had to add

req.setOverwrite(true);

Otherwise, I got an exception from Solr:

org.apache.solr.common.SolrException: unsupported param 
combo:add:,allowDups=false,overwritePending=false,overwriteCommitted=false

unsupported param 
combo:add:,allowDups=false,overwritePending=false,overwriteCommitted=false

request: 
http://cn-ewr1-dev40-pi2.cnet.com:7905/solr/update?commit=true&waitFlush=false&waitSearcher=false&wt=xml&version=2.2

So it looks like the combination of the default values of allowDups, 
overwriteCommitted, and overwritePending (all false) in UpdateRequest is not 
supported by Solr.  Should we change the defaults to something that is 
supported (setting overwrite to true)?  I can open a separate bug and take 
care of that.

 UpdateRequest is duplicating commit and optimize requests
 -

 Key: SOLR-352
 URL: https://issues.apache.org/jira/browse/SOLR-352
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 1.3
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor
 Attachments: solr-352.patch, solr-352.patch


 UpdateRequest current sets both query args and a update XML message in the 
 POST body.  This causes Solr to do two commit/optimize for each 
 commit/optimize request sent in by SolrJ.  I will be attaching a patch to 
 remove the commit/optimize query args.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Commented: (SOLR-335) solrj and ping / PingRequestHandler

2007-09-11 Thread Bill Au (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-335?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12526464
 ] 

Bill Au commented on SOLR-335:
--

Since I am returning a null NamedList in SolrPingResponse, I have also put some 
checks into SolrResponseBase to guard against NPEs:

--- client/java/solrj/src/org/apache/solr/client/solrj/response/SolrResponseBase.java	(revision 574346)
+++ client/java/solrj/src/org/apache/solr/client/solrj/response/SolrResponseBase.java	(working copy)
@@ -63,11 +63,23 @@
   
   // these two methods are based on the logic in 
SolrCore.setResponseHeaderValues(...)
   public int getStatus() {
-    return (Integer) getResponseHeader().get("status");
+    NamedList header = getResponseHeader();
+    if (header != null) {
+      return (Integer) header.get("status");
+    }
+    else {
+      return 0;
+    }
   }
   
   public int getQTime() {
-    return (Integer) getResponseHeader().get("QTime");
+    NamedList header = getResponseHeader();
+    if (header != null) {
+      return (Integer) header.get("QTime");
+    }
+    else {
+      return 0;
+    }
   }
 
   public String getRequestUrl() {


 solrj and ping / PingRequestHandler
 ---

 Key: SOLR-335
 URL: https://issues.apache.org/jira/browse/SOLR-335
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Reporter: Ryan McKinley
Priority: Minor
 Attachments: solr-335-workaround.patch


 Solrj needs to talk to a PingRequestHandler
 see: 
 http://www.nabble.com/-Solrj--Documentation---SolrServer-Ping-tf4246988.html

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Created: (SOLR-353) default values of UpdateRequest not supported by Solr

2007-09-11 Thread Bill Au (JIRA)
default values of UpdateRequest not supported by Solr
-

 Key: SOLR-353
 URL: https://issues.apache.org/jira/browse/SOLR-353
 Project: Solr
  Issue Type: Bug
  Components: clients - java
Affects Versions: 1.3
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor


The default values of allowDups, overwriteCommitted, and overwritePending are 
all false in UpdateRequest.  This combination is not supported in Solr.  The 
default should be changed to overwrite = true:

allowDups = false;
overwriteCommitted = true;
overwritePending = true;


-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


