[jira] Commented: (SOLR-303) Federated Search over HTTP

2007-07-17 Thread Sharad Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12513167
 ] 

Sharad Agarwal commented on SOLR-303:
-

 Index view consistency between multiple requests requirement is relaxed in 
 this implementation.
Do you have plans to remedy that? Or do you think that most people are OK 
with inconsistencies that could arise?
The thing to note here is that the current multi-phase execution is based on 
document unique key fields, NOT on internal doc ids. Since it does not depend 
on changing internal doc ids, there won't be many inconsistencies between 
requests.
One possibility is that a particular document has been deleted by the time the 
second phase executes, which in my opinion is OK to live with.
The other possibility is that the document has changed and the original query 
terms are no longer present in it. This can be solved by ANDing the original 
query with the unique-key field query.
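To make that concrete, the second-phase query could be built by ANDing the original query with a disjunction of unique-key lookups; a rough Python sketch (the function name and query syntax are illustrative, not actual Solr code):

```python
def phase2_query(original_q, unique_key, key_values):
    """Build the second-phase query: restrict to the merged doc keys,
    but AND in the original query so documents that changed between
    phases (and no longer match) simply drop out."""
    ids = " OR ".join('%s:"%s"' % (unique_key, v) for v in key_values)
    return "(%s) AND (%s)" % (original_q, ids)
```

For example, phase2_query("title:solr", "id", ["1", "2"]) yields (title:solr) AND (id:"1" OR id:"2").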

If people think it is really crucial to have index view consistency, then it 
should be easy to implement "Consistency via Retry" as mentioned in 
http://wiki.apache.org/solr/FederatedSearch 
"Consistency via specifying Index version" would be a little more involved. 
Session management with sticky load balancers could be explored.

It might also be the case that a custom partitioning function could be 
implemented (such as improving caching by partitioning queries, etc) or it 
may be more efficient to do the second phase of a query on the same shard 
copy as the first phase.
In that case it might make sense to do the load balancing across shards from 
within Solr. 
For the second phase of a query to execute on the same shard copy, third-party 
sticky load balancers can be used. I believe Apache already does that. All 
copies of a single partition can sit behind the Apache load balancer (which 
handles the stickiness). The merger just needs to know the load balancer 
ip/port for each partition; then, based on the query, the merger can search 
only the appropriate partitions.
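As a sketch of that topology (hostnames and paths are made up): the merger only needs a static map from partition to its sticky load balancer, and fans the query out accordingly:

```python
# Hypothetical wiring: one sticky load balancer address per partition;
# replica selection (and phase-1/phase-2 stickiness) happens behind it.
PARTITION_LB = {
    "partition-0": "lb0.example.com:8983",
    "partition-1": "lb1.example.com:8983",
}

def shard_urls(partitions):
    """Return the search URLs the merger fans a query out to,
    one per relevant partition."""
    return ["http://%s/solr/select" % PARTITION_LB[p] for p in partitions]
```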

To improve caching, Solr itself has to do the load balancing. Another option 
would be to introduce a query result cache at the merger itself.

Where are terms extracted from (some queries require index access)? This 
should be delegated to the shards, no? It can be the same step that gets the 
docFreqs from the shards (pass the query, *not* the terms). 
Yes, if that's the case, it should be easy to implement as you have suggested.

I think we should base the solution on something like 
https://issues.apache.org/jira/browse/SOLR-281 
cool, I was looking for something like this. This looks like the way to go.

Any thoughts on RMI vs HTTP for the searcher-subsearcher interface? 
RMI could be supported as an option by enhancing the ResponseParser (better 
name??) interface. The remote search server can directly return the 
SolrQueryResponse object. I understand that there would be some performance 
benefit from native Java marshalling/unmarshalling of the object, instead of 
writing out a Solr response and then parsing it (as the HTTP way does). The 
question we need to answer is: is the effort/complexity worth it?

In our organization we made a conscious decision to go with HTTP. The 
operations folks like HTTP as it is standard stuff: load balancing, 
monitoring, etc. Lots of tools are already available for it. With RMI, I am 
not sure external sticky load balancing is possible; the merger itself would 
have to build that logic.
Moreover, I think HTTP fits more naturally with Solr's request handler model.





 Federated Search over HTTP
 --

 Key: SOLR-303
 URL: https://issues.apache.org/jira/browse/SOLR-303
 Project: Solr
  Issue Type: New Feature
  Components: search
Reporter: Sharad Agarwal
Priority: Minor
 Attachments: fedsearch.patch


 Motivated by http://wiki.apache.org/solr/FederatedSearch
 Index view consistency between multiple requests requirement is relaxed in 
 this implementation.
 This does the query side of federated search. Updates are not yet done.
 Tries to achieve:
 
 - The client applications are totally agnostic to federated search. The 
 federated search and merging of results happen entirely behind the scenes in 
 Solr, in the request handler. The response format remains the same after 
 merging of results.
 The response from each individual shard is deserialized into a 
 SolrQueryResponse object. The collection of SolrQueryResponse objects is 
 merged to produce a single SolrQueryResponse object. This makes it possible 
 to use the response writers as-is, or with minimal change.
 - Efficient query processing, with highlighting and fields generated only 
 for the merged documents. The query is executed in 2 phases. The first phase 
 gets the doc unique keys with sort criteria. The second phase brings back all 
 requested fields and highlighting information. This saves 
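The two-phase flow described above can be sketched roughly as follows (Python for brevity; search_ids and fetch_doc stand in for the per-shard HTTP requests and are not real Solr APIs):

```python
def federated_search(shards, query, rows):
    """Two-phase federated query: phase 1 collects only (unique key,
    sort value) pairs from every shard, the merger sorts globally,
    and phase 2 fetches full fields for just the merged top docs."""
    phase1 = []
    for shard in shards:
        for key, sort_val in shard.search_ids(query, rows):
            phase1.append((sort_val, key, shard))
    # Merge on the sort criteria (ascending here) and keep the global top N.
    phase1.sort(key=lambda t: t[0])
    top = phase1[:rows]
    # Phase 2: only the globally merged docs pay for stored fields
    # and highlighting.
    return [shard.fetch_doc(query, key) for _, key, shard in top]
```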

[jira] Issue Comment Edited: (SOLR-303) Federated Search over HTTP

2007-07-17 Thread Sharad Agarwal (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-303?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12513167
 ] 

Sharad Agarwal edited comment on SOLR-303 at 7/17/07 12:27 AM:
---


Build failed in Hudson: Solr-Nightly #145

2007-07-17 Thread hudson
See http://lucene.zones.apache.org:8080/hudson/job/Solr-Nightly/145/changes

Changes:

[ryan] exposing the CommonsHttpSolrServer invariant params.

--

[jira] Resolved: (SOLR-294) scripts fail to log elapsed time on Solaris

2007-07-17 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-294?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au resolved SOLR-294.
--

Resolution: Fixed

Patch has been committed.

 scripts fail to log elapsed time on Solaris
 ---

 Key: SOLR-294
 URL: https://issues.apache.org/jira/browse/SOLR-294
 Project: Solr
  Issue Type: Bug
  Components: replication
 Environment: Solaris
Reporter: Bill Au
Assignee: Bill Au
Priority: Minor
 Attachments: solr-294.patch


 The code in the scripts to determine the elapsed time does not work on 
 Solaris because the date command there does not support the %s output format.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.



[jira] Resolved: (SOLR-136) snappuller - date -d and locales don't mix

2007-07-17 Thread Bill Au (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-136?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Bill Au resolved SOLR-136.
--

Resolution: Fixed

Patch committed. Thanks Jürgen.

 snappuller - date -d and locales don't mix
 

 Key: SOLR-136
 URL: https://issues.apache.org/jira/browse/SOLR-136
 Project: Solr
  Issue Type: Bug
  Components: replication
 Environment: SuSE 9.1
Reporter: Jürgen Hermann
 Fix For: 1.1.0


 In snappuller, the output of $(date) is fed back into date -d, which 
 doesn't work in some (non-US) locales:
   date -d "$(date)"
 date: ungültiges Datum "Fr Feb  2 13:39:04 CET 2007"
   date -d "$(date +'%Y-%m-%d %H:%M:%S')"
 Fr Feb  2 13:39:10 CET 2007
 This is the fix:
 --- snappuller  (revision 1038)
 +++ snappuller  (working copy)
 @@ -214,7 +214,7 @@
  ssh -o StrictHostKeyChecking=no ${master_host} mkdir -p ${master_status_dir}
  # start new distribution stats
 -rsyncStart=`date`
 +rsyncStart=`date +'%Y-%m-%d %H:%M:%S'`
  startTimestamp=`date -d "$rsyncStart" +'%Y%m%d-%H%M%S'`
  rsyncStartSec=`date -d "$rsyncStart" +'%s'`
  startStatus="rsync of `basename ${name}` started:$startTimestamp"
 @@ -226,7 +226,7 @@
  ${stats} rsync://${master_host}:${rsyncd_port}/solr/${name}/ ${data_dir}/${name}-wip
  rc=$?
 -rsyncEnd=`date`
 +rsyncEnd=`date +'%Y-%m-%d %H:%M:%S'`
  endTimestamp=`date -d "$rsyncEnd" +'%Y%m%d-%H%M%S'`
  rsyncEndSec=`date -d "$rsyncEnd" +'%s'`
  elapsed=`expr $rsyncEndSec - $rsyncStartSec`




[jira] Commented: (SOLR-306) copyFields with a dynamic destination should support static source

2007-07-17 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12513298
 ] 

Yonik Seeley commented on SOLR-306:
---

Given that this looks like very simple syntactic sugar:
  source="name" dest="text_*"
is the same as
  source="name" dest="text_name"

It seems like this should be handled at schema-parse time (and not as a 
dynamic copyField).
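Treated as parse-time sugar, the expansion is a one-liner; an illustrative sketch (not the actual schema-parsing code):

```python
def expand_copyfield(source, dest):
    """Resolve a glob destination against a static source at
    schema-parse time: source "name" + dest "text_*" -> "text_name".
    Leaves the destination alone when it isn't this sugar case."""
    if dest.endswith("*") and "*" not in source:
        return dest[:-1] + source
    return dest
```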


 copyFields with a dynamic destination should support static source
 --

 Key: SOLR-306
 URL: https://issues.apache.org/jira/browse/SOLR-306
 Project: Solr
  Issue Type: Improvement
Reporter: Ryan McKinley
Priority: Minor
 Fix For: 1.3

 Attachments: SOLR-306-CopyField.patch


 In SOLR-226, we added support for copy fields with a dynamic destination.
 We should also support dynamic copyFields with a static source:
  <copyField source="name" dest="text_*" /> 
 This will copy the contents of "name" to "text_name"




[jira] Commented: (SOLR-306) copyFields with a dynamic destination should support static source

2007-07-17 Thread Hoss Man (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12513310
 ] 

Hoss Man commented on SOLR-306:
---

i'm a little worried that the syntax as described would give people the wrong 
impression. the syntax added in SOLR-226 was fairly clear because there was 
a one-to-one mapping between the * in the source and the * in the dest ... 
there is a lot more ambiguity here.

if <copyField source="foo_*" dest="bar" /> will copy from all incoming fields 
starting with foo_ to the field bar, people might reasonably assume that 
<copyField source="bar" dest="foo_*" /> will copy from the field bar to ALL 
fields starting with foo_ ... where different people might have different 
assumptions about what ALL means ... someone might assume Solr will map it to 
every explicitly created field matching the pattern; others might assume it 
will map to every dynamicField that is based on a suffix (ie: if there are 
dynamic fields for *_string and *_text then a copyField with a dest of foo_* 
should create foo_string and foo_text)

...the point being: it could get very confusing.  it would probably be better 
to do this using a different tag name / syntax, to help it stand out as 
functionally different ... perhaps this is a good opportunity to do true 
regex-based matching...

  <regexCopyField source="(name)" dest="text_$1" />
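An illustrative sketch of how such a regexCopyField rule could resolve destinations (Python re; not actual Solr code, and the $1 translation is an assumption about the proposed syntax):

```python
import re

def regex_copy_dest(source_pattern, dest_template, field):
    """Return the destination field name for `field` under a
    regexCopyField-style rule, or None if the field doesn't match."""
    m = re.fullmatch(source_pattern, field)
    if m is None:
        return None
    # Translate $1-style group references into re's \1 escapes.
    return m.expand(re.sub(r"\$(\d+)", r"\\\1", dest_template))
```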





[jira] Resolved: (SOLR-306) copyFields with a dynamic destination should support static source

2007-07-17 Thread Ryan McKinley (JIRA)

 [ 
https://issues.apache.org/jira/browse/SOLR-306?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel
 ]

Ryan McKinley resolved SOLR-306.


Resolution: Invalid




[jira] Commented: (SOLR-306) copyFields with a dynamic destination should support static source

2007-07-17 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12513316
 ] 

Yonik Seeley commented on SOLR-306:
---

Of course if the source isn't dynamic, there really isn't a reason for this 
feature at all.




[jira] Commented: (SOLR-306) copyFields with a dynamic destination should support static source

2007-07-17 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-306?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12513320
 ] 

Yonik Seeley commented on SOLR-306:
---

 maybe a regex version could be started later

Don't hurry... I think that's moving in the wrong direction for good 
performance ;-)
http://www.nabble.com/NO_NORMS-and-TOKENIZED--tf3064721.html#a9047635




Re: [jira] Commented: (SOLR-306) copyFields with a dynamic destination should support static source

2007-07-17 Thread Chris Hostetter

:  maybe a regex version could be started later
:
: Don't hurry... I think that's moving in the wrong direction for good 
performance ;-)
: http://www.nabble.com/NO_NORMS-and-TOKENIZED--tf3064721.html#a9047635

i'd completely forgotten about that thread (was it just Feb? it seems like
a year ago)  ... a regex based approach would certainly be slower than
the prefix/suffix mechanism we have now ... but i'm guessing for a lot of
use cases it would be fine -- it's also the kind of thing where we could
document that the expressions are evaluated in order -- putting the onus
on the schema creator to order them based on the likelihood of matching.
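For example, an ordered first-match-wins rule list (illustrative only; the patterns and destinations are made up):

```python
import re

# Ordered (pattern, destination-template) rules, evaluated top-down;
# first match wins, so the schema author puts the likeliest first.
COPY_RULES = [
    (re.compile(r"(.*)_txt"), r"text_\1"),
    (re.compile(r"(.*)_s"), r"string_\1"),
]

def copy_dest(field):
    """Resolve a field's copy destination, or None if no rule matches."""
    for pattern, dest in COPY_RULES:
        m = pattern.fullmatch(field)
        if m:
            return m.expand(dest)
    return None
```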



-Hoss



[jira] Commented: (SOLR-139) Support updateable/modifiable documents

2007-07-17 Thread Yonik Seeley (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-139?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12513355
 ] 

Yonik Seeley commented on SOLR-139:
---

FYI, I'm working on a loadStoredFields() for UpdateHandler now.

 Support updateable/modifiable documents
 ---

 Key: SOLR-139
 URL: https://issues.apache.org/jira/browse/SOLR-139
 Project: Solr
  Issue Type: Improvement
  Components: update
Reporter: Ryan McKinley
Assignee: Ryan McKinley
 Attachments: SOLR-139-IndexDocumentCommand.patch, 
 SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, 
 SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, 
 SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, 
 SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, 
 SOLR-139-IndexDocumentCommand.patch, SOLR-139-IndexDocumentCommand.patch, 
 SOLR-139-ModifyInputDocuments.patch, SOLR-139-ModifyInputDocuments.patch, 
 SOLR-139-XmlUpdater.patch, 
 SOLR-269+139-ModifiableDocumentUpdateProcessor.patch


 It would be nice to be able to update some fields on a document without 
 having to insert the entire document.
 Given the way Lucene is structured, (for now) one can only modify stored 
 fields.
 While we are at it, we can support incrementing an existing value - I think 
 this only makes sense for numbers.
 for background, see:
 http://www.nabble.com/loading-many-documents-by-ID-tf3145666.html#a8722293
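A sketch of the modify-plus-increment semantics over a document rebuilt from its stored fields (illustrative only, not the patch's actual API; "inc" assumes numeric fields, as the description suggests):

```python
def modify_document(stored_doc, updates):
    """Apply partial updates to a document reconstructed from its
    stored fields; op is "set" for replacement or "inc" to add to a
    numeric field (missing fields start at 0)."""
    doc = dict(stored_doc)
    for field, (op, value) in updates.items():
        if op == "set":
            doc[field] = value
        elif op == "inc":
            doc[field] = doc.get(field, 0) + value
        else:
            raise ValueError("unknown op: %s" % op)
    return doc
```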




[jira] Commented: (SOLR-304) Dynamic fields cause IsValidUpdateIndexDocument to fail

2007-07-17 Thread Jeff Rodenburg (JIRA)

[ 
https://issues.apache.org/jira/browse/SOLR-304?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12513436
 ] 

Jeff Rodenburg commented on SOLR-304:
-

Support for evaluation of dynamic fields in IsValidUpdateIndexDocument has been 
added to the source. The code revision may be obtained from 
http://solrstuff.org/svn/solrsharp.

 Dynamic fields cause IsValidUpdateIndexDocument to fail
 ---

 Key: SOLR-304
 URL: https://issues.apache.org/jira/browse/SOLR-304
 Project: Solr
  Issue Type: Bug
  Components: clients - C#
Affects Versions: 1.2
Reporter: Jeff Rodenburg
Assignee: Jeff Rodenburg

 I am using solrsharp-1.2-07082007 - I have a dynamicField declared in my 
 schema.xml file as
  <dynamicField name="*_demo" type="text_ws" indexed="true" stored="true"/>
 but, if I try to add a field using my vb.net application
  doc.Add("id_demo", s)
 where s is a string value, the document fails
  solrSearcher.SolrSchema.IsValidUpdateIndexDocument(doc)
 MS
