[ https://issues.apache.org/jira/browse/SOLR-13077?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Michael Gibney updated SOLR-13077:
----------------------------------
    Description: 
{{TokenStreamComponents}} for {{PreAnalyzedField}} is currently recreated from 
scratch for every field value.

This is necessary at the moment because the current implementation has no a 
priori knowledge of the schema or TokenStream that it is deserializing: 
Attributes are implicit in the serialized token stream, and token Attributes 
are instantiated lazily in {{incrementToken()}}.
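
To make the lazy behavior concrete, here is a minimal sketch with plain-JDK 
stand-ins ({{LazyDeserializingStream}} and its string-keyed attribute map are 
illustrative inventions, not the actual {{PreAnalyzedField}} parser API): an 
attribute only comes into existence on the stream once the serialized input 
happens to mention it.

```java
import java.util.HashMap;
import java.util.Map;

// Simplified model of lazy attribute instantiation: the deserializer has
// no schema, so an attribute is only registered the first time it appears
// in a serialized token. (Illustrative stand-in, not PreAnalyzedField code.)
class LazyDeserializingStream {
    final Map<String, Object> attributes = new HashMap<>();

    // One serialized token, e.g. parsed from the pre-analyzed JSON/text form.
    void incrementToken(Map<String, ?> serializedToken) {
        // Attributes are created on demand: if this stream never sees a
        // token carrying a payload, no PayloadAttribute ever exists on it.
        attributes.putAll(serializedToken);
    }
}
```

The upshot is that two streams deserializing values of the same field type can 
end up with different attribute sets, which is exactly what makes reuse unsafe.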

Reuse of {{TokenStreamComponents}} with the current implementation would at a 
minimum cause problems at index time, when Attributes are cached in indexing 
components (e.g., {{FieldInvertState}}), keyed per {{AttributeSource}}. For 
instance, if the first field encountered has no value specified for 
{{PayloadAttribute}}, a {{null}} value would be cached for that 
{{PayloadAttribute}} for the corresponding {{AttributeSource}}. If that 
{{AttributeSource}} were to be reused for a field that _does_ specify a 
{{PayloadAttribute}}, indexing components would "consult" the cached {{null}} 
value, and the payload (and all subsequent payloads) would be silently ignored 
(not indexed).
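
The failure mode above can be sketched with simplified stand-ins for Lucene's 
classes ({{MockAttributeSource}}, {{MockInvertState}}, and the per-source cache 
are modeled here with plain JDK types; none of this is the actual Lucene API):

```java
import java.util.HashMap;
import java.util.IdentityHashMap;
import java.util.Map;

// Stand-in for Lucene's AttributeSource: attributes registered lazily,
// so the attribute set depends on what this source has seen so far.
class MockAttributeSource {
    final Map<String, Object> attributes = new HashMap<>();

    Object getAttribute(String name) {   // null if never registered
        return attributes.get(name);
    }
}

// Stand-in for an indexing component (e.g. FieldInvertState) that caches
// the attribute lookup once per AttributeSource instance.
class MockInvertState {
    final Map<MockAttributeSource, Object> payloadAttCache = new IdentityHashMap<>();

    Object payloadAtt(MockAttributeSource src) {
        // The cache also "remembers" a null result, which is the hazard:
        // a payload-less first field poisons the entry for this source.
        if (!payloadAttCache.containsKey(src)) {
            payloadAttCache.put(src, src.getAttribute("PayloadAttribute"));
        }
        return payloadAttCache.get(src);
    }
}
```

With these stand-ins, the scenario plays out as described: the state caches 
{{null}} for the first (payload-less) source, and a later lookup on the same 
reused source still returns {{null}} even after a payload attribute appears, 
so the payload is silently dropped.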

This is not exactly _broken_ currently, but I gather it's an unorthodox 
implementation of {{TokenStream}}, and the current workaround of disabling 
{{TokenStreamComponents}} reuse necessarily adds to object creation and GC 
overhead.

For reference (and see LUCENE-8610), the [TokenStream 
API|https://lucene.apache.org/core/7_5_0/core/org/apache/lucene/analysis/TokenStream.html] says:
{quote}To make sure that filters and consumers know which attributes are 
available, the attributes must be added during instantiation.
{quote}
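
The orthodox pattern implied by that contract can be sketched as follows 
(again with a simplified stand-in, {{EagerTokenStream}}, rather than the real 
Lucene classes): every attribute the stream may ever populate is registered at 
construction time, so the attribute set is identical across reuses and any 
per-source caches remain valid.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the quoted contract: attributes are added during instantiation,
// never lazily inside incrementToken(). (Illustrative stand-in only.)
class EagerTokenStream {
    final Map<String, Object> attributes = new HashMap<>();

    EagerTokenStream() {
        // Declared up front: present (possibly empty) even if the first
        // field value carries no payloads.
        attributes.put("CharTermAttribute", "");
        attributes.put("PayloadAttribute", new byte[0]);
    }

    boolean incrementToken(String term, byte[] payload) {
        // incrementToken() only fills in values; it never changes which
        // attributes exist, so reuse cannot alter the attribute set.
        attributes.put("CharTermAttribute", term);
        attributes.put("PayloadAttribute", payload == null ? new byte[0] : payload);
        return true;
    }
}
```

For {{PreAnalyzedField}} this would presumably mean deriving the full 
attribute set from the schema (or declaring it up front) rather than 
discovering it token by token.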



> PreAnalyzedField TokenStreamComponents should be reusable
> ---------------------------------------------------------
>
>                 Key: SOLR-13077
>                 URL: https://issues.apache.org/jira/browse/SOLR-13077
>             Project: Solr
>          Issue Type: Improvement
>      Security Level: Public(Default Security Level. Issues are Public) 
>          Components: Schema and Analysis
>            Reporter: Michael Gibney
>            Priority: Minor
>



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)

---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
