[ 
https://issues.apache.org/jira/browse/SOLR-7535?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15076420#comment-15076420
 ] 

Joel Bernstein edited comment on SOLR-7535 at 1/2/16 2:52 AM:
--------------------------------------------------------------

A few possible streams:

*PartitionStream*: writes to local disk, partitioning on keys. Used when the 
next stage does not require a re-sort.
*ShuffleStream*: writes to local disk, sorting and partitioning by keys. Used 
when the next stage requires a re-sort.
*HttpStream*: is passed a list of URLs to read a stream from. This would 
read directly from worker nodes. We could simply point directly to the files 
and let Jetty stream the data back. As a bonus, this stream could also be a 
generic way to hook in any HTTP service.
{code}
Step 1:

parallel(partition(innerJoin(search(...), search(...))))

Step 2:

parallel(hashJoin(http(...), search(...)))
{code}

The PartitionStream would return a Tuple, including its node address, when it 
has finished writing the partitions. A little glue code would be needed to 
gather the node addresses from step 1 and kick off step 2; this could be 
written in any language. The SQLHandler will of course perform these steps 
behind the scenes.
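The glue described above could be sketched roughly as follows. This is purely 
illustrative: the {{nodeUrl}} Tuple field, the partition URLs, and the 
{{http(...)}} syntax are invented here for the sketch, since neither 
PartitionStream nor HttpStream exists yet. Plain maps stand in for the Tuples 
that step 1 would actually emit.

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

// Hypothetical glue code: gather the node addresses from the Tuples
// emitted by step 1 and compose the step-2 streaming expression.
// The "nodeUrl" field and the URLs are invented for illustration;
// PartitionStream and http() are proposals, not an existing API.
public class PartitionGlue {

    static String buildStep2(List<Map<String, String>> step1Tuples) {
        // Collect one URL per partition-writing node, comma separated.
        String urls = step1Tuples.stream()
                .map(t -> t.get("nodeUrl"))
                .collect(Collectors.joining(","));
        // Point the hypothetical http() stream at those nodes.
        return "parallel(hashJoin(http(" + urls + "), search(...)))";
    }

    public static void main(String[] args) {
        // Stand-ins for the Tuples the step-1 PartitionStream would return.
        List<Map<String, String>> tuples = List.of(
                Map.of("nodeUrl", "http://node1:8983/partitions/p1"),
                Map.of("nodeUrl", "http://node2:8983/partitions/p2"));
        System.out.println(buildStep2(tuples));
    }
}
```

In practice the step-1 Tuples would be read from the 
parallel(partition(...)) stream via the Streaming API rather than built by 
hand; the maps just keep the sketch self-contained.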




> Add UpdateStream to Streaming API and Streaming Expression
> ----------------------------------------------------------
>
>                 Key: SOLR-7535
>                 URL: https://issues.apache.org/jira/browse/SOLR-7535
>             Project: Solr
>          Issue Type: New Feature
>          Components: clients - java, SolrJ
>            Reporter: Joel Bernstein
>            Priority: Minor
>         Attachments: SOLR-7535.patch, SOLR-7535.patch
>
>
> The ticket adds an UpdateStream implementation to the Streaming API and 
> streaming expressions. The UpdateStream will wrap a TupleStream and send the 
> Tuples it reads to a SolrCloud collection to be indexed.
> This will allow users to pull data from different SolrCloud collections, 
> merge and transform the streams, and send the transformed data to another 
> SolrCloud collection.



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
