GitHub user brkyvz opened a pull request:

    https://github.com/apache/spark/pull/12080

    [SPARK-14287] isStreaming method for Dataset

    With the addition of StreamExecution (ContinuousQuery) to Datasets, the data
    backing a Dataset may be unbounded. On unbounded data, some methods and
    operations no longer make sense, e.g. `Dataset.count()`.
    
    A simple API is needed to check whether the data in a Dataset is bounded or
    unbounded, so that users can tell whether their Dataset is in streaming mode.
    For example, an ML algorithm could check whether the data is unbounded and
    throw an exception, as in the sketch below.
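
    A rough Scala sketch of how such a check could be used (the `isStreaming` name
    is the one proposed in this PR; the `requireBounded` helper, its message, and
    the surrounding object are purely illustrative):

        import org.apache.spark.sql.Dataset

        object BoundedCheck {
          // Hypothetical guard an ML algorithm could add once Dataset.isStreaming
          // exists: refuse to run on unbounded (streaming) data.
          def requireBounded(ds: Dataset[_], op: String): Unit = {
            if (ds.isStreaming) {
              throw new UnsupportedOperationException(
                s"$op requires bounded data, but the given Dataset is streaming")
            }
          }
        }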
    
    The implementation of this method is simple; the challenge is naming it. Some
    possible names are:
     - isStreaming
     - isContinuous
     - isBounded
     - isUnbounded
    
    I've gone with `isStreaming` for now. We can change it before Spark 2.0 if we
    decide on a different name; for that reason I've marked it as `@Experimental`.
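
    For reference, the method body can be very small. A sketch in Scala, written
    against a standalone helper and assuming streaming sources show up as
    `StreamingRelation` nodes in the logical plan (the actual code in this PR may
    differ):

        import org.apache.spark.sql.Dataset
        import org.apache.spark.sql.execution.streaming.StreamingRelation

        object IsStreamingSketch {
          // Hypothetical stand-in for the proposed Dataset.isStreaming: walk the
          // logical plan and report whether any node is a streaming source.
          def isStreaming(ds: Dataset[_]): Boolean =
            ds.queryExecution.logical.find(_.isInstanceOf[StreamingRelation]).isDefined
        }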


You can merge this pull request into a Git repository by running:

    $ git pull https://github.com/brkyvz/spark is-streaming

Alternatively you can review and apply these changes as the patch at:

    https://github.com/apache/spark/pull/12080.patch

To close this pull request, make a commit to your master/trunk branch
with (at least) the following in the commit message:

    This closes #12080
    
----
commit 7459a3c7293e2659aaf87485d4a937bae9fdd384
Author: Burak Yavuz <[email protected]>
Date:   2016-03-31T03:56:29Z

    added isStreaming method to Dataset

----


