Seems good to me. BTW, it's fine to use MEMORY_ONLY (i.e. without replication)
for testing, but you should turn on replication if you want
fault tolerance.
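A minimal sketch of the difference, assuming an existing StreamingContext `ssc` and a socket source (`host` and `port` are placeholder values, not from the thread):

```scala
import org.apache.spark.storage.StorageLevel

// For local testing, an unreplicated storage level is fine:
val lines = ssc.socketTextStream(host, port, StorageLevel.MEMORY_ONLY)

// For fault tolerance, use a replicated level: the `_2` suffix keeps two
// copies of each received block on different executors, so data received
// before the receiver fails can still be reprocessed.
val replicated = ssc.socketTextStream(host, port, StorageLevel.MEMORY_ONLY_SER_2)
```

The replicated serialized levels (`MEMORY_ONLY_SER_2`, `MEMORY_AND_DISK_SER_2`) trade some CPU for lower memory use and a second copy of the data.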
TD
On Mon, Feb 3, 2014 at 3:19 PM, Eduardo Costa Alfaia e.costaalf...@unibs.it
wrote:
Hi Tathagata,
You were right when you said
Hi Tathagata
I am playing with NetworkWordCount.scala; I made some changes like this (in red):
// Create the context with a 1 second batch size
val ssc = new StreamingContext(args(0), "NetworkWordCount", Seconds(1),
  System.getenv("SPARK_HOME"),
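For context, the snippet above is cut off mid-call. A sketch of what the full call looks like, based on the 0.9-era `StreamingContext(master, appName, batchDuration, sparkHome, jars)` constructor (the quoted string literals are the key fix, since the app name and the env-var name must be strings):

```scala
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._

// args(0) = master URL, args(1) = hostname, args(2) = port
val ssc = new StreamingContext(args(0), "NetworkWordCount", Seconds(1),
  System.getenv("SPARK_HOME"), StreamingContext.jarOfClass(this.getClass))

// Count words in text received from the socket every batch
val lines = ssc.socketTextStream(args(1), args(2).toInt)
val words = lines.flatMap(_.split(" "))
val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)
wordCounts.print()

ssc.start()
ssc.awaitTermination()
```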
Hey Henry, this makes sense. I’d like to add that one other vehicle for
discussion has been JIRA at https://spark-project.atlassian.net/browse/SPARK.
Right now the dev list is not subscribed to JIRA, but we’d be happy to
subscribe it anytime if that helps. We were hoping to do this only when
Hi Everyone,
In an effort to coordinate development amongst the growing list of
Spark contributors, I've taken some time to write up a proposal to
formalize various pieces of the development process. The next release
of Spark will likely be Spark 1.0.0, so this message is intended in
part to
Looks good.
One question and one comment:
How are Alpha components and higher-level libraries, which may add small
features within a maintenance release, going to be marked with that status?
Somewhere within the code itself, or just as some kind of external
reference?
I would strongly
How are Alpha components and higher-level libraries, which may add small
features within a maintenance release, going to be marked with that status?
Somewhere within the code itself, or just as some kind of external
reference?
I think we'd mark alpha features as such in the
Yup, the intended merge level is just a hint, the responsibility still lies
with the committers. It can be a helpful hint, though.
On Wed, Feb 5, 2014 at 4:55 PM, Patrick Wendell pwend...@gmail.com wrote:
How are Alpha components and higher level libraries which may add small
features
I would even take it further, when it comes to PRs:
- any PR needs to reference a JIRA
- the PR should be rebased before submitting, to avoid merge commits
- as Patrick said: require squashed commits
/heiko
Am 06.02.2014 um 01:39 schrieb Mark Hamstra m...@clearstorydata.com:
I would
+1 on time boxed releases and compatibility guidelines
Am 06.02.2014 um 01:20 schrieb Patrick Wendell pwend...@gmail.com:
Hi Everyone,
In an effort to coordinate development amongst the growing list of
Spark contributors, I've taken some time to write up a proposal to
formalize various
Agree on timeboxed releases as well.
Is there a vision for where we want to be as a project before declaring the
first 1.0 release? While we're in the 0.x days, semver lets us break
backwards compatibility at will (though we try to avoid it where possible), and that
luxury goes away with 1.x. I just don't
+1 for 0.10.0 now with the option to switch to 1.0.0 after further
discussion.
On Feb 5, 2014 9:53 PM, Andrew Ash and...@andrewash.com wrote:
Agree on timeboxed releases as well.
Is there a vision for where we want to be as a project before declaring the
first 1.0 release? While we're in the
If people feel that merging the intermediate SNAPSHOT number is
significant, let's just defer merging that until this discussion
concludes.
That said - the decision to settle on 1.0 for the next release is not
just because it happens to come after 0.9. It's a conscious
decision based on the