Hi,
Now I need to upgrade my Spark cluster from version 1.1.0 to 1.2.1. Is
there a convenient way to do this, something like `./start-dfs.sh -upgrade`
in Hadoop?
Best wishes,
Thanks
--
qiaou
Thanks for your reply and patience.
Best regards
--
qiaou
On Wednesday, November 12, 2014, at 3:45 PM, Shixiong Zhu wrote:
> The `conf` object will be sent to other nodes via Broadcast.
>
> Here is the scaladoc of Broadcast:
> http://spark.apa
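
For readers following the thread, a minimal, self-contained sketch of what
"sent to other nodes via Broadcast" means in practice; the app name and the
lookup values here are made up for illustration:

    import org.apache.spark.{SparkConf, SparkContext}

    object BroadcastDemo {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(
          new SparkConf().setAppName("broadcast-demo").setMaster("local[2]"))

        // A broadcast variable is serialized once on the driver and cached
        // on each executor, rather than re-shipped inside every task
        // closure. Spark applies the same mechanism internally to the
        // Hadoop `conf` object attached to an RDD.
        val lookup = sc.broadcast(Map("a" -> 1, "b" -> 2))

        val total = sc.parallelize(Seq("a", "b", "a"))
          .map(k => lookup.value.getOrElse(k, 0)) // read-only on executors
          .sum()

        println(total) // 4.0
        sc.stop()
      }
    }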
This works! But can you explain why it should be used like this?
--
qiaou
On Wednesday, November 12, 2014, at 3:18 PM, Shixiong Zhu wrote:
> You need to create a new configuration for each RDD. Therefore, "val
> hbaseConf = HBaseConfigUtil.getHBaseConfiguration(…)"
  val generateRdd = sc.newAPIHadoopRDD(hbaseConf, classOf[TableInputFormat],
    classOf[ImmutableBytesWritable], classOf[Result]).map {
    case (_: ImmutableBytesWritable, result: Result) => result
  }
  return generateRdd
}
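
For reference, a minimal sketch of the pattern being suggested: build a
fresh Configuration inside hbaseQuery so that each RDD carries its own conf
object. The thread's HBaseConfigUtil is not shown anywhere, so
HBaseConfiguration.create() stands in for it here, and passing the
SparkContext as a parameter is likewise an assumption:

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.hadoop.hbase.client.Result
    import org.apache.hadoop.hbase.io.ImmutableBytesWritable
    import org.apache.hadoop.hbase.mapreduce.TableInputFormat
    import org.apache.spark.SparkContext
    import org.apache.spark.rdd.RDD

    // A new Configuration per call, so every RDD gets its own copy.
    // Sharing one mutable conf across several hbaseQuery calls can leak
    // settings (e.g. the input table) from one RDD into another.
    def hbaseQuery(sc: SparkContext, table: String): RDD[Result] = {
      val hbaseConf = HBaseConfiguration.create()
      hbaseConf.set(TableInputFormat.INPUT_TABLE, table) // e.g. "aa" or "bb"
      sc.newAPIHadoopRDD(hbaseConf, classOf[TableInputFormat],
        classOf[ImmutableBytesWritable], classOf[Result])
        .map { case (_, result) => result }
    }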
--
qiaou
(hbaseQuery("aa").collect.toList :::
hbaseQuery("bb").collect.toList) returns the right value.
Obviously I have an action after my transformation, but why did it not work?
FYI
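
As a note on the terminology here: collect() is the action, and it
materializes the RDD into a local Scala List on the driver. A short sketch,
reusing the hypothetical hbaseQuery signature from above:

    // After collect(), Spark is out of the picture: ::: is ordinary List
    // concatenation on the driver, not a Spark transformation, so there
    // is nothing left for Spark to evaluate.
    val aa = hbaseQuery(sc, "aa").collect().toList
    val bb = hbaseQuery(sc, "bb").collect().toList
    val combined = aa ::: bb // plain driver-side List[Result]
    println(combined.size)   // local operation, no Spark job

Note that on a plain List, Scala's count takes a predicate, so
combined.size (or combined.count(_ => true)) is the local equivalent of
counting the results.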
--
qiaou
(hbaseQuery("aa").collect.toList :::
hbaseQuery("bb").collect.toList).count() returns the right value.
Obviously I have an action after my transformation, but why did it not work?
FYI
--
qiaou