I've noticed this new feature of 4.0: Streaming optimizations
(https://cassandra.apache.org/blog/2018/08/07/faster_streaming_in_cassandra.html)
Does this mean that we could have much higher data density with Cassandra 4.0
(fewer problems than 3.x)? I mean > 10 TB of data on each node without
Messenger can allow for some losses in degenerate infra cases, given a
fixed infra footprint. It also gives some ability to scale up faster as
demand increases, during peak loads, etc. It therefore becomes a use-case-specific
optimization. Also, HBase can run in Hadoop more easily, leveraging blobs
(HDFS),
Hi Vitaliy,
That method
(https://docs.datastax.com/en/latest-java-driver-api/com/datastax/driver/core/ExecutionInfo.html#getAchievedConsistencyLevel--)
is a bit confusing, as it will return null when your desired
consistency level is achieved:
> If the query returned without achieving the
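To make that null semantics concrete, here is a minimal self-contained sketch (the helper below is hypothetical, not part of the driver): `getAchievedConsistencyLevel()` returns null when the query completed at the level you originally requested, and returns a non-null value only when a retry policy (e.g. `DowngradingConsistencyRetryPolicy`) actually downgraded it.

```java
// Sketch of how to interpret ExecutionInfo.getAchievedConsistencyLevel()
// from the DataStax Java driver. The effectiveLevel() helper is a
// hypothetical wrapper, not a driver API.
public class AchievedConsistency {

    /** Returns the consistency level the query actually ran at. */
    static String effectiveLevel(String achieved, String requested) {
        // null means "no downgrade happened": the requested level was achieved.
        return achieved == null ? requested : achieved;
    }

    public static void main(String[] args) {
        // Normal case: the driver reports null, i.e. QUORUM was achieved.
        System.out.println(effectiveLevel(null, "QUORUM"));

        // Downgrade case: a retry policy fell back to ONE.
        System.out.println(effectiveLevel("ONE", "QUORUM"));
    }
}
```

In real driver code, `achieved` would come from `resultSet.getExecutionInfo().getAchievedConsistencyLevel()` and `requested` from the statement's configured consistency level.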
Nope, the Spark Cassandra connector leverages data locality and gets tremendous
improvements from that locality.
- Affan
On Sat, Aug 25, 2018 at 11:25 AM CharSyam wrote:
> Spark can read hdfs directly so locality is important but Spark can't read
> Cassandra data directly it can only connect by
Spark can read HDFS directly, so locality is important, but Spark can't read
Cassandra data directly; it can only connect via the API. So I think you don't
need to install them on the same node.
On Sat, Aug 25, 2018 at 3:16 PM, Affan Syed wrote:
> Tobias,
>
> This is very interesting. Can I inquire a bit more on why
Tobias,
This is very interesting. Can I ask a bit more about why you have both C*
and Kudu in the system?
Wouldn't keeping just Kudu work (that was its initial purpose)? Or is there
something to do with its production readiness? I ask because we have a similar
concern as well.
Finally, how are your