Howdy
I'm looking at the possibility of using Cassandra as an object store to
offload image/blob data from an Oracle database. I've seen mentions of it
being used as an object store at large scale, like with Walmart:
We run multiple nodes per host as a standard practice. In our case, we never
put two nodes from a single cluster on the same host, though as mentioned
before, you could potentially get away with that if you properly use rack
awareness; just be careful of load.
We also do NOT use any other layer of
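To make that rack-awareness trick concrete: a minimal sketch of
cassandra-rackdc.properties under GossipingPropertyFileSnitch, assuming you
name each rack after its physical host (the DC and host names here are
placeholders, not anything from this thread):

    # cassandra-rackdc.properties for BOTH instances on physical host 1.
    # Naming the rack after the physical host means NetworkTopologyStrategy
    # will avoid putting two replicas of the same range on this machine,
    # as long as the number of racks is >= the replication factor.
    dc=dc1
    rack=host-01

Instances on the second host would set rack=host-02, and so on.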
Hi,
Thank you for your answers. Starting with the most important point, I
understand from your answers that
"it is OK to go beyond 1 TB of disk usage,"
so if I use 50% of the disk capacity I will end up with around 3 TB per
node, which in this case I will not
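For reference, the arithmetic behind that figure, using the 10-node, 6 TB
per-node hardware described elsewhere in this thread and assuming a
replication factor of 3 (the RF is not stated, so that part is a guess):

    6 TB of disk per node * 50% usable = 3 TB of data per node
    3 TB * 10 nodes                    = 30 TB of replicated data
    30 TB / RF 3                       = ~10 TB of unique data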
This is a response to a message from 2017 that I found unanswered on the
user list; we were getting the same error.
Also, in this Stack Overflow answer
https://stackoverflow.com/questions/53160611/frame-size-352518912-larger-than-max-length-15728640-exception-while-runnin/55751104#55751104
I have noted
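For context on that error: 15728640 bytes is exactly 15 MiB, which matches
the default Thrift frame cap in cassandra.yaml on 3.x (Thrift is gone in
4.0). If the exception is indeed coming from the Thrift transport, the
setting involved would be:

    # cassandra.yaml (3.x) - default shown; 15 MiB = 15728640 bytes,
    # the exact limit quoted in the exception message.
    thrift_framed_transport_size_in_mb: 15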
So how much data can you safely fit per node using SSDs with Cassandra 3.11?
How much free space do you need on your disks?
There should be some recommendations on node sizing at:
http://cassandra.apache.org/doc/latest/operating/hardware.html
Agreed with Jeff here. The whole "community recommends no more than
1 TB" rule has been around, and been inaccurate, for a long time.
The biggest issue with dense nodes is how long it takes to replace
them. 4.0 should help with that under certain circumstances.
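As a rough illustration of why replacement time is the bottleneck, take a
3 TB node and an assumed sustained streaming rate of 100 MB/s (a made-up
but plausible figure):

    3 TB / 100 MB/s = 30,000 s, i.e. roughly 8.3 hours

Double the density and you roughly double the window during which the
cluster is short a replica.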
On Thu, Apr 18, 2019 at 6:57 AM Jeff Jirsa
Agreed that you can go larger than 1 TB on SSD.
You can do this safely with both instances in the same cluster if you guarantee
two replicas aren’t on the same machine. Cassandra provides a primitive to do
this - rack awareness through the network topology snitch.
The limitation (until 4.0) is
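A minimal sketch of the keyspace side of that primitive, with a
hypothetical keyspace and data-center name; NetworkTopologyStrategy
consults the snitch and spreads replicas across distinct racks where it
can:

    -- Hypothetical keyspace; 'dc1' must match the DC name the snitch reports.
    CREATE KEYSPACE images
      WITH replication = {'class': 'NetworkTopologyStrategy', 'dc1': 3};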
What is the data problem that you are trying to solve with Cassandra? Is it
high availability? Low latency queries? Large data volumes? High concurrent
users? I would design the solution to fit the problem(s) you are solving.
For example, if high availability is the goal, I would be very
Hi all,
In our small company we have 10 nodes, each with 6 TB of disk (2 x 3 TB
HDs), 128 GB RAM, and 64 cores, and we are thinking of using them as
Cassandra nodes. From what I am reading around, the community recommends
that every node keep no more than 1 TB of data, so in this case I am
wondering if it
Hi Laxmikant,
My remaining nodes are using 60% of the HD.
Cheers
‐‐‐ Original Message ‐‐‐
On Monday, April 15, 2019 6:28 AM, Laxmikant Upadhyay wrote:
> What is the used and available disk space of each node?
>
> On Thu, Apr