Hi,

We are using Apache Cassandra 3.11.2 on RedHat 7. The data can grow beyond 100 TB, but the hot data will in most cases be under 10 TB; we still need to keep the rest of the data accessible.

Has anyone run into this problem? What is the best way to make the cluster more efficient? Is there a way to automatically move the old data to different storage (rack, DC, etc.)? Any ideas?
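To illustrate the "different DC" direction I'm asking about: since Cassandra replication is configured per keyspace with NetworkTopologyStrategy, a separate archive keyspace could be pinned to a second datacenter on cheaper disks. This is only a sketch of the idea, not something we've deployed; the keyspace and DC names below are hypothetical, and as far as I know Cassandra would not move rows between keyspaces automatically, so an application-level job would still have to copy old partitions over.

```sql
-- Sketch only (hypothetical names): split hot and cold data across
-- datacenters. A DC omitted from the replication map gets no replicas,
-- so each keyspace's data lives only where it is listed.
CREATE KEYSPACE hot_ks
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc_main': 3};

-- Archive keyspace replicated only to a DC backed by cheaper storage.
CREATE KEYSPACE archive_ks
  WITH replication = {'class': 'NetworkTopologyStrategy', 'dc_archive': 2};
```

The open question is the "automatic" part: copying aged data from hot_ks to archive_ks would, as far as I can tell, need an external batch job or TTL-plus-reinsert scheme rather than anything built in.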
Regards,
-- Alaa