Hi Nandan,
If there is a requirement to answer the query "What changes were made to a
book by a particular user?", then yes, the schema you have proposed can
work. Obtaining the list of updates for a book by a user from the
*book_title_by_user* table will require the partition key (*book_title*),
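As a hedged sketch (the keyspace name, column names, and the clustering layout below are my assumptions, not from the thread), the per-user lookup would look like:

```shell
# Assumes book_title is the partition key and the user is a clustering
# column of book_title_by_user. Echoed for review rather than executed
# against a live cluster; drop the echo to actually run it.
QUERY="SELECT * FROM library.book_title_by_user WHERE book_title = 'Moby Dick' AND user = 'alice';"
echo "cqlsh -e \"$QUERY\""
```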
So you are not upgrading the kernel, you are upgrading the OS.
Sorry Daemeon, my bad. I meant the OS :-)
So what would you recommend: replacing the node with a new-OS node
using -Dcassandra.replace_address (I've never tried it before), or trying to
format the root directory of the existing node, without
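For what it's worth, a hedged sketch of the replace-node path (the IP address is a placeholder, and the flag must only be set for the new node's first boot, on a node with an empty data directory and the same cluster configuration):

```shell
# 10.0.0.5 stands in for the dead node's former IP (a placeholder).
# This option goes on the JVM command line (or in the env file) for the
# FIRST start of the replacement node only.
JVM_OPTS="$JVM_OPTS -Dcassandra.replace_address=10.0.0.5"
echo "$JVM_OPTS"
```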
If you make them all 10 GB each, they will immediately compact back into the
same size.
The idea is actually to trigger the compaction so the tombstones will be
removed. That's the whole purpose of the split, and if the split sstable
has lots of tombstones, it'll be compacted to a much smaller size.
So you are not upgrading the kernel, you are upgrading the OS. Not what
you asked about. Your devops team is right.
However, depending on what is using Python, the new version of Python may
break older scripts (I do not know; I'm mentioning this because testing is
required).
When I am doing an OS upgrade
Right, but realistically that is what happens with SizeTiered. Another option
is to split the tables into proportional sizes, NOT the same size, e.g. 100 GB
into 50, 25, 12, and 13 GB. If you make them all 10 GB each, they will
immediately compact back into the same size. The motive is to get rid of the
duplicates which exist
Basically this means that if you run major compaction (= nodetool compact),
you will end up with an even bigger file, and that file is likely to never
get compacted again without running major compaction again. It is therefore
not recommended for a production system.
Hannu
> On 17 May 2017, at 19:46, Nitan Kainth
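For reference, major compaction as discussed above is triggered per table; the keyspace/table names below are placeholders, and the command is echoed for review rather than run:

```shell
# Merges all sstables of one table into a single sstable on the local node.
# my_keyspace/my_table are placeholders; remove the echo to actually run it.
CMD="nodetool compact my_keyspace my_table"
echo "$CMD"
```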
Hi Folks,
I've been noticing some missing rows, anywhere from 20-40% missing, while
executing paging queries over my cluster.
Basically the query is meant to hit every row, subdividing the entire token
range into a few tens of token ranges to parallelize the work; there is no
wrap-around involved, at
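The subdivision described above can be sketched as follows (the table `ks.tbl` and column `id` are placeholders for the actual schema; this just prints the per-range queries):

```shell
#!/usr/bin/env bash
# Split the full Murmur3 token range (-2^63 .. 2^63-1) into N contiguous
# sub-ranges for a parallel full scan, with no wrap-around.
N=4
LO=$(( -9223372036854775807 - 1 ))   # min Murmur3 token (-2^63)
HI=9223372036854775807               # max Murmur3 token (2^63 - 1)
STEP=$(( HI / N - LO / N ))          # computed this way to avoid overflowing (HI - LO)
for (( i = 0; i < N; i++ )); do
  START=$(( LO + i * STEP ))
  if (( i == N - 1 )); then END=$HI; else END=$(( LO + (i + 1) * STEP )); fi
  # Note: the first range uses a strict '>', which skips the single minimum
  # token; use '>=' for that one range in practice.
  echo "SELECT id FROM ks.tbl WHERE token(id) > $START AND token(id) <= $END;"
done
```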
You can try running major compaction to get rid of duplicate data and deleted
data, but it will have to become a routine going forward.
> On May 17, 2017, at 10:23 AM, Jan Kesten wrote:
Hi Zaidi,
We use Chef for the configuration management of our 14 node cluster.
You can have a look at Chef, or at some other config-management tools like
Ansible and Puppet.
Thanks,
Abhishek
On May 17, 2017 10:08 PM, "DuyHai Doan" wrote:
> For configuration
For configuration management there are tons of tools out there:
- ansible
- chef
- puppet
- saltstack
I surely forgot a few others
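As a hedged sketch of pushing one consistent cassandra.yaml with the first of these (the `cassandra_nodes` inventory group, the file paths, and the choice of an ad-hoc command are my assumptions):

```shell
# Pushes a single source cassandra.yaml to every node in the placeholder
# 'cassandra_nodes' inventory group, keeping a backup of the old file.
# Echoed for review; remove the echo to actually run it.
CMD='ansible cassandra_nodes -b -m copy -a "src=cassandra.yaml dest=/etc/cassandra/cassandra.yaml backup=yes"'
echo "$CMD"
```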
On Wed, May 17, 2017 at 6:33 PM, ZAIDI, ASAD A wrote:
> Good Morning Folks –
>
>
>
> I’m running 14 nodes Cassandra cluster in two data centers ,
Good Morning Folks –
I'm running a 14-node Cassandra cluster in two data centers; each node has
roughly 1.5 TB. We're anticipating more load, therefore we'll be expanding
the cluster with additional nodes.
At this time, I'm kind of struggling to keep a consistent cassandra.yaml file
on each server
Hi all,
I have some problems with really large sstables which don't get compacted
anymore, and I know there are many duplicated rows in them. Splitting the
tables into smaller ones to get them compacted again would help, I thought,
so I tried sstablesplit, but:
cassandra@cassandra01
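For anyone trying this, a hedged sstablesplit invocation (the data path and the 50 MB target size are placeholders, and the node must be stopped first, since sstablesplit is an offline tool):

```shell
# Splits the matched sstables into ~50 MB pieces without taking a snapshot.
# Path and size are placeholders; run only while the node is STOPPED.
# Echoed for review; remove the echo to actually run it.
CMD="sstablesplit --no-snapshot -s 50 /var/lib/cassandra/data/ks/tbl/*-Data.db"
echo "$CMD"
```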
Thanks Jeff.
I have taken a backup and did a manual removal of hints with a rolling
restart. This brought the cluster back into a stable state.
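For anyone following along, a hedged alternative to deleting hint files by hand (following up with repair is my suggestion, since dropped hints mean the target replicas missed those writes):

```shell
# Drops all stored hints on the local node, then repairs the node's
# primary ranges. Echoed for review rather than run against a live cluster.
CMDS="nodetool truncatehints
nodetool repair -pr"
echo "$CMDS"
```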
Can you please share some recommendations for a write-intensive job? We need
to load a dump from Kafka into a 3-node Cassandra cluster. Write TPS
per node will be
We've done such an in-place upgrade in the past, but not for a real
production system.
However, you're MISSING the point: the root filesystem, along with the entire
OS, should be completely separated from your data directories. It should
reside in a different logical volume, and thus you can easily change the
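The separation described above can be verified with a quick check (the path is an assumption, the default data_file_directories location; adjust it to your cassandra.yaml):

```shell
# Shows which filesystem/volume holds the Cassandra data directory,
# independent of the root filesystem. Falls back to the root filesystem
# if the placeholder path does not exist on this machine.
df -h /var/lib/cassandra/data 2>/dev/null || df -h /
```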
Our DevOps team told me that their policy is not to perform major kernel
upgrades but simply to install a clean new version.
I also checked online and found a lot of recommendations *not* to do so, as
there might be a lot of dependency issues that may affect processes such
as yum.
e.g.