Hi Justin,
If you have a 6 node cluster with RF = 3 and nodes in each of 3 racks, then
the nodes in rac1 will be primary owners of different token ranges, and their
replicas will be in rac2 and rac3.
If one of the nodes in rac1 goes down, its replicas in rac2
and rac3 will serve the requests. However
Nice! Good to see the community producing tools around the Cassandra
product.
A few pieces of feedback:
*Kudos*
1. Glad that you are doing it
2. Looks great
3. Willing to try it out if you find this guy called "Free Time" for me :)
*Criticism*
1. It mimics a lot of stack components that are out
Carl,
If you have automated it and tested it a few times in a lower
environment with the same data as production, I'd say go for it. But as
Jonathan said, if there's an issue, you won't be able to continue
operations.
On Tue, Mar 12, 2019 at 3:20 PM Jonathan Haddad wrote:
> Nothing
Hi Folks,
I am delighted to share with you that we, the Apache Cassandra
community, have been given a two-day track at this year's ApacheCon
North America.
The goal of this track is simple: we are going to get together to talk
about Apache Cassandra. As such, this will be the ideal place to
Hi Sean,
For sure, the best approach would be to create another table that would
serve just that specific query.
How do I set the flag to disallow ALLOW FILTERING in cassandra.yaml? I
read the docs and there seems to be nothing about that.
Regards
On Wed, 13 Mar 2019 at 06:57, Durity, Sean
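The "another table for that specific query" advice can be sketched in CQL; the table and column names here are purely illustrative assumptions, not from the thread:

```cql
-- Without a table keyed for the query, a scan like this needs
-- ALLOW FILTERING, which reads far more data than it returns:
--   SELECT * FROM users WHERE country = 'DE' ALLOW FILTERING;

-- Illustrative alternative: a table whose partition key serves
-- that exact query directly.
CREATE TABLE users_by_country (
    country text,
    user_id uuid,
    name    text,
    PRIMARY KEY ((country), user_id)
);
-- SELECT * FROM users_by_country WHERE country = 'DE';
```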
Maybe this was an issue specific to my topology in the past, where I had 9 nodes
in a 3 rack implementation. Each rack contained a unique replica set, so when
a node went down it put very high load on the nodes in the same rack. How does
the data get distributed in this case where there are only
If there are 2 access patterns, I would consider having 2 tables. The first one
with the ID, which you say is the majority use case. Then have a second table
that uses a time-bucket approach as others have suggested:
(time bucket, id) as primary key
Choose a time bucket (day, week, hour, month,
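A minimal sketch of the two-table layout described above; table and column names are assumptions for illustration:

```cql
-- Majority access pattern: lookup by id.
CREATE TABLE events_by_id (
    id      uuid PRIMARY KEY,
    ts      timestamp,
    payload text
);

-- Secondary access pattern: scan by time bucket.
-- Here the bucket is a day; pick a granularity that keeps
-- partitions comfortably bounded.
CREATE TABLE events_by_bucket (
    time_bucket date,
    id          uuid,
    payload     text,
    PRIMARY KEY ((time_bucket), id)
);
```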
Nothing prevents it technically, but operationally you might not want to.
Personally, I'd prefer to have the safety net of a DC to fall back on in case
there's an issue with the upgrade.
On Wed, Mar 13, 2019 at 7:48 AM Carl Mueller
wrote:
> If there are multiple DCs in a cluster, is it safe to
If there are multiple DCs in a cluster, is it safe to upgrade them in
parallel, with each DC doing a node-at-a-time?
Hi Justin,
I'm not sure I follow your reasoning. In a 6 node cluster with 3 racks (2
nodes per rack) and RF 3, if a node goes down you'll still have one node in
each of the other racks to serve the requests. Nodes within the same rack
aren't replicas for the same tokens (as long as the number of
On Tue, Mar 12, 2019 at 5:28 PM Justin Sanciangco
wrote:
> I would recommend that you do not go into a 3 rack single dc
> implementation with only 6 nodes. If a node goes down in this situation,
> the node that is paired with the node that is downed will have to service
> all of the load instead
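The disagreement above can be checked with a toy model of rack-aware replica placement (a simplification for illustration, not Cassandra's actual NetworkTopologyStrategy code): with racks alternating around the ring, nodes in the same rack never replicate the same range, so a down node's load spreads across the other racks.

```python
# Toy model of rack-aware replica placement: a simplification for
# illustration, NOT Cassandra's actual NetworkTopologyStrategy code.
# 6 nodes in 3 racks, alternating around the token ring.
nodes = [("n1", "rac1"), ("n2", "rac2"), ("n3", "rac3"),
         ("n4", "rac1"), ("n5", "rac2"), ("n6", "rac3")]
RF = 3
rack_of = dict(nodes)

def replicas_for(primary_index):
    """Walk the ring clockwise from the primary, taking one node per rack."""
    chosen, racks_used = [], set()
    i = primary_index
    while len(chosen) < RF:
        name, rack = nodes[i % len(nodes)]
        if rack not in racks_used:
            chosen.append(name)
            racks_used.add(rack)
        i += 1
    return chosen

# Every range lands on one node in each rack, so two nodes in the
# same rack never replicate the same range.
assert all(len({rack_of[n] for n in replicas_for(p)}) == RF
           for p in range(len(nodes)))

# If n1 goes down, the surviving replicas of its ranges are spread
# across both other racks, not piled onto its rack-mate n4.
survivors = {n for p in range(len(nodes))
             for n in replicas_for(p)
             if "n1" in replicas_for(p) and n != "n1"}
print(sorted(survivors))  # → ['n2', 'n3', 'n5', 'n6']
```

In this layout the "paired node" pile-up does not occur; whether it does in practice depends on how racks and tokens are actually laid out, which the unbalanced 9-node topology mentioned earlier in the thread illustrates.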
I would recommend that you do not go into a 3 rack single dc implementation
with only 6 nodes. If a node goes down in this situation, the node that is
paired with the node that is downed will have to service all of the load
instead of being evenly distributed throughout the cluster. While it's
Hello!
Just wanted to let you know: we finally managed to get a solution!
First of all, we increased `streaming_socket_timeout_in_ms` to `8640`.
We are using cassandra-reaper to manage our repairs; they last about 15
days on this cluster and are re-launched almost immediately once they are
Our data model cannot be like the one you recommended below, as the majority of the
reads need to select the data by the partition key (id) only, not by date.
You could remodel your data in such a way that you would make the primary key like
this:
((date), hour-minute, id)
or
((date, hour-minute), id)
By
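The two key shapes proposed above differ in partition granularity; a hedged CQL sketch, with table names and column types assumed for illustration:

```cql
-- ((date), hour-minute, id): one partition per day,
-- rows clustered by minute and then id within it.
CREATE TABLE data_by_day (
    date        date,
    hour_minute text,   -- e.g. '14:05'
    id          uuid,
    payload     text,
    PRIMARY KEY ((date), hour_minute, id)
);

-- ((date, hour-minute), id): one partition per minute,
-- smaller partitions but many more of them.
CREATE TABLE data_by_minute (
    date        date,
    hour_minute text,
    id          uuid,
    payload     text,
    PRIMARY KEY ((date, hour_minute), id)
);
```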
Hi Vsevolod,
> Are there any workarounds to speed up the process? (e.g. doing cleanup only
> after all 4 new nodes joined cluster), or inserting multiple nodes
> simultaneously with specific settings?
Doing cleanup only after all 4 new nodes have joined the cluster: yes, that is allowed.
Inserting multiple
Hello everyone!
We have a cluster of 4 nodes, 4.5 TB of data per node, and are in the middle
of adding 4 more nodes to the cluster.
We are joining each new node following the official guidelines (set up Cassandra on a
new node, start the Cassandra instance, wait until the node goes from the JOINING state
to NORMAL,
Thanks for the answer, Nate.
My queries are more like the following:
select f1, f2, f3, bigtxt from mytable where f1 = ? and f2 = ? limit 10;
insert into mytable (f1, f2, f3, bigtxt) values (?, ?, ?, ?);
Actually, I have a text field (bigtxt) that could be > 1 MB.
Marco
On Mon, Mar 11, 2019 at