> This means that from the client driver perspective, when I define the
> contact points, I can specify any node in the cluster as a contact point
> and not necessarily a seed node?
Correct.
On Wed, Feb 12, 2020 at 11:48 AM Sergio wrote:
Seed nodes are special in the sense that other nodes need them for
bootstrap (first startup only) and they have a special place in the Gossip
system. The odds of gossiping to a seed node are higher than to other
nodes, which makes them "hubs" of gossip messaging.
Also, they do not bootstrap, so a new node that lists itself as a seed
will skip the bootstrap streaming when it first joins.
> So if
> 1) I stop a Cassandra node that doesn't have itself in its seeds IP list
> 2) I change the cassandra.yaml of this node and add it to the seed list
> 3) I restart the node
It will work completely fine, and this is not even necessary.
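For reference, the seed list lives in cassandra.yaml under seed_provider; a
minimal sketch (the IPs here are hypothetical placeholders):

    seed_provider:
        - class_name: org.apache.cassandra.locator.SimpleSeedProvider
          parameters:
              - seeds: "10.0.0.1,10.0.0.2"

The usual practice is to keep the same seed list on every node, seed or not;
the file is only re-read when a node restarts.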
I believe seed nodes are not special nodes; it's just that you choose a few
nodes from the cluster that help to bootstrap newly joining nodes. You can
change cassandra.yaml to make any other node a seed node. There's nothing
like promotion.
-Arvinder
On Wed, Feb 12, 2020, 8:37 AM Sergio wrote:
Hi guys!
Is there a way to promote a non-seed node to a seed node?
If yes, how do you do it?
Thanks!
This should be added to the CQLSH documentation or some other relevant
documentation.

On Fri, May 3, 2019 at 12:56 AM Shaurya Gupta wrote:

> Thanks Jeff.
>
> On Fri, May 3, 2019 at 12:38 AM Jeff Jirsa wrote:
>
>> No. Don’t mix LWT and normal writes.
No. Don’t mix LWT and normal writes.
--
Jeff Jirsa
> On May 2, 2019, at 11:43 AM, Shaurya Gupta wrote:
>
> Hi,
>
> We are seeing really odd behaviour while trying to delete a row which is
> simultaneously being updated in a lightweight transaction.
> The delete command …
…in many such scenarios.
Is it fine to mix LWT and normal operations for the same partition? Is it
expected to work?
Thanks
Shaurya
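For illustration only, a minimal CQL sketch of the pattern Jeff is warning
against (keyspace, table, and values are hypothetical). The conditional
update runs through Paxos, while the plain DELETE does not, so the two can
interleave unpredictably:

    -- hypothetical schema
    CREATE TABLE ks.locks (id text PRIMARY KEY, owner text);

    -- client A: lightweight transaction (goes through Paxos)
    UPDATE ks.locks SET owner = 'a' WHERE id = 'k1' IF owner = null;

    -- client B, concurrently: normal write (bypasses Paxos)
    DELETE FROM ks.locks WHERE id = 'k1';

To stay on a single serialization path, the delete would also need to be
conditional, e.g. DELETE FROM ks.locks WHERE id = 'k1' IF EXISTS.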
Just to understand:
What exactly is the problem?
Cheers,
Hannu

> On 11 Jan 2017, at 16.07, Cogumelos Maravilha <cogumelosmaravi...@sapo.pt>
> wrote:
Cassandra 3.9.

nodetool status
Datacenter: dc1
===============
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
--  Address       Load      Tokens  Owns (effective)  Host ID         Rack
UN  10.0.120.145  1.21 MiB  256     49.5%             da6683cd-c3cf…
On Tue, Jul 12, 2016 at 3:03 PM, Yuan Fang <y...@kryptoncloud.com> wrote:
Hi Jonathan,
Here is the result:
ubuntu@ip-172-31-44-250:~$ iostat -dmx 2 10
$ nodetool tpstats
...
Pool Name                  Active  Pending  Completed   Blocked  All time blocked
Native-Transport-Requests  128     128      1420623949  1        142821509
...
What is this? Is it normal?
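Reading that row: 128 requests active, 128 pending, one blocked right now,
and ~142 million blocked over the node's lifetime. A rough sketch of how to
watch whether the blocked counters keep climbing (the interval is arbitrary):

    while true; do
        nodetool tpstats | grep Native-Transport-Requests
        sleep 60
    done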
…page cache – the default settings for Cassandra (64k compression chunks)
are really inefficient for small reads served off of disk. If you drop the
compression chunk size (4k, for example), you’ll probably see your read
throughput increase significantly, which will give you more iops for
commitlog, so write throughput likely goes up, too.
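For context, the chunk size being discussed is a per-table setting; a sketch
of the change in CQL (3.x-era syntax; keyspace and table names are
hypothetical). Existing SSTables are only rewritten on compaction or by
nodetool upgradesstables -a:

    ALTER TABLE ks.tbl WITH compression =
        {'class': 'LZ4Compressor', 'chunk_length_in_kb': 4};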
From: Jonathan Haddad <j...@jonhaddad.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Thursday, July 7, 2016 at 6:54 PM
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Subject: Re: Is my cluster normal?
What's your CPU looking like? If it's low, check your IO with iostat or
dstat. I know some people have used EBS and say it's fine, but I've been
burned too many times.
Results:
op rate                   : 12200 [WRITE:12200]
partition rate            : 12200 [WRITE:12200]
row rate                  : 12200 [WRITE:12200]
latency mean              : 16.4 [WRITE:16.4]
latency median            : 7.1 [WRITE:7.1]
latency 95th percentile   : 38.1 [WRITE:38.1]
latency 99th percentile   : 204.3 [WRITE:204.3]
latency 99.9th percentile : 465.9 [WRITE:465.9]
latency max               : 1408.4 [WRITE:1408.4]
total gc time (s)         : 0
avg gc time(ms)           : NaN
stdev gc time(ms)         : 0
Total operation time      : 00:01:21
END
Lots of variables you're leaving out.
Depends on write size, if you're using logged batch or not, what consistency
level, what RF, if the writes come in bursts, etc, etc. However, that's all
sort of moot for determining "normal"; really you need a baseline, as all
those variables end up mattering a huge amount.
I would suggest using …
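As a sketch of how such a baseline is typically gathered with the bundled
cassandra-stress tool (node address and counts below are placeholders), run
against the same hardware and schema you care about, then compare:

    cassandra-stress write n=1000000 cl=QUORUM -rate threads=50 -node 10.0.0.1
    cassandra-stress read  n=1000000 cl=QUORUM -rate threads=50 -node 10.0.0.1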
I have a cluster of 4 m4.xlarge nodes (4 CPUs, 16 GB memory, and 600 GB SSD
EBS each).
I can reach cluster-wide write requests of 30k/second and read requests of
about 100/second. The cluster OS load is constantly above 10. Is that normal?
Thanks!
Best,
Yuan
"user@cassandra.apache.org"
> Subject: Nodetool Rebuild sending few big packets of data. Is it normal?
>
> Hi,
>
> I'm running a nodetool rebuild to include a new DC in my cluster.
> My config is:
> DC1, 2 nodes per rack (2 racks), 70gb each node
> DC2, 2 nodes per
To: "user@cassandra.apache.org"
Subject: Nodetool Rebuild sending few big packets of data. Is it normal?
Hi,
I'm running a nodetool rebuild to include a new DC in my cluster.
My config is:
DC1, 2 nodes per rack (2 racks), 70gb each node
DC2, 2 nodes per rack (1 rack), 90gb each node
DC3, 2 no…
In the instance logs, I have only stream messages from when I've started
the rebuild.
My point is: is it normal for Cassandra to accumulate this amount of data and
then send it? I was hoping it would be a more gradual and incremental
process.
thanks,
Felipe Esteves
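For reference, a sketch of the rebuild being described, as run on each node
of the new DC (the source DC name below is a placeholder), plus the per-node
knob that throttles how fast data is streamed:

    nodetool rebuild -- DC1
    nodetool setstreamthroughput 200   # per-node streaming cap, megabits/s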
On Mon, Oct 19, 2015 at 9:30 AM, Kevin Burton <bur...@spinn3r.com> wrote:
> I think the point I was trying to make is that on highly loaded boxes,
> repair should take lower priority than normal compactions.
>
You can manually do this by changing the thread priority of compaction
threads which you somehow identify as doing repair-related compaction...
I think the point I was trying to make is that on highly loaded boxes,
repair should take lower priority than normal compactions.
Having a throttle on *both* doesn't solve the problem.
So I need a
setcompactionthroughput
and a
setrepairthroughput
and total throughput would be the sum of both.
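For context, the throttles that do exist are compaction and streaming, set
separately (the values below are arbitrary examples); repair-triggered
compactions share the ordinary compaction throttle rather than having one of
their own:

    nodetool setcompactionthroughput 16   # MB/s across all compactions
    nodetool setstreamthroughput 200      # megabits/s for streaming (repair, bootstrap)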
I'm doing a big nodetool repair right now and I'm pretty sure the added
overhead is impacting our performance.
Shouldn't you be able to throttle repair so that normal compactions can use
most of the resources?
I noticed in the system.log of one of my nodes:
INFO [HANDSHAKE-mia1-cas-001.bongojuice.com/172.16.245.1] 2015-09-10
16:00:37,748 OutboundTcpConnection.java:485 - Handshaking version with
mia1-cas-001.bongojuice.com/172.16.245.1
The machine I am on is mia1-cas-001, so the node appears to be handshaking
with itself. If it's nothing, never mind.
…tables have been dropped / truncated, and the data directory itself is
showing about 45mb used (most of it is probably in the OpsCenter tables
rather than mine).
However, the commitlog directory shows a size of 8.1g. Is that really
normal? If this keeps up, I may run out of disk space due to the size of
the commit log rather than because of data.
I have run nodetool compact, but it didn't bring down the size.
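One plausible explanation: commit log disk usage is capped by
commitlog_total_space_in_mb in cassandra.yaml, which defaults to 8192 MB on
64-bit JVMs, so ~8.1g may simply be the cap. Segments are only recycled once
the memtables they cover are flushed, and nodetool compact does not touch
them. A sketch of the relevant settings (values shown are the usual defaults):

    commitlog_segment_size_in_mb: 32
    commitlog_total_space_in_mb: 8192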
Hi,
I know that this question might look silly, but I really need to know how
the cassandra-stress tool works.
I developed my data model and used the cassandra-stress tool with the user
option, where you pass your own data model for a column family (table in CQL)
and the distribution of each column in the column …
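A minimal sketch of that profile-driven invocation (the file name and column
names are hypothetical, and the keyspace and table are assumed to already
exist):

    # stress-profile.yaml
    keyspace: stresscql
    table: testtable
    columnspec:
      - name: id
        size: fixed(32)
      - name: value
        population: gaussian(1..1000)

run with:

    cassandra-stress user profile=stress-profile.yaml ops(insert=1) n=100000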
> normal conditions and the hanging ALTER TABLE seem pretty weird. Any
> ideas here? Sound like a bug?
Yes, that sounds like a bug. This behavior is less common in 1.2.x than it
was previously, but still happens sometimes. It's interesting that restarting
the affected node helped; in previous …
…schema updates issued. Going to the nodes with stale schema and trying to
do the ALTER TABLE there resulted in hanging. We were eventually able to
get schema agreement by restarting nodes, but both the initial disagreement
under normal conditions and the hanging ALTER TABLE seem pretty weird. Any
ideas here? Sound like a bug?
We're on 1.2.8.
Thanks,
Josh
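For anyone hitting the same thing, a sketch of how schema agreement is
usually checked: nodetool describecluster prints a "Schema versions" section,
and a healthy cluster shows a single version shared by all nodes:

    nodetool describecluster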
I noticed on EC2 that the C* nodes, according to OpsCenter, have never gone
above 1.6-2.2 MBps. That seems abnormally low, but I have no reference as to
what is normal for Cassandra on EC2 and am curious what other people have
seen according to OpsCenter for the OS: Disk Throughput metric.
Thanks,
Dave
…sees or respects this setting). My -Xss for Cassandra is the default (I
hope; I don't remember messing with it) of 180k. I used JMX to check the
current number of threads on a production Cassandra machine, and it was
~27,000. Is that a normal thread count? Could my OOM be related to stack
size + number of threads, or am I overlooking something more simple?
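Back-of-envelope: 27,000 threads at a 180k stack reserve is on the order of
4-5 GB of address space for stacks alone, so the OOM theory is plausible. A
quick way to roughly recount Java threads without a JMX client (a sketch;
assumes a standard install where the process matches CassandraDaemon):

    jstack $(pgrep -f CassandraDaemon) | grep -c 'java.lang.Thread.State'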
I forgot, let's stay on the edge with the C* 1.2.* branch ;)
Thanks and best regards,
Alan Ristić
1. Is the size getting bigger in either one when storing one Tweet?
If you store the data in one blob then we only store one column name and the
blob. If they are in different cols then we store the column names and their
values.
2. Does either choice have an impact on read/write performance on large …
What is the downside, anyway?
Your code is now the only thing that can read the data. So it makes it
harder to look at in a CLI tool.
IMHO just store the data in columns.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
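To illustrate the two layouts being compared (the schema below is
hypothetical):

    -- one opaque blob per tweet: only your application can interpret it
    CREATE TABLE tweets_blob (
        tweet_id uuid PRIMARY KEY,
        payload  blob
    );

    -- separate columns: readable from cqlsh and other CLI tools
    CREATE TABLE tweets (
        tweet_id   uuid PRIMARY KEY,
        author     text,
        body       text,
        created_at timestamp
    );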
It seems to be normal for data size to explode during repair. In our case, we
have a node at around 200G with RF=3; during repair, it goes as high as 300G.
We are using LCS; it creates more than 5000 compaction tasks and takes more
than a day to finish. We are on 1.1.6.
There is parallel LCS …
I ran a pretty solid QA test (cleaned data from scratch) on version 1.2.2.
My test was as so:
1. Start up a 4 node cassandra cluster
2. Populate with initial test data (no other data is added to the system
after this point!!!)
3. Run nodetool drain on every node (move stuff from commit log to …
…
15. Size of nreldata is now 220K … it has exploded in size!!
This may be explained by fragmentation in the sstables, which compaction
would eventually resolve.
During repair, the data came from multiple nodes and created multiple
sstables for each CF. Streaming copies part of an SSTable on …
On Thu, Oct 14, 2010 at 7:36 PM, Henry Luo <h...@choicestream.com> wrote:
Thanks for the advice. Follow-up questions:
a) is 0.6.6 compatible with 0.6.1?
Yes, you can upgrade one node at a time and it will participate w/ the
0.6.1 nodes until they are done too. Just restart w/ 0.6.6, no data …
Sent: Thursday, October 14, 2010 4:33 PM
To: user
Subject: Re: Hundreds compaction a day, is it normal?
a) 0.6.1 is ancient, upgrade to 0.6.6 (see
http://www.riptano.com/blog/whats-new-cassandra-066 for links to all
the improvements since 0.6.1 -- the links to older versions …
Hi,
I am doing some load tests with a 4 node cluster. My client is PHP. I found
some reads/writes timed out no matter how I tuned the parameters. These
time-outs could be caught by client code. My question is: are these
time-outs normal even in a production environment? Should they be treated …
Yes, I've tried the patch on
https://issues.apache.org/jira/browse/THRIFT-347, but it seems not to work
for me. I suspect I am hitting another issue with Thrift. If my column value
size is more than 8KB (with the Thrift PHP extension enabled), my client has
a higher chance of getting a timed-out error. I am still …