Thank you Jeff
Regards,
Nitan
Cell: 510 449 9629
> On Jan 23, 2019, at 10:13 AM, Jeff Jirsa wrote:
>
>
>
>> On Jan 23, 2019, at 8:00 AM, Nitan Kainth wrote:
>>
>> Hi,
>>
>> Why does nodetool compactionstats not show time remaining when
>> compactionthroughput is set to 0?
Because we don't have a good estimate if we're not throttling (it could be
added; it just isn't tracked right now).
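For reference, the throttle can be inspected and changed at runtime with
nodetool (the 16 MB/s value below is illustrative):

```shell
# 0 means compaction is unthrottled, so no time-remaining estimate is shown
nodetool getcompactionthroughput

# throttle compaction to 16 MB/s; compactionstats can then estimate an ETA
nodetool setcompactionthroughput 16
nodetool compactionstats
```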
Hi,
Why does nodetool compactionstats not show time remaining when
compactionthroughput is set to 0?
If the node is restarted during compaction, does it continue from where it
left off, or does it start over?
If it starts over, what happens to the new sstable that was being used for
compaction?
Hi Onmstester,
Thank you all. Now I understand that whether to use batch or asynchronous
writes really depends on the use case. So far, batch writes have worked for me
in an 8-node cluster handling over 500 million requests per day.
> Did you compare the cluster performance, including blocked Native Transport
> requests and dropped mutations?
"unlogged batch meaningfully outperforms parallel execution of individual
statements, especially at scale, and creates lower memory pressure on both the
clients and cluster."
They do outperform parallel individual statements, but at the cost of higher
pressure on the coordinators, which leads to more blocked Native Transport
requests.
parallel single statements with executeAsync.
Sent using Zoho Mail
Forwarded message
From : wxn...@zjqunshuo.com
To : "user"
Date : Thu, 01 Nov 2018 10:48:33 +0330
Subject : A quick question on unlogged batch
Hi All,
What's the difference between a logged batch and an unlogged batch? I'm asking
because I'm seeing the below WARNINGs after a new app started writing to the
cluster.
WARNING in system.log:
Unlogged batch covering 135 partitions detected against table
[cargts.eventdata].
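To make the distinction concrete (table name taken from the warning above; the
column names here are made up for illustration), a multi-partition unlogged
batch versus the same writes issued individually:

```cql
-- Multi-partition unlogged batch: one coordinator must fan the writes out to
-- many partitions, which is what triggers the warning above.
BEGIN UNLOGGED BATCH
  INSERT INTO cargts.eventdata (device_id, ts, payload) VALUES (1, 1541057913, 'a');
  INSERT INTO cargts.eventdata (device_id, ts, payload) VALUES (2, 1541057913, 'b');
APPLY BATCH;

-- The usual alternative: issue the statements individually (e.g. with
-- executeAsync in the Java driver) so each write goes directly to its replicas.
INSERT INTO cargts.eventdata (device_id, ts, payload) VALUES (1, 1541057913, 'a');
INSERT INTO cargts.eventdata (device_id, ts, payload) VALUES (2, 1541057913, 'b');
```

A logged batch additionally writes the batch to the batchlog on other replicas
first, buying all-or-nothing eventual application at a further cost.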
Full repair on TWCS maintains proper bucketing
--
Jeff Jirsa
> On Jan 9, 2018, at 5:36 PM, "wxn...@zjqunshuo.com"
> wrote:
Hi All,
If using TWCS, will a full repair trigger a major compaction and then compact
all the sstable files into big ones regardless of the time bucket?
Thanks,
-Simon
Petrus & Kiran,
Thank you for the guide and suggestions. I will have a try.
Cheers,
Simon
From: Petrus Gomes
Date: 2017-07-21 00:45
To: user
Subject: Re: Quick question to config Prometheus to monitor Cassandra cluster
I use the same environment. Here are a few links.
This link is the best one for connecting Cassandra and Prometheus:
https://www.robustperception.io/monitoring-cassandra-with-prometheus/
JMX agent: https://github.com/nabto/cassandra-prometheus
https://community.grafana.com/t/how-to-connect-prometh
You have to download the Prometheus JMX exporter agent jar and its Cassandra
yaml config, and set the JMX port (7199) in the config.
Run the agent on a specific port on all the Cassandra nodes.
After this, go to your Prometheus server and add a scrape config to collect
metrics from all clients.
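As a sketch (the agent port 7070, file paths, and target names below are
assumptions; adjust to your deployment), the exporter runs as a javaagent
inside the Cassandra JVM and Prometheus scrapes it:

```yaml
# On each Cassandra node, add to the JVM options (path and port are examples):
#   -javaagent:/opt/jmx_prometheus_javaagent.jar=7070:/opt/cassandra.yml
#
# prometheus.yml on the Prometheus server:
scrape_configs:
  - job_name: 'cassandra'
    static_configs:
      - targets: ['node1:7070', 'node2:7070', 'node3:7070']
```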
Hi,
I'm going to set up Prometheus+Grafana to monitor a Cassandra cluster. I
installed and started Prometheus, but I don't know how to configure it to
support Cassandra.
Any ideas or related articles are appreciated.
Cheers,
Simon
Adding dev only for this thread.
On Wed, Feb 1, 2017 at 4:39 AM, Kant Kodali wrote:
What is the difference between accepting a value and committing a value?
On Wed, Feb 1, 2017 at 4:25 AM, Kant Kodali wrote:
Hi,
Thanks for the response. I finished watching this video, but I still have a
few questions.
1) The speaker seems to suggest that different consistency levels are used in
different phases of the Paxos protocol. If so, what is the right consistency
level to set for these phases?
2) Right now, we
Hi,
I believe that this talk from Christopher Batey at the Cassandra Summit
2016 might answer most of your questions around LWT:
https://www.youtube.com/watch?v=wcxQM3ZN20c
He explains a lot of stuff, including consistency considerations. My
understanding is that the quorum read can only see the data once it has been
committed.
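To make that concrete (keyspace, table, and values are made up), an LWT write
goes through Paxos, and only a SERIAL read is guaranteed to observe in-flight
Paxos state, while a plain QUORUM read reflects committed data:

```cql
-- Conditional (LWT) write: applied only if no row exists for this key
INSERT INTO ks.users (id, email) VALUES (42, 'a@example.com') IF NOT EXISTS;

-- A plain QUORUM read can race with an in-flight Paxos round:
CONSISTENCY QUORUM;
SELECT email FROM ks.users WHERE id = 42;

-- A SERIAL read first completes any in-progress Paxos round, so it will
-- observe the outcome of the LWT:
CONSISTENCY SERIAL;
SELECT email FROM ks.users WHERE id = 42;
```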
When you initiate an LWT (write) and then do a QUORUM read, is there a chance
that one might not see the LWT write? If so, can someone explain a bit more?
Thanks!
I meant disk/CPU/network usage, but I understand what the dynamic snitch does!
On Wed, Oct 19, 2016 at 3:34 AM, Vladimir Yudovin
wrote:
What exactly do you mean by "resource usage"? If you mean "data size on disk" -
no.
If you mean "current CPU usage" - it depends on the query. A modify query will
be sent to all nodes owning the specific partition key.
For read queries see
http://www.datastax.com/dev/blog/dynamic-snitching-in-cassa
The coordinator can optimize latency for a SELECT by asking data from the
lowest-latency replica using DynamicSnitch. It's not really load balancing
per se but it's the closest idea.
Can a Cassandra cluster direct or load-balance requests by detecting the
resource usage of a particular node?
> I had seed nodes ip1,ip2,ip3 as the seeds but what I didn't realize was then
> that these nodes had themselves as seeds. I am assuming that should never be
> done, is that correct.
The only reason nodes listing themselves as seeds can be a pain is during
bootstrap: a node that considers itself a seed will not stream data when it
joins - it skips the bootstrap process.
Hi,
The seeds are only used when a node joins the cluster. At that moment it
contacts a seed (in the same DC) in order to get some information about the
cluster.
So the safest approach is to list all your other nodes as seeds, but in fact
you only need one of them up.
if you think that you will never have 3 node do
For ease of use, we actually had a single cassandra.yaml deployed to every
machine and a script that swapped out the token and listen address. I had
ip1,ip2,ip3 as the seeds, but what I didn't realize was that these nodes had
themselves as seeds. I am assuming that should never be done - is that correct?
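For reference, the seed list lives in cassandra.yaml (the addresses are
placeholders):

```yaml
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      # Same list on every node; a joining node only needs one seed to be up.
      - seeds: "ip1,ip2,ip3"
```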
Thanks a lot Tyler. That clears up a lot of my doubts. I have a couple more
questions related to the Datastax Java Driver:
1) Firstly, is there any way to figure out what version of CQL I am running?
Is it CQL 3 or something else? Is there any command that we can use to check?
And also, by default, cqlsh behav
On Thu, Apr 18, 2013 at 9:02 PM, Techy Teck wrote:
>
> When I was working with Cassandra CLI using the Netflix client(Astyanax
> client), then I created the column family like this-
>
> create column family profile
> with key_validation_class = 'UTF8Type'
> and comparator = 'UTF8Type'
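For comparison, a Thrift column family like that corresponds roughly to this
CQL 3 table (a sketch; the value type in your actual data may differ). On the
version question, `SHOW VERSION` in cqlsh prints the CQL spec version in use.

```cql
-- key_validation_class = UTF8Type  -> key text
-- comparator = UTF8Type            -> clustering column (the dynamic column name)
CREATE TABLE profile (
    key text,
    column1 text,
    value blob,
    PRIMARY KEY (key, column1)
) WITH COMPACT STORAGE;
```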
I have started working with the Cassandra database. I am planning to use the
Datastax API to upsert/read into/from the Cassandra database. I am totally new
to this Datastax API (which uses the new binary protocol) and I am not able to
find much documentation with proper examples.
I am not su
>> to all the new nodes that come online (Cassandra actually has a very data
>> center/rack aware topology to transfer data correctly and not use up all
>> bandwidth unnecessarily... I am not sure MongoDB has that). Anyways, just
>> food for thought.
>>
>> From: aaron morton
crashes but the data is still good on the drives, it would just mean bringing
up the node using the same storage? Would this not be fast...?
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: 21 February 2013 11:46
To: user@cassandra.apache.org
Subject: Re: cassandra vs. mongodb quick question
Date: Monday, February 18, 2013 1:39 PM
To: user@cassandra.apache.org, Vegard Berget <p...@fantasista.no>
Subject: Re: cassandra vs. mongodb quick question

From: Bryan Talbot <btal...@aeriagames.com>
Reply-To: user@cassandra.apache.org
Date: Wednesday, February 20, 2013 1:04 PM
To: user@cassandra.apache.org
Subject: Re: cassandra vs. mongodb quick question (good additional info)
This calculation is incorrect btw. 10,000 GB transferred at 1.25 GB / sec
would complete in about 8,000 seconds which is just 2.2 hours and not 5.5
days. The error is in the conversion (1hr/60secs) which is off by 2 orders
of magnitude since (1hr/3600secs) is the correct conversion.
-Bryan
On Feb 19, 2013 7:02:56 AM
Subject: Re: cassandra vs. mongodb quick question (good additional info)
The 40 TB use case you heard about is probably one 40 TB MySQL machine that
someone migrated to mongo so it would be "web scale". Cassandra is NOT good
with drives that big - get a blade center or a
To: user@cassandra.apache.org, Vegard Berget <p...@fantasista.no>
Subject: Re: cassandra vs. mongodb quick question
My experience is that repair of 300 GB of compressed data takes longer than
300 GB of uncompressed, but I cannot point to an exact number.
- Original Message -
From: user@cassandra.apache.org
Sent: Mon, 18 Feb 2013 08:41:25 +1300
Subject: Re: cassandra vs. mongodb quick question
If you have spinning disk and 1G networking and no virtual nodes, I would
still say 300G to 500G is a soft limit.
If you are using virtual nodes, SSD, a JBOD disk configuration or faster
networking, you may go higher.
The limiting factors are the time it takes to repair and the time it takes to
replace a node.
So I found out MongoDB varies their node size from 1T to 42T per node
depending on the profile. So if I was going to be writing a lot but rarely
changing rows, could I also use Cassandra with a per-node size of 20T+, or is
that not advisable?
Thanks,
Dean
Thanks Russell, that's the info I was looking for!
On Sat, Aug 11, 2012 at 11:23 AM, Russell Haering
wrote:
Aaron,
I have not deep-dived into the data files in a while, but this is how I understand it.
http://wiki.apache.org/cassandra/ArchitectureSSTable
There is no need to store the row key each time with the column.
RowKey to columns is a one-to-many relationship. This would be a
diagram of a physical file:
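A rough sketch of that one-to-many layout inside an sstable (illustrative
only, not the exact on-disk format):

```
RowKeyA -> [col1:v1][col2:v2][col3:v3]
RowKeyB -> [col1:v1][col2:v2]
```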
Your update doesn't go directly to an sstable (which are immutable),
it is first merged to an in-memory table. Eventually the memtable is
flushed to a new sstable.
See http://wiki.apache.org/cassandra/MemtableSSTable
On Sat, Aug 11, 2012 at 11:03 AM, Aaron Turner wrote:
> So how does that work?
So how does that work? An sstable is for a single CF, but it can and
likely will have multiple rows. There is no read before write, and as I
understand it, writes are append operations.
So if you have an sstable with say 26 different rows (A-Z) already in
it with a bunch of columns and you add a new
The row key is stored only once in any sstable file.
That is, in the special case where you get an sstable file per column/value,
you are correct, but normally, I guess most of us are storing more per key.
Regards,
Terje
On 11 Aug 2012, at 10:34, Aaron Turner wrote:
Curious, but does cassandra store the rowkey along with every
column/value pair on disk (pre-compaction) like Hbase does? If so
(which makes the most sense), I assume that's something that is
optimized during compaction?
--
Aaron Turner
http://synfin.net/ Twitter: @synfinatic
> ColumnOrSuperColumn csc = new ColumnOrSuperColumn();
> csc.setSuper_column(superColumn);
> csc.setSuper_columnIsSet(true);
> Mutation m = new Mutation();
> m.setColumn_or_supercolumn(csc);
> m.setColumn_or_supercolumnIsSet(true);
> mutations.add(m);
>
> Map<String, List<Mutation>> allMutations = new HashMap<String, List<Mutation>>();
> allMutations.put("ColumnFamilyName", mutations);
> Map<ByteBuffer, Map<String, List<Mutation>>> mutationMap =
>         new HashMap<ByteBuffer, Map<String, List<Mutation>>>();
> mutationMap.put(getByteBuffer("RowKey"), allMutations);
> client.batch_mutate(mutationMap, ConsistencyLevel.ONE);
>
> HTH!
>
> Thanks,
> Naren
> On Thu, Jan 6, 2011 at 10:42 PM, Arijit Mukherjee wrote:
>> Thank you. And is it similar if I want to search a subcolumn within a
>> given supercolumn? I mean I have the supercolumn key and the subcolumn
>> key - can I fetch the particular subcolumn?
>>
>> Can you share a small piece of example code for both?
>>
>> I'm still new int
On Fri, Jan 7, 2011 at 12:12 PM, Arijit Mukherjee wrote:
> Thank you. And is it similar if I want to search a subcolumn within a
> given supercolumn? I mean I have the supercolumn key and the subcolumn
> key - can I fetch the particular subcolumn?
>
> Can you share a small piece of example code for both?
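A sketch of the subcolumn fetch with the raw Thrift API (the names and the
byte-encoding helper are assumptions; this mirrors the batch_mutate example
earlier in the thread):

```java
// Fetch one subcolumn: set both super_column and column on the ColumnPath.
ColumnPath path = new ColumnPath("EventRecord");   // column family
path.setSuper_column(toByteBuffer("e1-ts1"));      // supercolumn name
path.setColumn(toByteBuffer("someSubColumn"));     // subcolumn within it

// The row key identifies the EventRecord row (eventKey1 in the structure below).
ColumnOrSuperColumn result =
    client.get(toByteBuffer("eventKey1"), path, ConsistencyLevel.ONE);
Column subColumn = result.getColumn();
```

To fetch several subcolumns of one supercolumn at once, get_slice with a
ColumnParent whose super_column field is set would be the analogous call.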
the Thrift APIs. I attempted to use Hector, but got myself into more confusion.
Arijit
On 7 January 2011 11:44, Roshan Dawrani wrote:
On Fri, Jan 7, 2011 at 11:39 AM, Arijit Mukherjee wrote:
> Hi
>
> I've a quick question about supercolumns.
> EventRecord = {
>     eventKey2: {
>         e2-ts1: {set of columns},
>         e2-ts2: {set of columns},
>         ...
>         e2-tsn: {set of columns}
>     }
> }
Hi
I've a quick question about supercolumns. Say I've a structure like this
(based on the supercolumn family structure mentioned in "WTF is a
SuperColumn"):
EventRecord = {
    eventKey1: {
        e1-ts1: {set of columns},
        e1-ts2: {set of columns},
        ...
        e1-tsn: {set of columns}
    },
    ...
}