The snapshot gives you a complete, consistent set of your sstables at a point
in time. If you were copying sstables directly from a live node, you would
have to deal with files appearing and disappearing due to compactions.
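To illustrate the point: a snapshot works by hardlinking the current sstables into a snapshot directory, so a later compaction can delete the originals without touching the snapshot. A minimal sketch of that mechanism in plain shell (the file and directory names here are invented, not Cassandra's real layout):

```shell
# Simulate what a snapshot does: hardlink the live sstables so that a
# later "compaction" (deletion of the original) can't take them away.
mkdir -p data snapshots/backup1
echo "sstable contents" > data/mc-1-big-Data.db
ln data/mc-1-big-Data.db snapshots/backup1/mc-1-big-Data.db  # hardlink, no copy
rm data/mc-1-big-Data.db                                     # "compaction" removes the original
cat snapshots/backup1/mc-1-big-Data.db                       # snapshot copy is still intact
```

Because the snapshot entries are hardlinks to the same inodes, they cost almost no extra disk space at creation time.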
Cheers
Ben
On Thu, 10 May 2018 at 16:45 wrote:
> Dear Community,
>
>
>
> Is there any benefit
Dear community,
Is it possible to have a cluster in Cassandra where each of the servers is
running multiple instances of Cassandra (each instance being part of the same
cluster)?
I'm aware that if there's a single server in the cluster, then it's possible to
run multiple instances of Cassandra on
It works fine, and there can be meaningful performance benefits if you have
a sufficiently large machine with so much RAM or so much disk that a single
instance would likely underutilize those resources. You can configure it by
adding multiple IPs to the servers, and running one in
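A sketch of what the multi-instance setup described above might look like, with each instance bound to its own IP and given its own directories (the IPs and paths below are illustrative assumptions, not defaults):

```yaml
# Instance A – e.g. /etc/cassandra-a/cassandra.yaml
listen_address: 10.0.0.11
rpc_address: 10.0.0.11
data_file_directories:
    - /var/lib/cassandra-a/data
commitlog_directory: /var/lib/cassandra-a/commitlog

# Instance B – e.g. /etc/cassandra-b/cassandra.yaml
listen_address: 10.0.0.12
rpc_address: 10.0.0.12
data_file_directories:
    - /var/lib/cassandra-b/data
commitlog_directory: /var/lib/cassandra-b/commitlog
```

Both instances would share the same cluster_name and seed list so they join the same cluster; they just must not share data or commitlog directories.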
If you backup the current state of the sstables at the time you upload the
new sstables, you can keep a running point-in-time view without an explicit
snapshot. This is similar to what tablesnap does (
https://github.com/JeremyGrosser/tablesnap )
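The tablesnap-style idea can be sketched in shell: whenever an sstable is uploaded, also record a listing of the data directory at that instant, so the backup doubles as a point-in-time manifest (the file names here are invented):

```shell
# Each "upload" is paired with a manifest of everything live in the
# data directory at that moment, giving a point-in-time view.
mkdir -p data backup
touch data/mc-1-big-Data.db
ls data > backup/manifest-1.txt      # view after the first flush
touch data/mc-2-big-Data.db
ls data > backup/manifest-2.txt      # view after the second flush
cat backup/manifest-2.txt
```

Restoring to a point in time then just means fetching exactly the files named in the corresponding manifest.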
On Thu, May 10, 2018 at 12:30 PM, Ben Slater wrote:
On Thu, May 10, 2018 at 2:25 AM wrote:
> Dear community,
>
> Is it possible to have a cluster in Cassandra where each of the server is
> running multiple instances of Cassandra(each instance is part of the same
> cluster).
>
> I'm aware that if there's a single server in the cluster, then it's
>
+1 to what Ben said. Lynn Bender has a great reputation for building
vendor-neutral events and this is shaping up to be a really good one for
the Cassandra community. I'm devoting a lot of DataStax resources to it and
I know Ben is doing the same at Instacluster.
Now, that being said: if you want a
Sorry Ben. Instaclustr. My spell checker keeps buying vowels.
On Thu, May 10, 2018 at 10:43 AM, Patrick McFadin wrote:
> +1 to what Ben said. Lynn Bender has a great reputation for building
> vendor-neutral events and this is shaping up to be a really good one for
> the Cassandra community. I'm
Hello,
Upgraded Cassandra 2.1 to 3.0. We see data in a few columns being set to
"null". These null columns were created at row creation time.
After looking at the data, we see a pattern where an update was done on these
rows. Rows which were updated have data, but rows which were not par
Which minor version of 3.0?
--
Jeff Jirsa
> On May 11, 2018, at 2:54 AM, kooljava2 wrote:
>
>
> Hello,
>
> Upgraded Cassandra 2.1 to 3.0. We see certain data in few columns being set
> to "null". These null columns were created during the row creation time.
>
> After looking at the data s
There may be some unstated environmental dependencies at issue here. If you
run the dtests on an Ubuntu 16.04 LTS environment with the configuration
described in the dtest README.md, then when you run cqlsh by calling a ccm
Node object's run_cqlsh() function, it will run cqlsh with Python 2.7.
Hello Jeff,
2.1.19 to 3.0.15.
Thank you.
On Thursday, 10 May 2018, 17:43:58 GMT-7, Jeff Jirsa wrote:
> Which minor version of 3.0
> --
> Jeff Jirsa
> On May 11, 2018, at 2:54 AM, kooljava2 wrote:
> > Hello,
> > Upgraded Cassandra 2.1 to 3.0. We see certain data in few columns being set to
> > "n
Hi Team,
I have 64GB of total system memory. 5 node cluster.
x ~# free -m
              total        used        free      shared  buff/cache   available
Mem:          64266       17549       41592          66        5124       46151
Swap:             0           0           0
xxx
There's no single right answer. It depends a lot on the read/write patterns
and other settings (on-heap memtables, off-heap memtables, etc.).
One thing that's probably always true: if you're using ParNew/CMS, a 16G heap
is a bit large, though it may be appropriate for some read-heavy workloads, but
you'd want
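For reference, heap and GC settings for ParNew/CMS live in conf/jvm.options in Cassandra 3.x. A sketch with a smaller heap as a starting point (the sizes below are illustrative assumptions, not recommendations):

```
# conf/jvm.options (Cassandra 3.x) – illustrative values only
-Xms8G
-Xmx8G                                  # fixed heap; keep Xms == Xmx
-Xmn2G                                  # young-gen size matters a lot for ParNew
-XX:+UseParNewGC
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
```

With CMS, tuning the young generation (-Xmn) to the workload usually matters more than simply growing the total heap.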