Are you saying that if a node had double the hardware capacity in every way,
it would be a bad idea to up num_tokens? I thought that was the whole idea of
that setting though?
On Thu, Aug 17, 2017 at 9:52 AM, Carlos Rolo wrote:
> No.
>
> If you would double all the hardware on that
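For context, num_tokens is a per-node setting in cassandra.yaml, and a node's
share of the ring is roughly proportional to its token count. A minimal sketch
(the values are illustrative only, not a recommendation):

    grep num_tokens /etc/cassandra/cassandra.yaml
    # num_tokens: 256    <- common vnode default
    # A node with roughly double the capacity could, in principle, carry:
    # num_tokens: 512
    # Note: num_tokens can't be changed on an already-bootstrapped node;
    # it has to be wiped and re-bootstrapped into the ring.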
This might be an interesting question - but is there a way to truncate data
from just a single node or two as a test, instead of truncating from the
entire cluster? We have time series data that we don't really mind having
gaps in, but it's taking up a huge amount of space and we're
looking to
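(TRUNCATE in CQL is cluster-wide, so a single-node test generally means taking
that node down and removing the table's SSTable files directly. A rough sketch,
with a made-up keyspace and table name and default data paths:)

    nodetool drain                # flush memtables, stop accepting writes
    sudo service cassandra stop
    rm -rf /var/lib/cassandra/data/my_ks/my_timeseries/   # hypothetical data dir
    sudo service cassandra start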
Great post Akhil! Thanks for explaining that.
On Mon, May 29, 2017 at 5:43 PM, Akhil Mehra wrote:
> Hi Preetika,
>
> After thinking about your scenario, I believe your small SSTable size might
> be due to data compression. By default, all tables enable SSTable
> compression.
>
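For anyone who wants to verify this on their own tables, the compression
options show up in the table's schema (the table name below is just an
example):

    cqlsh -e "DESCRIBE TABLE my_ks.my_table;"
    # look for the compression clause in the output, e.g.
    #   compression = {'class': 'org.apache.cassandra.io.compress.LZ4Compressor'}
    # (older releases spell the key 'sstable_compression' rather than 'class')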
We do issue longer-lived access tokens in some very specific
scenarios, so they are much less likely to be in that CF than the standard
3600s ones, but they're there.
>
> -Mark
>
We're running C* 1.2.11 and have two CFs, one called OAuth2AccessToken and
one OAuth2AccessTokensByUser. OAuth2AccessToken has the token as the row
key, and the columns are some data about the OAuth token. There's a TTL set
on it, usually 3600, but it can be higher (up to 1 month).
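A rough CQL sketch of that layout (the keyspace and non-key columns are
invented here), mainly to show where the per-write TTL goes:

    cqlsh -e "
      CREATE TABLE oauth_ks.oauth2accesstoken (
        token text PRIMARY KEY,
        client_id text,
        scope text
      );
      INSERT INTO oauth_ks.oauth2accesstoken (token, client_id, scope)
      VALUES ('abc123', 'client-1', 'read')
      USING TTL 3600;  -- the standard one-hour expiry
    "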
Now that OpsCenter doesn't work with open source installs, are there any
attempts at an open source equivalent? I'd be more interested in looking at
metrics of a running cluster and doing other tasks like managing
repairs/rolling restarts than in historical data.
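For the live-metrics side, the built-in tooling already covers some of that
ground:

    nodetool status    # ring ownership and node up/down state
    nodetool tpstats   # thread pools, pending and dropped operations
    nodetool cfstats   # per-CF latencies and SSTable counts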
Are you in VPC or EC2 Classic? Are you using enhanced networking?
On Tue, Apr 12, 2016 at 9:52 AM, Alessandro Pieri wrote:
> Hi Jack,
>
> As mentioned before, I've used m3.xlarge instance types together with two
> ephemeral disks in RAID 0 and, according to Amazon, they have
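One quick way to check whether enhanced networking is actually active on an
instance (the interface name is an example):

    ethtool -i eth0 | grep '^driver'
    # driver: ixgbevf  -> SR-IOV / enhanced networking in use
    # driver: vif      -> plain Xen paravirtual networking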
Have you tried restarting? It's possible there are open file handles to
sstables that have been compacted away. You can verify by running lsof and
grepping for DEL or deleted.
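Something along these lines (the process match is an assumption about how the
daemon appears in the process list):

    lsof -p "$(pgrep -f CassandraDaemon)" | grep -Ei 'DEL|deleted'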
If it's not that, you can run nodetool cleanup on each node to scan all of
the sstables on disk and remove anything the node is no longer responsible
for.
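For example, per node (the keyspace argument is optional; the name here is
made up):

    nodetool cleanup my_ks   # rewrites SSTables, dropping data outside this node's token ranges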