We use the COPY command to export a file from both the source and the
destination. After that you can use a diff tool.
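A minimal sketch, assuming cqlsh access to both clusters (the keyspace,
table, and host names are hypothetical placeholders):

    cqlsh source-host -e "COPY myks.mytable TO 'source.csv'"
    cqlsh dest-host -e "COPY myks.mytable TO 'dest.csv'"
    diff source.csv dest.csv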
On Mon, Oct 30, 2017 at 10:11 PM Pradeep Chhetri
wrote:
> Hi,
>
> We are taking daily snapshots for backing up our cassandra data and then
> use our backups to restore in
Does nodetool stopdaemon implicitly drain too, or should we invoke drain
and then stopdaemon?
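That is, whether the explicit two-step sequence below is redundant:

    nodetool drain       # flush memtables and stop accepting client writes
    nodetool stopdaemon  # stop the Cassandra process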
On Mon, Oct 16, 2017 at 4:54 AM, Simon Fontana Oscarsson <
simon.fontana.oscars...@ericsson.com> wrote:
> Looking at the code in trunk, the stopdaemon command invokes the
> CassandraDaemon.stop()
If you already have a regular cadence of repair, then you can set
"commit_failure_policy" to ignore in cassandra.yaml so that the C* process
does not crash on a corrupt commit log.
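That is a one-line change in cassandra.yaml (note the trade-off: ignored
commit log errors can mean losing writes that were only in the corrupt
segment):

    # cassandra.yaml -- valid values: die, stop, stop_commit, ignore
    commit_failure_policy: ignore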
On Fri, Jul 7, 2017 at 2:10 AM, Hannu Kröger wrote:
> Hello,
>
> yes, that’s what we do when things
I do not see the need to run repair, as long as the cluster was in a
healthy state when the new nodes were added.
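For reference, if you do want to restrict repair to a single datacenter,
it is invoked as below (DC1 is a hypothetical datacenter name):

    nodetool repair -dc DC1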
On Fri, Jul 7, 2017 at 8:37 AM, vasu gunja wrote:
> Hi,
>
> I have a question regarding "nodetool repair -dc" option. recently we
> added multiple nodes to one DC center, we
Hi Cassandra-Users,
C* 3.0.13 RELEASE HAS A CORNER CASE BUG ON SCHEMA UPDATE, WHICH CORRUPTS
THE DATA. PLEASE DO NOT UPDATE SCHEMA. OUR TEAM IS WORKING ON FIXING THE
ISSUE!
Thanks, Varun
Akhil,
As per the blog, nodetool status shows data size for node1 even for token
ranges it does not own. Isn't this a bug in Cassandra?
Yes, the on-disk data will be present, but it should be reflected in
nodetool status.
On Thu, Jun 15, 2017 at 6:17 PM, Akhil Mehra wrote:
>
Can you please check the Cassandra stats to see if the cluster is under too
much load? This is the symptom, not the root cause.
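One quick way to check, for example (tpstats reports pending and dropped
operations per thread pool, compactionstats shows compaction backlog):

    nodetool tpstats
    nodetool compactionstats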
On Tue, May 30, 2017 at 2:33 AM, Abhishek Kumar Maheshwari <
abhishek.maheshw...@timesinternet.in> wrote:
> Hi All,
>
> Please let me know why this debug log is coming:
>
Can you please check whether you have incremental backup enabled and
whether snapshots are occupying the space?
If so, run the nodetool clearsnapshot command.
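For example, on recent versions (clearsnapshot without arguments removes
all snapshots; the tag below is a hypothetical placeholder):

    nodetool listsnapshots
    nodetool clearsnapshot -t mysnapshot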
On Tue, May 30, 2017 at 11:12 AM, Daniel Steuernol
wrote:
> It's 3-4TB per node, and by load rises, I'm talking about load as reported
>
I am missing the point: why do you want to re-trigger the process after
repair? Repair will sync the data correctly.
On Mon, May 29, 2017 at 8:07 AM, Jan Algermissen wrote:
> Hi,
>
> is it possible to extract from repair logs the writetime of the writes
> that needed
We upgraded from 2.2.5 to 3.0.11 and it works fine. I would suggest not going
with 3.0.13; we are seeing some issues with schema mismatch due to which we had
to roll back to 3.0.11.
Thanks,
Varun
> On May 19, 2017, at 7:43 AM, Stefano Ortolani wrote:
>
> Here
Yes, the bugs need to be fixed, but as a workaround in a dev environment, you
can enable the cassandra.yaml option (commit_failure_policy) to skip over any
corrupted commit log file.
Thanks,
Varun
> On May 19, 2017, at 11:31 AM, Jeff Jirsa wrote:
>
>
>
>> On 2017-05-19 08:13 (-0700), Haris Altaf
> down, I'm
> not sure how else it would be pre-empted. No one else on the team is on the
> servers and I haven't been shutting them down. There also is no java memory
> dump on the server either. It appears that the process just died.
>
>
>
> On May 11 2017, at 1:36 pm, Varun
I did not get your question completely, regarding "snapshot files are mixed
with files and backup files".
When you call nodetool snapshot, it will create a directory named with the
snapshot name if specified, or the current timestamp, at
/data///backup/. This directory will
have all sstables, metadata files and
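For reference, a tagged snapshot of a single keyspace can be taken as below
(the tag and keyspace names are hypothetical placeholders):

    nodetool snapshot -t mytag mykeyspace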
What do you mean by "no obvious error in the logs"? Do you see that the node
was drained or shut down? Are you sure no other process is calling nodetool
drain or shutdown, or pre-empting the Cassandra process?
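One way to check, assuming the default package-install log location (adjust
the path for your setup):

    grep -iE 'drain|shutdown' /var/log/cassandra/system.log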
On Thu, May 11, 2017 at 1:30 PM, Daniel Steuernol
wrote:
>
> I have a 6
If no node was down during that period, and you are using LOCAL_QUORUM
reads/writes, then yes, the above command works.
On Thu, May 11, 2017 at 11:59 AM, Gopal, Dhruva
wrote:
> Hi –
>
> I have a question on running a repair after bringing up a node that was
> down
Hi Igor,
You can set up the cluster with the configuration below.
Replication: DC1: 3 and DC2: 1.
If you are using the DataStax Java driver, use the DC-aware load balancing
policy with DC1 as the local DC, and add the DC2 node to the ignored nodes so
that requests never go to that node.
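A sketch of the keyspace side, run through cqlsh (the keyspace name is a
hypothetical placeholder):

    cqlsh -e "ALTER KEYSPACE myks WITH replication =
      {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 1}"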
Thanks,
Varun
On Wed,
Hi,
We are periodically backing up sstables and need to learn what sanity
checks should be performed after restoring them.
Thanks,
Varun