"system_auth" not my table.
On 04/11/2017 07:12 AM, Oskar Kjellin wrote:
> You changed to 6 nodes because you were running out of disk? But you
> still replicate 100% to all so you don't gain anything
>
>
>
> On 10 Apr 2017, at 13:48, Cogumelos Maravilha
>
You changed to 6 nodes because you were running out of disk? But you still
replicate 100% to all so you don't gain anything
> On 10 Apr 2017, at 13:48, Cogumelos Maravilha
> wrote:
>
> No.
>
> nodetool status, nodetool describecluster also nodetool ring shows a
@cassandra.apache.org
Subject: Re: Node always dieing
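Oskar's point about replicating 100% can be made concrete: with replication_factor equal to the node count, every node stores a full copy, so adding nodes adds no unique-data capacity. A back-of-envelope sketch (the 475 GB per-node figure is illustrative, roughly an i3.large NVMe volume, not a number from this thread):

```shell
# usable unique-data capacity ~= nodes * per_node_disk / replication_factor
nodes=6; disk_gb=475
rf=6; echo "RF=$rf: $(( nodes * disk_gb / rf )) GB"   # full copy on every node
rf=3; echo "RF=$rf: $(( nodes * disk_gb / rf )) GB"   # now extra nodes add space
```

With RF equal to the node count the cluster can hold no more unique data than a single node's disk; dropping RF to 3 doubles it here.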
> And the first Notice:
> .SimpleSeedProvider{seeds=10.100.100.19, 10.100.100.85, 10.100.100.185,
> 10.100.100.161, 10.100.100.52, 10.100.1000.213};
>
> Why do you have all six of your nodes as seeds? Is it possible that the
> last one you added used itself as the seed and is isolated?

No.

nodetool status, nodetool describecluster and nodetool ring all show a
correct cluster. Not all nodes need to be seeds, but they can be.

I had also run: ALTER KEYSPACE system_auth WITH REPLICATION = { 'class' :
'SimpleStrategy', 'replication_factor' : 6 } AND durable_writes = false;
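Checking what each node actually has configured for seeds is a one-liner. A sketch against a fixture file (paths and seed addresses are illustrative; on a real node point the grep at the installed cassandra.yaml, e.g. /etc/cassandra/cassandra.yaml):

```shell
# Write a small fixture resembling the seed_provider stanza, then pull
# the seed list out of it.
cat > /tmp/seeds-demo.yaml <<'EOF'
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "10.100.100.19,10.100.100.85"
EOF
grep -o 'seeds: "[^"]*"' /tmp/seeds-demo.yaml
```

Running this on every node and diffing the output is a quick way to catch a node that only lists itself as a seed.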
On Thu, Apr 6, 2017 at 6:48 AM,

There's a tweak. I forgot to put this in the new instance, at
/lib/udev/rules.d/:

cat 40-vm-hotadd.rules
# On Hyper-V and Xen Virtual Machines we want to add memory and cpus as
# soon as they appear
ATTR{[dmi/id]sys_vendor}=="Microsoft Corporation",
ATTR{[dmi/id]product_name}=="Virtual

i3 instances seem to be having these issues more than the other instance
types. Not the first report I've heard about this.
Regards,
Carlos Juzarte Rolo
Cassandra Consultant / Datastax Certified Architect / Cassandra MVP
Pythian - Love your data
rolo@pythian | Twitter: @cjrolo | Skype: cjr2k3 | Linkedin:
Yes, but this time I'm going to give it lots of time between killing and
pickup. Thanks a lot.

On 04/06/2017 05:31 PM, Avi Kivity wrote:
> Your disk is bad. Kill that instance and hope someone else gets it.
>
> On 04/06/2017 07:27 PM, Cogumelos Maravilha wrote:
>> Interesting
>>
>> [  720.693768] blk_update_request: I/O error, dev nvme0n1, sector 1397303056
>> [  750.698840] blk_update_request: I/O error, dev nvme0n1, sector 1397303080
>> [ 1416.202103] blk_update_request: I/O error, dev nvme0n1, sector 1397303080
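Spotting a dying disk like this is easy to script. A sketch that greps a fixture file reusing the exact lines above (on a live box you would pipe `dmesg` straight into the grep instead):

```shell
# Count block-layer I/O errors in a kernel log.
# Live equivalent: dmesg | grep -c 'blk_update_request: I/O error'
cat > /tmp/kern-demo.log <<'EOF'
[  720.693768] blk_update_request: I/O error, dev nvme0n1, sector 1397303056
[  750.698840] blk_update_request: I/O error, dev nvme0n1, sector 1397303080
[ 1416.202103] blk_update_request: I/O error, dev nvme0n1, sector 1397303080
EOF
grep -c 'blk_update_request: I/O error' /tmp/kern-demo.log
```

Any non-zero count against an instance-store NVMe device is a strong signal to replace the instance rather than debug Cassandra.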
On 04/06/2017 05:26 PM, Avi Kivity wrote:
> Is there anything in dmesg?

On 04/06/2017 07:25 PM, Cogumelos Maravilha wrote:

Now it dies and restarts (systemd) without logging why.

system.log:
INFO [Native-Transport-Requests-2] 2017-04-06 16:06:55,362
AuthCache.java:172 - (Re)initializing RolesCache (validity period
/update interval/max entries) (2000/2000/1000)
INFO [main] 2017-04-06 16:17:42,535
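When systemd restarts a service with nothing useful in the application log, the journal usually records why (OOM kill, signal, non-zero exit). On a live node that would be `journalctl -u cassandra -n 50` (unit name assumed). The sketch below greps an invented excerpt for the usual culprits:

```shell
# Fixture journal excerpt (made up for illustration only).
cat > /tmp/journal-demo.txt <<'EOF'
Apr 06 16:17:40 node6 systemd[1]: cassandra.service: Main process exited, code=killed, status=9/KILL
Apr 06 16:17:40 node6 systemd[1]: cassandra.service: Unit entered failed state.
EOF
grep -c 'code=killed' /tmp/journal-demo.txt
```

`status=9/KILL` with no matching shutdown in system.log typically points at the kernel OOM killer or an external kill, not a Cassandra crash.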
find /mnt/cassandra/ \! -user cassandra
returns nothing.

I've found some "strange" solutions on the Internet:

chmod -R 2777 /tmp
chmod -R 2775 <cassandra folder>

Let's give it some time to see the result.

On 04/06/2017 03:14 PM, Michael Shuler wrote:
> All it takes is one frustrated `sudo cassandra` run. Checking only the
> top level directory ownership is insufficient, since root could own
> files/dirs created below the top level. Find all files not owned by user
> cassandra: `find /mnt/cassandra/ \! -user cassandra`
>
> Just another thought.
--
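Michael's check, plus the fix, can be sketched end-to-end on a scratch directory. The real target would be /mnt/cassandra/ and the cassandra user; this demo uses the current user and a /tmp path so it runs anywhere:

```shell
# Build a scratch tree, then list files NOT owned by the expected user.
demo=/tmp/cassandra-owner-demo
mkdir -p "$demo/data"
touch "$demo/data/mc-1-big-Data.db"      # stand-in SSTable file
find "$demo" -type f \! -user "$(id -un)"   # empty output = ownership is clean
# the real fix would be: chown -R cassandra:cassandra /mnt/cassandra/
```

Empty output means every file already belongs to the service account; any line printed is a file that a `sudo cassandra` run (or similar) left behind as root.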
We tested on c4 instances but EBS is too slow, so we deployed to
production on i3.

It was running with 5 nodes without problems, but we started running out
of space so we added another node. And it is this last node that is giving
problems. I've already terminated the instance and created another

There was some issue with the i3 instances and Cassandra. Did you have this
cluster running always on i3?
On Apr 6, 2017 13:06, "Cogumelos Maravilha" wrote:

Limit             Soft Limit   Hard Limit   Units
Max cpu time      unlimited    unlimited    seconds
Max file size     unlimited    unlimited    bytes
Max data size     unlimited    unlimited

Limits: You should check them in /proc/$pid/limits
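The check itself is one line. The sketch below uses the current shell's own PID as a stand-in for the Cassandra PID (Linux-only, since it reads /proc; on a node you'd substitute something like `$(pgrep -f CassandraDaemon)`):

```shell
# Show the limits Cassandra most often trips over for a running process.
grep -E 'Max (open files|locked memory|processes)' "/proc/$$/limits"
```

Soft limits are what the process actually gets; a generous hard limit with a low soft limit is a common source of "too many open files" failures.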
2017-04-06 12:48 GMT+02:00 Cogumelos Maravilha:

Yes, C* is running as cassandra:

cassand+ 2267 1 99 10:18 ? 00:02:56 java
-Xloggc:/var/log/cassandra/gc.log -ea -XX:+UseThreadPriorities -XX:Threa...

INFO [main] 2017-04-06 10:35:42,956 Config.java:474 - Node
configuration:[allocate_tokens_for_keyspace=null;

Have you checked the effective limits of a running C* process?
Is C* run as cassandra? Just to rule out missing file perms.

On 06.04.2017 12:24, "Cogumelos Maravilha" <cogumelosmaravi...@sapo.pt> wrote:
From cassandra.yaml:

hints_directory: /mnt/cassandra/hints
data_file_directories:
    - /mnt/cassandra/data
commitlog_directory: /mnt/cassandra/commitlog
saved_caches_directory: /mnt/cassandra/saved_caches

drwxr-xr-x 3 cassandra cassandra 23 Apr  5 16:03 mnt/
drwxr-xr-x 6 cassandra
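Given those four configured directories, a writability check is easy to script. A sketch over stand-in /tmp paths (swap in the /mnt/cassandra/* paths and run it as the cassandra user on a real node):

```shell
# Verify each configured directory exists and the current user can write to it.
for d in /tmp/cassandra-dirs-demo/hints /tmp/cassandra-dirs-demo/data \
         /tmp/cassandra-dirs-demo/commitlog /tmp/cassandra-dirs-demo/saved_caches; do
    mkdir -p "$d"                        # stand-in dirs for the demo
    if [ -w "$d" ]; then echo "ok: $d"; else echo "NOT writable: $d"; fi
done
```

On a real node you would also check free space on the volume (`df -h /mnt/cassandra`), since the SSTable-flush error can equally mean a full disk.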
Cassandra cannot write an SSTable to disk. Are you sure the disk/volume
where SSTables reside (normally /var/lib/cassandra/data) is writable for
the C* user and has enough free space? The CDC warning also implies that.
The other warnings indicate you are probably not running C* as root and you