Hi,
Here are some upgrade options:
- Standard rolling upgrade: node by node
- Fast rolling upgrade: rack by rack. If clients use CL=LOCAL_ONE then it's OK as long as one rack is UP. For higher CL it's possible assuming you have no more than one replica per rack, e.g. CL=LOCAL_QUORUM with
5,17,19,21,23
There is nothing in server logs.
On Monday I will activate debug and try again to start up the Cassandra node
Thanks
Francesco Messere
On 09/11/2018 18:51, Romain Hardouin wrote:
Ok so all nodes in Firenze are down. I thought only one was KO.
Regards
Francesco Messere
On 09/11/2018 17:48, Romain Hardouin wrote:
Hi Francesco, it can't work! Milano and Firenze, oh boy, Calcio vs Calcio
Storico X-D
Ok more seriously, "Updating topology ..." is not a problem. But you have low
resources and system misconfiguration:
- Small heap size: 3.867GiB From the logs: "Unable to lock JVM memory
(ENOMEM). This can
Note that one "user"/application can open multiple connections. You have also
the number of Thrift connections available in JMX if you run a legacy
application.
Max is right. Regarding where they come from, you can use lsof. For instance on AWS - but you can adapt it for your needs:
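For example, a sketch of such a pipeline (the native port 9042 and the lsof field positions are assumptions; the sample data below stands in for real lsof output so you can see what the awk step does):

```shell
# Group established native-transport connections by client IP.
# In real use, feed this from: lsof -nP -iTCP:9042 -sTCP:ESTABLISHED
sample='java 12345 cassandra 101u IPv4 0 0t0 TCP 10.0.0.1:9042->10.0.0.2:55000
java 12345 cassandra 102u IPv4 0 0t0 TCP 10.0.0.1:9042->10.0.0.2:55001
java 12345 cassandra 103u IPv4 0 0t0 TCP 10.0.0.1:9042->10.0.0.3:41000'
printf '%s\n' "$sample" | awk '{
  split($9, conn, "->")         # conn[2] = remote host:port
  split(conn[2], remote, ":")   # remote[1] = client IP
  print remote[1]
}' | sort | uniq -c | sort -rn
```

With the sample data this prints two clients, the heaviest (10.0.0.2, two connections) first.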
Also, you didn't mention which C* 2.0 version you're using, but prior to upgrading to 2.1.20, make sure to use the latest 2.0 - or at least >= 2.0.7
On Friday, August 3, 2018 at 13:03:39 UTC+2, Romain Hardouin wrote:
Hi Joel,
No it's not supported. C*2.0 can't stream data to C*3.11.
Make the upgrade 2.0 -> 2.1.20 then you'll be able to upgrade to 3.11.3 i.e.
2.1.20 -> 3.11.3. You can upgrade to 3.0.17 as an intermediary step (I would
do), but don't upgrade to 2.2. Also make sure to read carefully
Rocksandra is very interesting for key/value data model. Let's hope it will
land in C* upstream in the near future thanks to pluggable storage.Thanks
Dikang!
On Tuesday, March 6, 2018 at 10:06:16 UTC+1, Kyrylo Lebediev wrote:
At Teads we use Terraform, Chef, Packer and Rundeck for our AWS infrastructure.
I'll publish a blog post on Medium which talks about that; it's in the pipeline.
Terraform is awesome.
Best,
Romain
On Friday, February 9, 2018 at 00:57:01 UTC+1, Ben Wood wrote:
https://github.com/Netflix/sstable-adaptor ? -
how do we send these tables to cassandra? does a simple SCP work? - what is the
recommended size for sstables for when it does not fit a single executor
On 5 February 2018 at 18:40, Romain Hardouin <romainh...@yahoo.fr.invalid>
wrote:
Hi Julien,
We have such a use case on some clusters. If you want to insert big batches at a fast pace, the only viable solution is to generate SSTables on the Spark side and stream them to C*. Last time we benchmarked such a job we achieved 1.3 million partitions inserted per second on 3 C* nodes
Hi,
We also noticed an increase of CPU - both system and user - on our c3.4xlarge fleet. So far it's really visible with max(%user) and especially max(%system); it has doubled! I graphed a ratio "write/s / %system"; it's interesting to see how the value dropped yesterday, you can see it here:
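For what it's worth, the ratio itself is just writes/s divided by %system; a throwaway sketch with made-up sample values (the 52000 writes/s and 13 %system figures are hypothetical):

```shell
writes_per_sec=52000   # hypothetical sample from Cassandra write metrics
system_pct=13          # hypothetical sample from sar/top %system
# A falling ratio means the kernel burns more CPU per write
awk -v w="$writes_per_sec" -v s="$system_pct" 'BEGIN { printf "%.1f\n", w / s }'
```

With these sample values it prints 4000.0.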
Does "nodetool describecluster" show an actual schema disagreement? You can try "nodetool resetlocalschema" to fix the issue on the node experiencing disagreement.
Romain
On Thursday, November 9, 2017 at 02:55:22 UTC+1, Erick Ramirez wrote:
It looks like you have a
Hi,
You should read about repair maintenance:
http://cassandra.apache.org/doc/latest/operating/repair.html
Consider installing and running C* reaper to do so: http://cassandra-reaper.io/
STCS doesn't work well with TTL. I saw you have done some tuning; hard to say if it's OK without knowing the
Hi,
It might be useful to enable compaction logging with log_all subproperties.
Best,
Romain
On Friday, September 8, 2017 at 00:15:19 UTC+2, kurt greaves wrote:
Might be worth turning on debug logging for that node and when the compaction
kicks off and CPU
Hi,
Before: 1 cluster with 2 DCs, 3 nodes in each DC
Now: 1 cluster with 1 DC, 6 nodes in this DC
Is it right?
If yes, depending on the RF - and assuming NetworkTopologyStrategy - I would do:
- RF = 2 => 2 C* racks, one rack in each AZ
- RF = 3 => 3 C* racks, one rack in each AZ
In other words, I
ntext on this?
Thanks,
kant
On Fri, Mar 3, 2017 at 4:42 AM, Romain Hardouin <romainh...@yahoo.fr> wrote:
2017 at 6:51 PM, Romain Hardouin <romainh...@yahoo.fr> wrote:
Also, I should have mentioned that it would be a good idea to spawn your three
benchmark instances in the same AZ, then try with one instance on each AZ to
see how network latency affects your LWT rate. The lower latency is achievable
with three instances on the same placement group of course
ust send this over in case if it helps.
Thanks,
kant
On Tue, Feb 28, 2017 at 7:51 PM, Kant Kodali <k...@peernova.com> wrote:
Hi Romain,
Thanks again. My response are inline.
kant
On Tue, Feb 28, 2017 at 10:04 AM, Romain Hardouin <romainh...@yahoo.fr> wrote:
> we are currently using
Did you inspect system tables to see if there is some traces of your keyspace?
Did you ever drop and re-create this keyspace before that?
Lines in debug appear because fd interval is > 2 seconds (logs are in
nanoseconds). You can override intervals via -Dcassandra.fd_initial_value_ms
and
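These are JVM system properties, so they go in cassandra-env.sh; a sketch (the 4000 ms values are purely illustrative, and the companion flag name -Dcassandra.fd_max_interval_ms is an assumption to check against your version):

```shell
# Raise the failure detector's initial value and max interval
# (both default to 2000 ms) for clusters with slow gossip rounds.
JVM_OPTS="$JVM_OPTS -Dcassandra.fd_initial_value_ms=4000"
JVM_OPTS="$JVM_OPTS -Dcassandra.fd_max_interval_ms=4000"
echo "$JVM_OPTS"
```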
h CI test suites. Does this
help?
...
Daemeon C.M. Reiydelle
USA (+1) 415.501.0198
London (+44) (0) 20 8144 9872
On Wed, Mar 1, 2017 at 3:30 AM, Romain Hardouin <romainh...@yahoo.fr> wrote:
Hi all,
AWS launched i3 instances a few days ago*. NVMe SSDs seem very promising!
Has anyone already benchmarked an i3 with Cassandra, e.g. i2 vs i3? If yes, with which OS and kernel version? Did you make any system tuning for NVMe, e.g. PCIe IRQ, etc.?
We plan to make some benchmarks but Debian is not
> we are currently using 3.0.9. should we use 3.8 or 3.10
No, don't use 3.X in production unless you really need a major feature. I would advise sticking to 3.0.X (i.e. 3.0.11 now). You can backport CASSANDRA-11966 easily, but of course you have to deploy from source as a prerequisite.
> I haven't
Hi,
Regarding shared pool workers see CASSANDRA-11966. You may have to backport it
depending on your Cassandra version.
Did you try to lower compaction throughput to see if it helps? Be sure to keep
an eye on pending compactions, SSTables count and SSTable per read of course.
"alloc" is the
ndices work, I use those.
But I would like to solve this.
Cheers,
Michael
On 02.02.2017 15:06, Romain Hardouin wrote:
> Hi,
>
> What's your C* 3.X version?
> I've just tested it on 3.9 and it works:
>
> cqlsh> SELECT * FROM test.idx_static wher
Hi,
What's your C* 3.X version? I've just tested it on 3.9 and it works:
cqlsh> SELECT * FROM test.idx_static where id2=22;
 id  | added                   | id2 | source | dest
-----+-------------------------+-----+--------+------
 id1 | 2017-01-27 23:00:00.00+ | 22  |        |
Default TTL is nice to provide information on tables for ops folks. I mean, we know at a glance that data in such tables is ephemeral.
On Wednesday, February 1, 2017 at 21:47, Carlos Rolo wrote:
Awesome to know this!
Thanks Jon and DuyHai!
Regards,
Carlos Juzarte
Just a side note: increase system_auth keyspace replication factor if you're
using authentication.
On Thursday, January 12, 2017 at 14:52, Alain RODRIGUEZ wrote:
Hi,
Nodetool repair always list lots of data and never stays repaired. I think.
This might be the
we'll use it in Production any time soon.
Thanks again!
Shalom Sagges
DBA
On Mon, Dec 26, 2016 at 7:37 PM, Romain Hardouin <romainh...@yahoo.fr> wrote:
Hi Shalom,
I assume you'll use KVM virtualization, so pay attention to your stack at every level:
- Nova, e.g. CPU pinning, NUMA awareness if relevant, etc. Have a look at extra specs.
- libvirt
- KVM
- QEMU
You can also be interested by resources quota on other OpenStack VMs that will
be colocated
Hi all,
Many people here have troubles with repair so I would like to share my
experience regarding the backport of CASSANDRA-12580 "Fix merkle tree size
calculation" (thanks Paulo!) in our C* 2.1.16. I was expecting some minor
improvements but the results are impressive on some tables.
Hi Jean,
I had the same problem. I removed the lines in the /etc/init.d/cassandra template (we use Chef to deploy) and now the HeapDumpPath is not overridden anymore. The same goes for -XX:ErrorFile.
Best,
Romain
On Tuesday, October 4, 2016 at 9:25, Jean Carlo wrote:
Hi,
@Edward > In older versions you can not control when this call will timeout
truncate_request_timeout_in_ms has been available for many years, starting from 1.2. Maybe you have another setting parameter in mind?
@George Try to put Cassandra logs in debug
Best,
Romain
On Wednesday, September 28
Hi Julian,
The problem with any deletes here is that you can *read* potentially many tombstones. I mean you have two concerns:
1. Avoiding reading tombstones during a query
2. Evicting tombstones as quickly as possible to reclaim disk space
The first point is a data model consideration.
OK. If you still have issues after setting streaming_socket_timeout_in_ms != 0, consider increasing request_timeout_in_ms to a high value, say 1 or 2 minutes. See comments in https://issues.apache.org/jira/browse/CASSANDRA-7904
Regarding 2.1, be sure to test incremental repair on your data
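Concretely, that advice maps to two cassandra.yaml settings; the values below are examples only, not recommendations:

```shell
# Print a cassandra.yaml fragment with the two timeouts discussed above
cat <<'EOF'
streaming_socket_timeout_in_ms: 3600000   # non-zero so dead streams eventually fail
request_timeout_in_ms: 120000             # 2 minutes
EOF
```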
Alain, you replied faster, I didn't see your answer :-D
I meant that pending (and active) AntiEntropySessions are a simple way to check
if a repair is still running on a cluster. Also have a look at Cassandra reaper:
- https://github.com/spotify/cassandra-reaper
- https://github.com/spodkowinski/cassandra-reaper-ui
Best,
Romain
On Wednesday 21
Do you see any pending AntiEntropySessions (not AntiEntropyStage) with nodetool
tpstats on nodes?
Romain
On Wednesday, September 21, 2016 at 16:45, "Li, Guangxing" wrote:
Alain,
my script actually grep through all the log files, including those
system.log.*.
Hi,
Do you shuffle the replicas with TokenAwarePolicy?
TokenAwarePolicy(LoadBalancingPolicy childPolicy, boolean shuffleReplicas)
Best,
Romain
On Tuesday, September 20, 2016 at 15:47, Pranay akula wrote:
I was able to find the hotspots causing the load, but the
Also for testing purposes, you can send only one replica set to the Test DC.
For instance with a RF=3 and 3 C* racks, you can just rsync/sstableload one
rack. It will be faster and OK for tests.
Best,
Romain
On Tuesday, September 20, 2016 at 3:28, Michael Laws wrote:
Hi,
You should make a benchmark with cassandra-stress to find the sweet spot. With
NVMe I guess you can start with a high value, 128?
Please let us know the results of your findings, it's interesting to know if we
can go crazy with such pieces of hardware :-)
Best,
Romain
On Tuesday 20
Hi,
You can read and write the value of the following MBean via JMX: org.apache.cassandra.db:type=CompactionManager
- CoreCompactorThreads
- MaximumCompactorThreads
If you modify CoreCompactorThreads it will be effective immediately. I mean, assuming you have some pending compactions, you will
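Any generic JMX client can flip the attribute; a sketch of the set command in jmxterm syntax (the jar path, JMX port 7199, and the thread count 4 are assumptions):

```shell
# jmxterm-style command to bump compaction threads at runtime; pipe it to the
# client, e.g.: echo "$JMX_CMD" | java -jar jmxterm-uber.jar -l localhost:7199 -n
JMX_CMD="set -b org.apache.cassandra.db:type=CompactionManager CoreCompactorThreads 4"
echo "$JMX_CMD"
```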
Hi,
> More recent (I think 2.2) don't have this problem since they write hints to
>the file system as per the commit log
Flat-file hints were implemented starting from 3.0
https://issues.apache.org/jira/browse/CASSANDRA-6230
Best,
Romain
Hi,
Disk-wise it's the same because a bigint is serialized as an 8-byte ByteBuffer, and if you want to store a Long as bytes in a blob type it will take 8 bytes too, right? The difference is the validation. The blob ByteBuffer will be stored as-is whereas the bigint will be validated. So
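A quick sanity check of the 8-byte claim, assuming the usual big-endian encoding of a 64-bit Long:

```shell
# A signed 64-bit big-endian integer always packs to 8 bytes -
# the same footprint as storing the value in an 8-byte blob.
python3 -c "import struct; print(len(struct.pack('>q', 1234567890123)))"   # prints 8
```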
e keys return results instantly from cqlsh
On Tue, Sep 6, 2016 at 1:57 PM, Romain Hardouin <romainh...@yahoo.fr> wrote:
There is nothing special in the two sstablemetadata outuputs but if the
timeouts are due to a network split or overwhelmed node or something like that
you won't see a
1) Is it a typo or did you really make a giant leap from C* 1.x to 3.4 with all the C* 2.0 and C* 2.1 upgrades? (btw if I were you, I would use the latest 3.0.X)
2) Regarding NTR all time blocked (e.g. 26070160 from the logs), have a look at the patch "max_queued_ntr_property.txt":
bstone ratio for your repo.
On Mon, Sep 5, 2016 at 8:11 PM, Romain Hardouin <romainh...@yahoo.fr> wrote:
Yes, dclocal_read_repair_chance will reduce the cross-DC traffic and latency, so you can swap the values (https://issues.apache.org/jira/browse/CASSANDRA-7320). I guess the sstable_
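The swap is a one-liner per table; a sketch with hypothetical keyspace/table names:

```shell
# CQL that swaps the global read repair chance for the DC-local one;
# execute it with e.g.: cqlsh -e "$STMT"
STMT="ALTER TABLE myks.mytable WITH read_repair_chance = 0.0 AND dclocal_read_repair_chance = 0.1;"
echo "$STMT"
```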
Hi,
You don't have to worry about that unless you write with CL = ANY. The sole
method to force hints that I know is to invoke scheduleHintDelivery on
"org.apache.cassandra.db:type=HintedHandoffManager" via JMX but it takes an
endpoint as argument. If you have lots of nodes and several DCs,
count definition : is it incremented based on the number of writes
for a given name(key?) and value. This table is heavy on reads and writes. If
so, the value should be much higher?
On Mon, Sep 5, 2016 at 7:35 AM, Romain Hardouin <romainh...@yahoo.fr> wrote:
Hi,
Try to put org.apache.cassandra.db.ConsistencyLevel at DEBUG level; it could help to find a regular pattern. By the way, I see that you have set a global read repair chance: read_repair_chance = 0.1. And not the local read repair: dclocal_read_repair_chance = 0.0. Is there any reason to
Hi Jérôme,
The code in 2.2.6 allows -local and -pr:
https://github.com/apache/cassandra/blob/cassandra-2.2.6/src/java/org/apache/cassandra/service/StorageService.java#L2899
But... the options validation introduced in CASSANDRA-6455 seems to break this
yes, we use Cassandra 2.1.11 in our latest release.
From: Romain Hardouin [mailto:romainh...@yahoo.fr]
Sent: August 19, 2016 17:36
To: user@cassandra.apache
ka is the 2.1 format... I don't understand. Did you install C* 2.1?
Romain
On Friday, August 19, 2016 at 11:32, "Lu, Boying" wrote:
Hi,
There are two ways to upgrade SSTables:
- online (C* must be UP): nodetool upgradesstables
- offline (when C* is stopped): using the tool called "sstableupgrade". It's located in the bin directory of Cassandra so depending on how you installed Cassandra, it may be on the path. See
Hi,
Try this and check the yaml file path:
strace -f -e open nodetool upgradesstables 2>&1 | grep cassandra.yaml
How is C* installed (package, tarball)? Do other nodetool commands run fine? Also, did you try an offline SSTable upgrade with the sstableupgrade tool?
Best,
Romain
On Friday 12
selves and about other nodes they know about.
unreachableNodes = probe.getUnreachableNodes(); ---> i.e. if a node doesn't publish heartbeats within x seconds (using the gossip protocol), it's therefore marked 'DN: down'?
That's it?
2016-08-11 13:51 GMT+01:00 Romain Hardouin <romainh...@yahoo.f
Hi Jean Paul,
Yes, the gossiper is used. Example with down nodes:
1. The status command retrieves unreachable nodes from a NodeProbe instance: https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/tools/nodetool/Status.java#L64
2. The NodeProbe list comes from a
Yes. You can even see that some caution is taken in the code
https://github.com/apache/cassandra/blob/trunk/src/java/org/apache/cassandra/config/Config.java#L131
(But if I were you I would not rely on this. It's always better to be
explicit.)
Best,
Romain
On Wednesday, August 10, 2016 at 17:50,
> Curious why the 2.2 to 3.x upgrade path is risky at best.
I guess that upgrade from 2.2 is less tested by DataStax QA because DSE4 used C* 2.1, not 2.2. I would say the safest upgrade is 2.1 to 3.0.x.
Best,
Romain
That's good news if describecluster shows the same version on each node. Try with a high timeout like 120 seconds to see if it works. Is there a VPN between the DCs? Is there room for improvement at the network level? TCP tuning, etc. I'm not saying you won't have unreachable nodes, but it's worth
Hi,
The latency is high...
Regarding the ALTER, did you try to increase the timeout with "cqlsh --request-timeout=REQUEST_TIMEOUT"? Because the default is 10 seconds. Apart from the unreachable nodes, do you know if all nodes have the same schema version?
Best,
Romain
Just to know, did you get some errors during the nodetool upgradesstables?
Romain
On Tuesday, August 2, 2016 at 8:40, Julien Anguenot wrote:
Hey Oskar,
I would comment and add all possible information to that Jira issue…
J.
--Julien Anguenot (@anguenot)
On Aug 2,
DSE 4.8 uses C* 2.1 and DSE 5.0 uses C* 3.0. So I would say that 2.1->3.0 is
more tested by DataStax than 2.2->3.0.
On Thursday, July 14, 2016 at 11:37, Stefano Ortolani wrote:
FWIW, I've recently upgraded from 2.1 to 3.0 without issues of any sort, but
admittedly I
s from it.
- Garo
On Thu, Jul 14, 2016 at 10:54 AM, Romain Hardouin <romainh...@yahoo.fr> wrote:
Do you run C* on physical machine or in the cloud? If the topology doesn't
change too often you can have a look a Zabbix. The downside is that you have to
set up all the JMX metrics yourself... but that's also a good point because you
can have custom metrics. If you want nice graphs/dashboards
Did you upgrade from a previous version? Did you make some schema changes like compaction strategy, compression, bloom filter, etc.? What about the R/W requests? SharedPool Workers are... shared ;-) Put logs in debug to see some examples of what services are using this pool (many, actually).
Same behavior here with a very different setup. After an upgrade to 2.1.14 (from 2.0.17) I see a high load and many NTR "all time blocked". Offheap memtables lowered the blocked NTR for me; I put a comment on CASSANDRA-11363
Best,
Romain
On Wednesday, July 13, 2016 at 20:18, Yuan Fang
Put the driver logs in debug mode to see what happens. Btw I am surprised by the few requests per connection in your setup:
.setConnectionsPerHost(HostDistance.LOCAL, 20, 20)
.setMaxRequestsPerConnection(HostDistance.LOCAL, 128)
It looks like protocol v2 settings (Cassandra
> Would you know why the driver doesn't automatically change to LOCAL_SERIAL
> during a DC outage ?
I would say because *you* decide, not the driver ;-) This kind of fallback
could be achieved with a custom downgrading policy
(DowngradingConsistencyRetryPolicy [*] doesn't handle
Hi Jason,
It's difficult for the community to help you if you don't share the error ;-) What did the logs say when you ran a major compaction? (i.e. the first error you encountered)
Best,
Romain
On Wednesday, June 8, 2016 at 3:34, Jason Kania wrote:
I am running a 3
Hi,
You can't yet, see https://issues.apache.org/jira/browse/CASSANDRA-10857
Note that secondary indexes don't scale. Be aware of their limitations. If you want to change the data model of a CF, a Spark job can do the trick.
Best,
Romain
On Tuesday, June 7, 2016 at 10:51, "Lu, Boying"
during autobootstrap :)
Thanks
Anuj
--------
On Tue, 26/4/16, Romain Hardouin <romainh...@yahoo.fr> wrote:
Subject: Re: Inconsistent Reads after Restoring Snapshot
To: "user@cassandra.apache.org" <user@cassandra.apache.org>
D
You can make a restore on the new node A (don't forget to set the token(s) in
cassandra.yaml), start the node with -Dcassandra.join_ring=false and then run a
repair on it. Have a look at
https://issues.apache.org/jira/browse/CASSANDRA-6961
Best,
Romain
On Tuesday, April 26, 2016 at 4:26, Anuj
Yes you are right Anishek. If you write with LOCAL_ONE, values will be the same.
As Mohammed said, "nodetool clearsnapshot" will do the trick.
Cassandra takes a snapshot by default before keyspace/table dropping or
truncation.
You can disable this feature if it's a dev node (see auto_snapshot in cassandra.yaml), but if it's a production node it's a good thing to keep auto
What is the output on both nodes of the following command?
ls -l /var/lib/cassandra/data/system/*
If one node seems odd you can try "nodetool resetlocalschema" but the other
node must be in clean state.
Best,
Romain
On Thursday, February 11, 2016 at 11:10, kedar wrote:
Would you mind pasting the ouput for both nodes in gist/paste/whatever?
https://gist.github.com http://paste.debian.net
On Thursday, February 11, 2016 at 11:57, kedar wrote:
Thanks for the reply.
ls -l cassandra/data/* lists various *.db files
This problem is on both
Did you run "nodetool flush" on the source node? If not, the missing rows could
be in memtables.
Hi,
I assume RF > 1, right? What consistency level did you use? cqlsh uses ONE by default. Try:
cqlsh> CONSISTENCY ALL
And run your query again.
Best,
Romain
On Friday, January 29, 2016 at 13:45, Arindam Choudhury wrote:
Hi Kai,
The table schema is:
Hi Dillon,
CMIIW I suspect that you use vnodes and you want to "move one of the 256 tokens to another node". If yes, that's not possible. "nodetool move" is not allowed with vnodes:
a/operations/ops_snapshot_restore_new_cluster.html
From: Romain Hardouin [mailto:romainh...@yahoo.fr]
Sent: Wednesday, November 18, 2015 3:59 PM
To: user@cassandra.apache.org
Subject: Re: Strategy tools for taking snapshots to load in another cluster
instance
You can take a snapshot via nodetool then load sstables on your test cluster
with sstableloader:
docs.datastax.com/en/cassandra/2.1/cassandra/tools/toolsBulkloader_t.html
Sent from Yahoo Mail on Android
From:"Anishek Agarwal"
Date:Wed, Nov 18, 2015 at 11:24
The trap is that each CF will consume 1 MB of memory due to arena allocation. This might seem harmless, but if you plan thousands of CFs it means thousands of megabytes... Up to 1,000 CFs I think it could be doable, but not 10,000.
Best,
Romain
tommaso barbugli tbarbu...@gmail.com wrote on
somehow?
Thanks
Tommaso
2014-07-02 17:21 GMT+02:00 Romain HARDOUIN romain.hardo...@urssaf.fr:
Hi Maria,
It depends on which backup software and hardware you plan to use. Do you store your data on DAS or SAN?
Some hints regarding Cassandra is either to drain the node to backup or
take a Cassandra snapshot and then to backup this snapshot.
We backup our data on tape but we also store our
So you have to install a backup client on each Cassandra node. If the NetBackup client behaves like EMC Networker, beware of the resource utilization (data deduplication, compression). You may have to boost the CPUs and RAM (+2 GB) of each node.
Try with one node: make a snapshot with nodetool and
Hi,
You have to define limits for the user.
Here is an example for the user cassandra:
# cat /etc/security/limits.d/cassandra.conf
cassandra - memlock unlimited
cassandra - nofile 10
best,
Romain
opensaf dev opensaf...@gmail.com wrote on 21/05/2014 06:59:05:
From:
Well... you have already changed the limits ;-)
Keep in mind that changes in the limits.conf file will not affect
processes that are already running.
opensaf dev opensaf...@gmail.com wrote on 21/05/2014 06:59:05:
From: opensaf dev opensaf...@gmail.com
To: user@cassandra.apache.org,
Date
RF=1 means no replication.
You have to set RF=2 in order to set up mirroring.
-Romain
ng pipeli...@gmail.com wrote on 13/05/2014 19:37:08:
From: ng pipeli...@gmail.com
To: user@cassandra.apache.org user@cassandra.apache.org,
Date: 14/05/2014 04:37
Subject: Datacenter understanding
Hi,
See data_file_directories and commitlog_directory in the settings file
cassandra.yaml.
Cheers,
Romain
Hari Rajendhran hari.rajendh...@tcs.com wrote on 07/04/2014 12:56:37:
From: Hari Rajendhran hari.rajendh...@tcs.com
To: user@cassandra.apache.org,
Date: 07/04/2014 12:58
Subject
cassandra*.noarch.rpm - Install Cassandra Only
dsc*.noarch.rpm - DSC stands for DataStax Community. Install Cassandra +
OpsCenter
Donald Smith donald.sm...@audiencescience.com wrote on 27/03/2014 20:36:57:
From: Donald Smith donald.sm...@audiencescience.com
To: 'user@cassandra.apache.org'
It looks like MagnetoDB for CloudStack.
Nice Clojure project.
Pierre-Yves Ritschard p...@spootnik.org wrote on 27/03/2014 08:12:15:
From: Pierre-Yves Ritschard p...@spootnik.org
To: user user@cassandra.apache.org,
Date: 27/03/2014 08:12
Subject: [ANN] pithos is cassandra-backed S3
If you just want to play with Cassandra then it's OK.
But for production, Cassandra needs some kernel tuning.
user 01 user...@gmail.com wrote on 23/03/2014 21:52:52:
From: user 01 user...@gmail.com
To: user@cassandra.apache.org,
Date: 23/03/2014 21:53
Subject: Can't modify
You have to tune Cassandra in order to run it under a low memory
environment.
Many settings must be tuned. The link that Michael mentions provides a
quick start.
There is a point that I haven't understood. *When* did your nodes die?
Under load? Or can they be killed via OOM killer even if
4 GB is OK for a test cluster.
In the past we encountered a similar issue due to VMWare ESX's memory
overcommit (memory ballooning).
When you talk about overcommit, you talk about Linux (vm.overcommit_*) or
hypervisor (like ESX)?
prem yadav ipremya...@gmail.com wrote on 24/03/2014
Setting phi_convict_threshold to 12 is a good idea if your network is busy.
Are your VMs located in different datacenters?
Did you check if the nodes are not overloaded? An unresponsive node can be
seen as down even if it's temporary.
Romain
Phil Luckhurst phil.luckhu...@powerassure.com wrote
OpsCenter provides cluster management features such as creating a cluster and adding a node:
http://www.datastax.com/documentation/opscenter/4.0/webhelp/index.html#opsc/online_help/opscClusterAdmin_c.html
Otherwise you can use Chef, Puppet, Salt, Ansible etc.
Cheers,
Romain
Peter Lin
Since you're on RHEL 5, you have compiled Python (no package available, right?).
Have you configured Python to be built with zlib support: --with-zlib=/usr/lib?
If not, compile it with zlib and then run:
python -c 'import zlib'
No error should appear.
Romain
erwin.karb...@gmail.com wrote on
Hi,
So you had to kill -9 the process?
Is there something interesting in system.log?
Can you restart the node or are there any errors on startup?
Romain
Mikhail Mazursky ash...@gmail.com wrote on 31/10/2013 08:02:22:
From: Mikhail Mazursky ash...@gmail.com
To: user@cassandra.apache.org,