Naresh, are you deploying Cassandra on Windows?
If that is the case, you may need to change the data and commitlog
directories in cassandra.yaml. You should also check the log directories.
See section 2.1 of http://wiki.apache.org/cassandra/GettingStarted
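For example, a cassandra.yaml excerpt with Windows-style paths might look like this (the paths are placeholders; point them at directories that exist on your machine):

  data_file_directories:
      - C:/cassandra/data
  commitlog_directory: C:/cassandra/commitlog
  saved_caches_directory: C:/cassandra/saved_caches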
On Tue, Aug 13, 2013 at 8:28 AM,
Carlo,
Are you reading and writing with consistency levels that match your needs [1]?
Have you tried using cassandra-cli to fetch the same data, to see if the
problem happens there too?
[1] http://wiki.apache.org/cassandra/ArchitectureOverview
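As a worked example (not from this thread): with RF=3, QUORUM is 2, so reading and writing at QUORUM gives R + W = 2 + 2 = 4 > 3 = RF and every read overlaps the latest write; reading and writing at ONE gives R + W = 2 <= 3, so stale reads are possible.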
On Wed, Jul 24, 2013 at 5:34 PM, cbert...@libero.it
Richard Low rich...@wentnet.com wrote:
On 19 July 2013 23:31, Alexis Rodríguez arodrig...@inconcertcc.com wrote:
Hi guys,
I've read here [1] that you can issue a deletion mutation with a timestamp
in the future. According to the stackoverflow post, that mechanism acts as
a scheduled deletion. But, I've
That libcassandra repo works with Cassandra 0.7.x; due to changes in the
Thrift interface, we have faced some problems with it in the past.
Maybe you can take a look at my fork of libcassandra,
https://github.com/axs-mvd/libcassandra, which we are using with Cassandra 1.1.11.
Besides that, I recommend
Shubham,
You are right; my point is that with non-schema-update Thrift calls you can
tune the consistency level used.
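For instance, in the 1.x cassandra-cli you can set the level for subsequent operations (a sketch from memory; the keyspace, CF, and row names are made up):

  [default@MyKeyspace] consistencylevel as QUORUM;
  [default@MyKeyspace] get MyCF['row1'];

The underlying Thrift calls (get, insert, etc.) take the ConsistencyLevel as an explicit argument, which is what a client library tunes.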
bye.
On Wed, Jul 3, 2013 at 10:10 AM, Shubham Mittal smsmitta...@gmail.com wrote:
hi Alexis,
Even if I create keyspaces and column families using cassandra-cli, the
column
Nicolai,
Perhaps you can check the system.log to see whether there are any errors
during compaction. Also, I believe C* 1.2.0 is not a stable version.
On Thu, May 9, 2013 at 2:43 AM, Nicolai Gylling n...@issuu.com wrote:
Hi
I have a 3-node SSD-based cluster with around 1 TB of data, RF:3, C*
Alexis, yes, compaction happens on data files. The questions are:
1. Why my disk latency is high for SSDs that are used only for the commit
log.
2. Why my compaction is not keeping up with my write traffic in spite of
low CPU, low memory, and low JVM usage.
I am adding more details to this thread.
Thanks,
Jayant K
Alexis,
Thanks for the reply. Please find some more details below.
*Core problems:* Compaction is taking a long time to finish, which affects
my reads. I have spare CPU and memory, and I want to use them to speed up
the compaction process.
*Parameters used:*
1. SSTable size: 500 MB (tried
:D
Jay, check whether your disk utilization allows you to change the
configuration the way Edward suggests. Running iostat -xkcd 1 will show you
how heavily your disk(s) are being used.
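As a rough guide to reading that output (column names as in sysstat's extended mode):

  $ iostat -xkcd 1
  # %util - how saturated the device is
  # await - average time (ms) per request, including queueing
  # svctm - average service time (ms) per request

If %util sits near 100% on the data or commitlog device, there is little headroom for more aggressive compaction settings.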
On Wed, Apr 17, 2013 at 5:26 PM, Edward Capriolo edlinuxg...@gmail.com wrote:
three things:
1) compaction throughput is
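For context, compaction throughput is capped by compaction_throughput_mb_per_sec in cassandra.yaml (default 16) and can also be changed at runtime. A sketch, with an illustrative value:

  $ nodetool setcompactionthroughput 64   # cap compaction at 64 MB/s; 0 disables throttling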
(minor or major), long after all its records have been deleted. This causes
disk usage to rise dramatically. The only way to make the SSTable files
disappear is to run "nodetool cleanup" (which takes hours to run).
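For reference, cleanup is invoked per node and optionally per keyspace (the host and keyspace names here are placeholders):

  $ nodetool -h 127.0.0.1 cleanup MyKeyspace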
Just a theory so far….
*From:* Alexis Rodríguez
Adeel,
It may be a problem on the remote node; could you check its system.log?
You might also want to check rpc_timeout_in_ms on both nodes; increasing
this parameter may help.
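For example, the relevant line in cassandra.yaml would look like this (the value is illustrative; the 1.x default is 10000, i.e. 10 seconds):

  rpc_timeout_in_ms: 20000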
On Fri, Apr 12, 2013 at 9:17 AM, adeel.ak...@panasiangroup.com wrote:
Hi,
I have started repair on
Aaron,
It seems that we are in the same situation as Nury: we are storing a lot of
~5 MB files in a CF.
This happens on a test cluster with one node running Cassandra 1.1.5; the
commitlog is on a different partition than the data directory. Normally
our tests use nearly 13 GB of data, but when
Alain,
Can you post your mdadm --detail /dev/md0 output here, as well as your
iostat -x -d output when that happens? A bad ephemeral drive on EC2 is not
unheard of.
Alexis | @alq | http://datadog.com
P.S. Also, disk utilization is not a reliable metric; iostat's await and
svctm are more useful, IMHO.
Hi guys!
We are getting the following message in our logs:
ERROR [CompactionExecutor:535] 2012-10-31 12:14:14,254 CounterContext.java
(line 381) invalid counter shard detected;
(ea9feac0-ec3b-11e1--fea7847157bf, 1, 60) and
(ea9feac0-ec3b-11e1--fea7847157bf, 1, -60) differ only in count;
Do we still have to trigger a nodetool repair on node-{01,02}?
Thanks,
Alexis
is responsible for) will get repaired on all
three nodes.
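As a sketch of the usual pattern (hypothetical schedule): run nodetool repair -pr from cron on every node, staggered so that each node's primary ranges are repaired once within gc_grace_seconds:

  # node-00 crontab:  0 2 * * 0  nodetool repair -pr
  # node-01 crontab:  0 2 * * 2  nodetool repair -pr
  # node-02 crontab:  0 2 * * 4  nodetool repair -pr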
Andrey
On Mon, Oct 15, 2012 at 11:56 AM, Alexis Midon alexismi...@gmail.com
wrote:
Hi all,
I have a 9-node cluster with a replication factor R=3. When I run repair -pr
on node-00, I see the exact same load and activity on node
Forget it, this was nonsense.
On Mon, Oct 15, 2012 at 10:05 PM, Alexis Midon alexismi...@gmail.com wrote:
I see. So if I don't use the '-pr' option, triggering repair on node-00 is
sufficient to repair the first 3 nodes; no need to cron a repair on
node-{01,02}. Correct?
thanks for your
solution seems to be the right one? Is Cassandra really a good fit for
this use case?
Thanks
Alexis Coudeyras
more info.
Alexis Lauthier
From: aaron morton aa...@thelastpickle.com
To: user@cassandra.apache.org
Sent: Tuesday, January 17, 2012, 1:49 AM
Subject: Re: Compressed families not created on new node
Eeek, HW errors.
I would guess (that's all it is) that an IO
(Thread.java:722)
How can I get the compressed families on the new node?
Thanks,
Alexis Lauthier
and insert.
--
Alexis Lê-Quôc | Datadog, Inc. | @alq
cassandra-people,
I'm trying to measure Cassandra's disk usage after inserting some columns,
in order to plan disk sizes and configurations for future deployments.
My approach is very straightforward:
clean_data (stop_cassandra, rm -rf
/var/lib/cassandra/{data,commitlog,saved_caches}/*)
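A minimal sketch of that measurement loop (the service name and the insert step are placeholders for whatever your environment uses):

  # stop Cassandra and wipe its on-disk state
  sudo service cassandra stop
  sudo rm -rf /var/lib/cassandra/{data,commitlog,saved_caches}/*
  # start fresh, load the test columns, then flush memtables to disk
  sudo service cassandra start
  ./insert_test_columns            # hypothetical test client
  nodetool flush
  # measure on-disk usage of the data directory
  du -sh /var/lib/cassandra/data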
(response, id, msg.getFrom());
    }
}
Before I dig deeper into the code, has anybody dealt with this before?
Thanks,
--
Alexis Lê-Quôc
% 92535295865117307932921825928971026432
192.168.0.5  Up  Normal  263.91 MB  12.28%  113427455640312821154458202477256070485
192.168.0.6  Up  Normal  26.21 MB   8.33%   127605887595351923798765477786913079296
--
Dikang Gu
0086 - 18611140205
--
Alexis Lê-Quôc (@alq) | Datadog
Could this be caused by old hinted handoffs for 2.3.4.193 that were processed
at that time, causing the rest of the nodes to think that 2.3.4.193 is
still present (albeit down)?
Should cleanup be run periodically? I run repair every few days (my
gc_grace period is 10 days).
--
Alexis Lê-Quôc (@datadoghq