Hi Aaron,
Thanks for your answer.
I apologize, I made a mistake in my first mail. The cluster has only 12 nodes
instead of 16 (it is a test cluster).
There are 2 datacenters b1 and s1.
Here is the result of nodetool status after adding a new node in the 1st
datacenter (dc s1):
root@node007:~#
Hello all,
The Perl script below throws TApplicationException=HASH(0x2323600).
I googled around and it seems to be a Thrift issue.
Does anyone have a clue how I can prevent this?
Regards, Hans-Peter
use strict;
use warnings;
use perlcassa;
my $obj = new perlcassa(
I can describe the keyspace just fine and I see my table (as the CREATE
TABLE seen below), but when I run
describe table nreldata, cqlsh just prints out Not in any keyspace. Am I
doing something wrong here? This is Cassandra 1.1.4 and I wanted to try to set
my bloom filter fp chance to 1.0 (i.e.
I am used to systems running a first phase calculating how many files they will
need to go through and then logging out the percent done, or X files out of
total files done. I ran this command and it is logging nothing:
nodetool upgradesstables databus5 nreldata;
I have 130Gigs of data on my node
Hello,
Has anyone already used ReverseIndexQuery from Astyanax? I was trying to
understand it, but I ran the example from the Astyanax site and could not
understand it.
Can someone help me, please?
Thanks,
--
Everton Lima Aleixo
Master's student in Computer Science at UFG
Programmer at LUPA
Hello,
I'm using v1.2.1. If I want to use desc table and I haven't done a use
keyspace, then I use desc table keyspace.tablename.
However, if I have done use keyspace, I only do a desc table tablename.
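A minimal sketch of both forms, assuming a reachable node and cqlsh on the PATH (the keyspace and table names here are placeholders, not the poster's real schema):

```shell
# Sketch only: cqlsh reads statements from stdin when piped.
cqlsh <<'EOF'
DESC TABLE mykeyspace.nreldata;   -- qualified name works without USE
USE mykeyspace;
DESC TABLE nreldata;              -- bare name works after USE
EOF
```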
On 22 February 2013 14:09, Hiller, Dean dean.hil...@nrel.gov wrote:
I can describe keyspace
So in the cli, I ran
update column family nreldata with bloom_filter_fp_chance=1.0;
Then I ran
nodetool upgradesstables databus5 nreldata;
But my bloom filter size is still around 2 gig (and I want to free up this
heap). According to the nodetool cfstats command…
Column Family: nreldata
SSTable
Yes, this is a thrift error returned by C*. You can use Data::Dumper to grab
what's in that hash ref to see if there are more clues. Throw your object in an
eval{} block and then print Dumper($@)
If you file a bug on github I can work with you there more so we don't bother
everyone on the
Hi,
My impression from reading docs is that in old versions of Cassandra, you
could create very wide rows, say with timestamps as column names for time
series data, and read an ordered slice of the row. So,
RowKey     Columns
=======    =======
RowKey1    1:val1 2:val2 3:val3 ... N:valN
With this
AFAIK this is still roughly correct:
http://thelastpickle.com/2011/04/28/Forces-of-Write-and-Read/
It includes information on the page size read from disk.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 22/02/2013,
OK. So for 10 TB, I could have at least 4 SSTable files of 2.5 TB each?
You will have many sstables, in your case 32.
Each bucket of files (files that are within 50% of the average size of the files
in a bucket) will contain 3 or fewer files.
This article provides some background, but it's
To get a good idea of how GC is performing turn on the GC logging in
cassandra-env.sh.
After a full cms GC event, see how big the tenured heap is. If it's not
reducing enough then GC will never get far enough ahead.
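For reference, cassandra-env.sh of that era ships GC logging options commented out; a sketch of the lines to uncomment (the gc.log path is an example and varies per install):

```shell
# GC logging options from cassandra-env.sh (uncomment to enable).
# The log path below is an example; adjust for your install.
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDetails"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCDateStamps"
JVM_OPTS="$JVM_OPTS -XX:+PrintHeapAtGC"
JVM_OPTS="$JVM_OPTS -XX:+PrintTenuringDistribution"
JVM_OPTS="$JVM_OPTS -XX:+PrintGCApplicationStoppedTime"
JVM_OPTS="$JVM_OPTS -Xloggc:/var/log/cassandra/gc.log"
```

After a full CMS cycle, the tenured occupancy reported in this log is what tells you whether GC is getting ahead.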
Cheers
-
Aaron Morton
Finally found it… nodetool compactionstats shows the percentage complete.
Dean
On 2/22/13 7:44 AM, Hiller, Dean dean.hil...@nrel.gov wrote:
I am used to systems running a first phase calculating how much files it
will need to go through and then logging out the percent done or X files
out of
If you are running repair, using QUORUM, and there are no dropped writes, you
should not be getting DigestMismatch during reads.
If everything else looks good, but the request latency is higher than the CF
latency I would check that client load is evenly distributed. Then start
looking to see
We would like to take a node out of the ring and run upgradesstables while it is
not serving any reads or writes in the ring. Is this possible?
I am thinking from the documentation
1. nodetool drain
2. ANYTHING to stop reads here
3. Modify cassandra.yaml with
Couldn't you just disable thrift and leave gossip active?
On 2/22/13 9:01 AM, Hiller, Dean dean.hil...@nrel.gov wrote:
We would like to take a node out of the ring and upgradesstables while it
is not doing any writes nor reads with the ring. Is this possible?
I am thinking from the
So, it looks like repair is required if we want to add new nodes to our
platform, but I don't understand why.
Bootstrapping should take care of it. But new seed nodes do not bootstrap.
Check the logs on the nodes you added to see what messages have bootstrap in
them.
Anytime you are
nodetool compactionstats
Cheers
-
Aaron Morton
On 23/02/2013, at 3:44 AM, Hiller, Dean dean.hil...@nrel.gov wrote:
I am used to systems running a first phase calculating how much files it
Just to add though: compactionstats on an upgradesstables will only show the
sstable currently being upgraded. Overall progress of an upgradesstables
isn't exposed anywhere yet, but you can figure out how much there is to go through
the log lines.
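A rough way to do that from the shell (the exact log wording varies by Cassandra version, so treat the match pattern as an assumption to adjust):

```shell
# Watch compaction/upgrade activity in system.log; upgradesstables drives
# compaction, so rewritten sstables show up there. The pattern is a guess.
grep -i 'compact' /var/log/cassandra/system.log | tail -n 20
```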
From: aaron morton
We are trying to answer client library specific questions on the client-dev
list, see the link at the bottom here http://cassandra.apache.org/
If you can ask a more specific question I'll answer it there.
Cheers
-
Aaron Morton
Bloom Filter Space Used: 2318392048
Just to be sane, do a quick check of the -Filter.db files on disk for this CF.
If they are very small, try a restart on the node.
Number of Keys (estimate): 1249133696
Hey a billion rows on a node, what an age we live in :)
Cheers
-
Aaron
Does this effectively create the same storage structure?
Yes.
SELECT Value FROM X WHERE RowKey = 'RowKey1' AND TimeStamp BETWEEN 100 AND
1000;
select value from X where RowKey = 'foo' and timestamp >= 100 and timestamp <=
1000;
I also don't understand some of the things like WITH COMPACT
dropped this secondary index after a while.
I assume you use UPDATE COLUMN FAMILY in the CLI.
How can I avoid this secondary index building on node join?
Check the schema using show schema in the cli.
Check that all nodes in the cluster have the same schema, using describe
cluster in the cli.
To stop all writes and reads disable thrift and gossip via nodetool.
This will not stop any in progress repair sessions nor disconnect fat clients
if you have them.
There are also command line args, cassandra.start_rpc and cassandra.join_ring, which
do the same thing.
You can also change the
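Putting those points together, a sketch of the maintenance sequence (the keyspace and table names are examples; this assumes nodetool targets the node in question):

```shell
# Stop serving clients and leave the ring from the other nodes' point of view.
nodetool disablethrift
nodetool disablegossip

# Rewrite the sstables (keyspace/column family names are examples).
nodetool upgradesstables mykeyspace nreldata

# Rejoin.
nodetool enablegossip
nodetool enablethrift

# Alternatively, restart the node with JVM system properties:
#   -Dcassandra.start_rpc=false -Dcassandra.join_ring=false
```

Note this will not interrupt in-progress repair sessions or disconnect fat clients, as mentioned above.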
Hello,
Still doing research before we potentially move one of our column
families from Size Tiered to Leveled compaction this weekend. I was doing
some research around some of the bugs that were filed against leveled
compaction in Cassandra and I found this:
Thanks, but I found out it is still running. It looks like I have about a 5
hour wait left for my upgradesstables (I have waited 4 hours already). I will check
the bloom filter after that.
Out of curiosity, if I had much wider rows (i.e. 900k per row), will
compaction run
So, it turns out we don't have enough I/O going on for our upgradesstables, but
it is really hitting the upper bounds of memory (8G) and our CPU is pretty low
as well.
At any rate, we are trying to remove a 2 gig bloom filter on a column family.
Can we do the following:
1. Disable thrift/gossip