Víctor Hugo Oliveira Molinar wrote:
I guess Hector fits your requirements. The last release is pretty new.
But I'd suggest you take a look at Astyanax too.
Thanks.
C* versions 1.1 and 1.2 seem to have been released after the latest
Hector release.
Does Hector support all the newer C*
I tried https://github.com/datastax/java-driver
with the CQL3 below. It works well.
CREATE TABLE my_columnfamily (
    printer varchar,
    computer varchar,
    snapshot int,
    xml text,
    status varchar,
    PRIMARY KEY ((printer, computer), snapshot)
) WITH CLUSTERING ORDER BY (snapshot DESC);
Wow. So LCS with a bloom filter fp chance of 0.1 and an index sampling rate
of 512, on a column family of 1.7 billion rows per node, yields 100%
first-sstable reads? That sounds amazing. And I assume this is
cfhistograms output from a node that has been on 512 for a while? (I
still think
About index_interval:
1) you have to rebuild sstables (not an issue if you are evaluating, doing
test writes, etc. - not so much in production)
Are you sure of this? As I understand indexes, it's not required, because
this parameter defines the interval of the in-memory index sample, which is
Hello,
Any suggestion about it?
Whichever one you use, they both contact Cassandra via Thrift. I'd
suggest you take a look at RingCache; it can help you compute the
endpoint of the data on the client side, since otherwise Cassandra will
forward your request through the coordinator node.
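Roughly, what RingCache gives you looks like this toy sketch (a hypothetical three-node ring and toy partitioner, not the real RingCache API): map the row key to a token and pick the first node whose token is at or after it, wrapping around the ring.

```python
import bisect
import hashlib

# Hypothetical three-node ring over a 0..99 token space.
RING = [(0, "10.0.0.1"), (33, "10.0.0.2"), (66, "10.0.0.3")]
TOKENS = [t for t, _ in RING]

def token_for(key, space=100):
    """Toy partitioner: hash the row key into the token space."""
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % space

def endpoint_for(key):
    """Pick the node owning the token range the key falls into:
    the first node whose token is >= the key's token, wrapping around."""
    i = bisect.bisect_left(TOKENS, token_for(key)) % len(RING)
    return RING[i][1]

print(endpoint_for("some-row-key"))  # deterministically one of the three nodes
```

With this mapping the client can send the request straight to a replica instead of paying the extra hop through an arbitrary coordinator.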
and another solution is
Dean, what is your row size approximately?
We've been using ii = 512 for a long time because of memory issues, but
now - as the bloom filter is kept off-heap and memory is not an issue
anymore - I've reverted it to 128 to see if this improves anything. It
seems it doesn't (except that I have less
I cannot find the reference that notes having to run upgradesstables when you
change this. I really hope such complex assumptions are not forming in
my head on their own, and there actually exists some kind of reliable
reference that clears this up :-) but,
# index_interval controls the
Argh, now I think that row size has nothing to do with the ii-based
index size/efficiency (I was thinking about the need to read
index_interval / 2 entries on average from the index file before finding the
proper one, but that should have nothing to do with row size) - forget
the question;
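For intuition, the sampled-index behaviour being debated in this thread can be sketched like this (a toy model, not Cassandra's actual code): only every index_interval-th entry of the primary index is kept in memory, and a read binary-searches that sample, then scans at most index_interval entries of the full index.

```python
import bisect

def build_sample(sorted_keys, index_interval=128):
    """Keep every index_interval-th key (and its position) in memory."""
    return [(sorted_keys[i], i) for i in range(0, len(sorted_keys), index_interval)]

def lookup(sorted_keys, sample, key, index_interval=128):
    """Binary-search the in-memory sample, then scan at most
    index_interval entries of the full (on-disk) index."""
    pos = bisect.bisect_right([k for k, _ in sample], key) - 1
    if pos < 0:
        return None
    start = sample[pos][1]
    for i in range(start, min(start + index_interval, len(sorted_keys))):
        if sorted_keys[i] == key:
            return i
    return None

keys = ["key%05d" % i for i in range(1000)]
sample = build_sample(keys, 128)
print(len(sample))                       # 8 sampled entries instead of 1000
print(lookup(keys, sample, "key00777"))  # 777
```

Raising index_interval shrinks the sample's memory footprint at the cost of scanning proportionally more index entries per read; and since the sample is derived from the index, it can be rebuilt at load time without rewriting sstables.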
OK, I took a look at the source code, and for now it seems to me that we
are both partially right ( ;-) ), but changing index_interval does NOT
require rebuilding SSTables:
Yes, index sample file can be persisted (see
io/sstable/IndexSummary.java, serialize/deserialize methods +
Prior to 1.2 the index summaries were not saved on disk, and were thus
computed on startup while the sstable was loaded. In 1.2 they now are saved
on disk to make startup faster (
https://issues.apache.org/jira/browse/CASSANDRA-2392). That being said, if
the index_interval value used by a summary
Neat!
Thanks.
From: Sylvain Lebresne <sylv...@datastax.com>
Reply-To: "user@cassandra.apache.org" <user@cassandra.apache.org>
Date: Thursday 21 March 2013 10:10
To:
Hi,
A minute or so after starting the dse service (Datastax 3.0) the daemon stops.
The system.log says:
INFO [CompactionExecutor:1] 2013-03-21 14:53:18,596 CompactionTask.java (line
230) Compacted to
[/var/lib/cassandra/data/system/LocationInfo/system-LocationInfo-hf-5-Data.db,]. 540 to 337
Did you also change the seeds to match the listen address?
On Thu, Mar 21, 2013 at 9:22 AM, Sloot, Hans-Peter
hans-peter.sl...@atos.net wrote:
Hi,
A minute or so after starting the dse service (Datastax 3.0) the daemon
stops.
The system.log says:
INFO [CompactionExecutor:1]
You'll need to explain a bit about how you use cassandra.
Is compaction running ? Check with nodetool compactionstats.
Are you doing very large reads? Larger than thrift_framed_transport_size_in_mb?
cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
Take a look at nodetool gossipinfo; it will tell you which nodes this node
thinks are around.
If you can see something in gossip that should not be there you have a few
choices.
* if it's less than 3 days since a change to the ring topology, wait and see if
C* sorts it out.
* try restarting nodes
If you want to use CQL 3 please use cassandra 1.2, it has the final version of
the language.
Cheers
-
Aaron Morton
Freelance Cassandra Consultant
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 21/03/2013, at 12:51 AM, Ondřej Černoš cern...@gmail.com wrote:
Hey,
Not sure if I needed to change cassandra-topology.properties file on the
existing nodes.
If you are using the PropertyFileSnitch all nodes need to have the same
cassandra-topology.properties file.
Cheers
When I query for user_id = user1 and order_attr1 = 1991 I want to get the
order_num. Is this possible without super columns?
If you only have a few hundred columns you can read them all back and filter
client side.
Secondary indexes are used when you do not know the row you want to get
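The read-them-all-and-filter-client-side suggestion might look like this sketch (the data shapes are hypothetical stand-ins for what a slice of the user's row would return):

```python
# A wide row for user1: columns keyed by (order_attr1, order_num).
# Hypothetical data standing in for what a slice query would return.
row = {
    "user1": [
        {"order_attr1": 1990, "order_num": "A-17"},
        {"order_attr1": 1991, "order_num": "B-42"},
        {"order_attr1": 1991, "order_num": "C-07"},
        {"order_attr1": 1992, "order_num": "D-99"},
    ]
}

def orders_for(user_id, order_attr1):
    """Fetch all columns for the row, then filter on the client side."""
    columns = row.get(user_id, [])
    return [c["order_num"] for c in columns if c["order_attr1"] == order_attr1]

print(orders_for("user1", 1991))  # ['B-42', 'C-07']
```

For a few hundred columns this is one cheap row read plus trivial client work, which is why it's usually preferable to a secondary index here.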
Should we not use the removenode command instead of removenode force?
Yes.
I've passed the comment on to the DS docs team.
Thanks
On 21/03/2013, at 7:25 AM, sankalp kohli
All cassandra-topology.properties are the same.
The node add appears to be successful. I can see it using nodetool status.
I'm doing a node cleanup on the old nodes and then will do a node remove,
to remove the old node. The actual node join took about 6 hours. The wiped
node(now new node) has
DEBUG [Thrift:1] 2013-03-19 00:00:53,313 ReadCallback.java (line 79) Blockfor
is 2; setting up requests to /xx.yy.zz.146,/xx.yy.zz.143,/xx.yy.zz.145
DEBUG [Thrift:1] 2013-03-19 00:00:53,334 CassandraServer.java (line 306)
get_slice
DEBUG [Thrift:1] 2013-03-19 00:00:53,334 ReadCallback.java
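The "Blockfor is 2" in the log above is how many replicas the coordinator waits for before answering. For a replication factor of 3, QUORUM (or LOCAL_QUORUM within the local DC) works out to 2. A sketch of the arithmetic, which matches the log:

```python
def blockfor(consistency, rf):
    """Replicas the coordinator must hear from before answering a read."""
    if consistency == "ONE":
        return 1
    if consistency in ("QUORUM", "LOCAL_QUORUM"):
        return rf // 2 + 1  # a strict majority of the replicas
    if consistency == "ALL":
        return rf
    raise ValueError("unknown consistency level: %s" % consistency)

print(blockfor("QUORUM", 3))  # 2
```

The three endpoints listed in the log are the full replica set (RF=3); the coordinator fires requests to them and unblocks once the first two respond.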
Can I do a multiple node nodetool cleanup on my test cluster?
On 21 Mar 2013 17:12, Jabbar Azam aja...@gmail.com wrote:
All cassandra-topology.properties are the same.
The node add appears to be successful. I can see it using nodetool status.
I'm doing a node cleanup on the old nodes and
A heap of 1867M is kind of small. According to the discussion on this list,
it's advisable to use m1.xlarge.
+1
In cassandra-env.sh set MAX_HEAP_SIZE to 4G and HEAP_NEWSIZE to 400M
In the yaml file set
in_memory_compaction_limit_in_mb to 32
compaction_throughput_mb_per_sec to 8
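Put together, those suggestions would look roughly like this (note that cassandra-env.sh spells the young-generation setting HEAP_NEWSIZE):

```shell
# cassandra-env.sh
MAX_HEAP_SIZE="4G"
HEAP_NEWSIZE="400M"
```

And in cassandra.yaml: in_memory_compaction_limit_in_mb: 32 and compaction_throughput_mb_per_sec: 8.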
I've got a 1.1.5 cluster, and a few weeks ago I removed some nodes from it. (I
was trying to upgrade nodes from AWS' large to xlarge, and for some reason that
made sense at the time, it seemed better to double my nodes and then
decommission the smaller ones, rather than to simply rebuild the
+ Did run cfhistograms, the results are interesting (Note: row cache is
disabled):
SSTables in cfhistograms is a friend here. It tells you how many sstables were
read from per read; if it's above 3 I take a look at the data model. In
your case I would be wondering how long that row with
Yes - using NetworkTopologyStrategy
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: Thursday, March 21, 2013 10:22 AM
To: user@cassandra.apache.org
Subject: Re: Question regarding multi datacenter and LOCAL_QUORUM
DEBUG [Thrift:1] 2013-03-19 00:00:53,313 ReadCallback.java (line 79)
Astyanax is under active development at Netflix.
https://github.com/datastax/java-driver is under active development at Data
Stax.
Chose one of those IMHO.
Cheers
On
Using the unsafeAssassinateEndpoint function from JMX with the old IPs should
do the trick.
This was already discussed on this mailing list; search using
unsafeAssassinateEndpoint
as a keyword to find all you need to know about it.
Hope you'll be OK after that.
Alain
2013/3/21 Ben Chobot
Joking aside, would hinted handoff really have any role in this issue? I
have been struggling to reproduce this issue using the snapshot data taken
from our cluster and following the same upgrade process from 1.1.6 to
1.1.10. I know snapshots only link to active SSTables. What if these
returned
Thanks Alain, this seems to have stopped the log messages, even though nodetool
gossipinfo still shows all the old nodes there. (And now all sharing token 50?
I dunno where that came from.) Will they eventually fall away from the cluster,
or are they there for good?
On Mar 21, 2013, at 11:53
It had only been running for 2 hours back then, but it has been a full 24
hours now and our read ping program is still showing the same read times
pretty consistently.
Dean
On 3/21/13 1:51 AM, Andras Szerdahelyi
andras.szerdahe...@ignitionone.com wrote:
Wow. SO LCS with bloom filter fp chance
Can you provide the full create keyspace statement ?
Yes – using NetworkTopologyStrategy
mmm, maybe it thinks the other nodes are down.
Cheers
On 22/03/2013, at 6:42 AM,
Aaron
Here you go:
create keyspace sipfs
with placement_strategy = 'NetworkTopologyStrategy'
and strategy_options = {AZ1 : 3, AZ2 : 3}
and durable_writes = true;
The CFs all have the following:
create column family xxx
with column_type = 'Standard'
and comparator =
I took Brandon's suggestion in CASSANDRA-5332 and upgraded to 1.1.10 before
upgrading to 1.2.2 but the issue with nodetool ring reporting machines as
down did not resolve.
On Fri, Mar 15, 2013 at 6:35 PM, Arya Goudarzi gouda...@gmail.com wrote:
Thank you very much Aaron. I recall from the logs
The nodetool cleanup command removes data that no longer belongs to the node
the command is run on. So I'm assuming I can run nodetool cleanup on all the
old nodes in parallel. I wouldn't do this on a live cluster as it's I/O
intensive on each node.
On 21 March 2013 17:26, Jabbar Azam aja...@gmail.com
(And now all sharing token 50? I dunno where that came from.)
Not sure about what you mean.
nodetool gossipinfo still shows all the old nodes there
They must appear with a left or removed status. Off the top of my head,
this information remains for 7 days, but I'm not sure.
2013/3/21 Ben
Ah, well I'll check back in a week then. But for the record, what I meant was
that nodetool gossipinfo now has entries like:
/10.1.20.201
STATUS:LEFT,50,1364152145790
Where it shows 50 is where the token used to be, and where it still is on all
my live nodes. So it appears to me as if all my
I do not recall what the 50 means, but IIRC, the 1364152145790 is the unix
timestamp (in millisecs rather than secs) of the expire time when they _should_
go away completely.
perl -e 'print scalar(gmtime(1364152145))'
Sun Mar 24 19:09:05 2013
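The same conversion in Python, treating the gossip value as milliseconds:

```python
from datetime import datetime, timezone

expire_ms = 1364152145790  # third field of the STATUS:LEFT entry above
expire = datetime.fromtimestamp(expire_ms / 1000, tz=timezone.utc)
print(expire.strftime("%a %b %d %H:%M:%S %Y"))  # Sun Mar 24 19:09:05 2013
```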
From: Ben Chobot
Hello,
We've had a 5-node C* cluster (version 1.1.0) running for several months.
Up until now we've mostly been writing data, but now we're starting to
service more read traffic. We're seeing far more disk I/O to service these
reads than I would have anticipated.
The CF being queried consists
Answers prefixed with [PP]
From: aaron morton [mailto:aa...@thelastpickle.com]
Sent: 21 March 2013 23:11
To: user@cassandra.apache.org
Subject: Re: Unable to fetch large amount of rows
+ Did run cfhistograms, the results are interesting (Note: row cache is
disabled):
SSTables in