Hi,
I would appreciate any help.
I have a cluster of 4 servers, replication factor 3, running version 1.0.0.
The cluster was upgraded from 0.7.8.
I am trying to upgrade to 1.0.2, and when I try to start the first upgraded
server I get the following error:
ERROR [WRITE-/10.5.6.102]
On Thu, 2011-11-10 at 22:35 -0800, footh wrote:
UUID startId = new UUID(UUIDGen.createTime(start),
UUIDGen.getClockSeqAndNode());
UUID finishId = new UUID(UUIDGen.createTime(finish),
UUIDGen.getClockSeqAndNode());
Have you got comparator_type = TimeUUIDType?
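For reference, the same slice bounds can be sketched with Python's stdlib uuid module (the Java snippet above uses a client-side UUIDGen helper; the offset constant and the lowest/highest clock-seq/node trick below are the usual TimeUUID range idiom, not anything confirmed in this thread):

```python
import uuid

# 100-ns intervals between the UUID epoch (1582-10-15) and the Unix epoch
UUID_EPOCH_OFFSET = 0x01B21DD213814000

def time_uuid_bound(unix_seconds, lowest=True):
    # Encode the wall-clock time in the v1 UUID timestamp fields; use the
    # smallest (or largest) clock-seq/node so the UUID sorts at the very
    # start (or end) of that instant under TimeUUIDType comparison.
    ts = int(unix_seconds * 10_000_000) + UUID_EPOCH_OFFSET
    time_low = ts & 0xFFFFFFFF
    time_mid = (ts >> 32) & 0xFFFF
    time_hi = (ts >> 48) & 0x0FFF
    clock_seq = 0x0000 if lowest else 0x3FFF
    node = 0x000000000000 if lowest else 0xFFFFFFFFFFFF
    # version=1 makes the constructor set the version and variant bits
    return uuid.UUID(fields=(time_low, time_mid, time_hi,
                             clock_seq >> 8, clock_seq & 0xFF, node),
                     version=1)

start = time_uuid_bound(1321142400)                 # slice start
finish = time_uuid_bound(1321228800, lowest=False)  # slice end
```

The important property is only that the comparator sees `start` as sorting before every UUID generated during the window and `finish` after every one.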
~mck
--
The old law
On Sun, Nov 13, 2011 at 4:35 AM, Michael Vaknine micha...@citypath.com wrote:
I am trying to upgrade to 1.0.2 and when I try to start the first upgraded
server I get the following error
ERROR [WRITE-/10.5.6.102] 2011-11-13 10:20:37,447
AbstractCassandraDaemon.java (line 133) Fatal exception
[1] I'm not particularly worried about transient conditions, so that's
ok. I think there's still the possibility of a non-transient false
positive... if 2 writes were to happen at exactly the same time (highly
unlikely), e.g.
1) A reads previous location (L1) from index entries
2) B reads
You are right this solved the problem.
I do not understand why version 1.0.0 was not affected since I used the same
configuration yaml file.
Thank you.
Michael Vaknine
-Original Message-
From: Brandon Williams [mailto:dri...@gmail.com]
Sent: Sunday, November 13, 2011 4:48 PM
To:
I believe https://issues.apache.org/jira/browse/CASSANDRA-2802 broke
it. I've created https://issues.apache.org/jira/browse/CASSANDRA-3489
to address this separately.
On Sun, Nov 13, 2011 at 9:37 AM, Michael Vaknine micha...@citypath.com wrote:
You are right this solved the problem.
I do not
Let's catch up. I am available in Mumbai.
Using C* in dev env. Love to share or hear experiences.
On Fri, Nov 11, 2011 at 10:25 PM, Adi adi.pan...@gmail.com wrote:
Hey GeekTalks/any other Cassandra users around Mumbai/Pune,
I will be around Mumbai from the last week of Nov through the third week of
https://issues.apache.org/jira/browse/CASSANDRA-3488
On Nov 12, 2011, at 9:52 AM, Jeremy Hanna wrote:
It sounds like that's just a message in compactionstats that's a no-op. This
is reporting for about an hour that it's building a secondary index on a
specific column family. Not sure if
I would like to know it too - actually it should be similar, plus there are
no dependencies on sun.misc packages.
I don't remember the discussion, but I assume the reason is that
allocateDirect() is not freeable except by waiting for soft ref
counting. This is enforced by the API in order to
Due to some application dependencies I've been holding off on a
Cassandra upgrade for a while. Now that my last application using the
old thrift client is updated I have the green light to prep my
upgrade. Since I'm on 0.6 the upgrade is obviously a bit trickier. Do
the standard instructions for
I need to create a mapping from userId(s) to username(s) that provides
fast lookups.
Also I need to provide a mapping from username to userId in order to
implement search functionality in my application.
What could be a good strategy to implement this? (I would welcome
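One common strategy is to maintain both directions explicitly, writing each pair to both places. A minimal sketch with plain dicts standing in for two column families (all names here are made up for illustration):

```python
# Two mirrored "column families": one keyed by userId, one by username.
# Each registration writes to both, so either direction is one key lookup.
id_to_name = {}
name_to_id = {}

def register(user_id, username):
    id_to_name[user_id] = username
    name_to_id[username] = user_id

def search_prefix(prefix):
    # Prefix search over usernames for the search feature; in Cassandra
    # an ordered comparator on the username CF makes this a range scan.
    return sorted(n for n in name_to_id if n.startswith(prefix))

register(42, "maxim")
register(43, "michael")
```

The cost is double writes (and handling renames as delete-plus-insert in both maps), in exchange for O(1) lookups in each direction.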
Yes, correct, it's not going to clean itself. Using your example with
a little more detail:
1) A(T1) reads previous location (T0,L0) from index_entries for user U0
2) B(T2) reads previous location (T0,L0) from index_entries for user U0
3) A(T1) deletes previous location (T0,L0) from
I've done more experimentation and the behavior persists: I start with a
normal dataset which is searchable by a secondary index. I select by
that index the entries that match a certain criterion, then delete
those. I tried two methods of deletion -- individual cf.remove() as well
as batch
On Sun, Nov 13, 2011 at 5:57 PM, Maxim Potekhin potek...@bnl.gov wrote:
I've done more experimentation and the behavior persists: I start with a
normal dataset which is searchable by a secondary index. I select by that
index the entries that match a certain criterion, then delete those. I
Deletions in Cassandra imply the use of tombstones (see
http://wiki.apache.org/cassandra/DistributedDeletes) and under some
circumstances reads can turn O(n) with respect to the number of
deleted columns. It sounds like this is what you're seeing.
For example, suppose you're inserting
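A toy illustration of the O(n) effect described above (a list standing in for an on-disk row, with None standing in for a tombstone; numbers are made up):

```python
# Toy model: a row as an ordered list of (column_name, value) pairs,
# where value None stands in for a tombstone left by a delete.
N = 100_000
row = [("col%06d" % i, None) for i in range(N)]   # N deleted columns
row.append(("col_live", "data"))                  # one live column at the end

def first_live(columns):
    # A read asking for the first live column still has to walk past
    # every tombstone before it - work proportional to the deletions.
    scanned = 0
    for name, value in columns:
        scanned += 1
        if value is not None:
            return name, scanned
    return None, scanned

name, scanned = first_live(row)   # has to scan all N tombstones first
```

Until gc_grace_seconds passes and compaction purges the tombstones, every such read pays that scan again.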
Thanks to all for valuable insight!
Two comments:
a) this is not actually time series data, but yes, each item has
a timestamp and thus chronological attribution.
b) so, what do you practically recommend? I need to delete
half a million to a million entries daily, then insert fresh data.
What's
On Sun, Nov 13, 2011 at 6:55 PM, Maxim Potekhin potek...@bnl.gov wrote:
Thanks to all for valuable insight!
Two comments:
a) this is not actually time series data, but yes, each item has
a timestamp and thus chronological attribution.
b) so, what do you practically recommend? I need to
to group the data into buckets each representing one day in
the system's activity.
I create the DATE attribute and add it to each row, e.g. it's a column
{'DATE','2013'}.
I create an index on that column, along with a few others.
Now, I want to rotate the data out of my database on a daily basis. For
that, I need to select on 'DATE' and then do
to group the data into buckets each representing one day in the
system's activity.
I create the DATE attribute and add it to each row, e.g. it's a column
{'DATE','2013'}.
Hmm, so why is pushing this into the row key and then deleting the
entire row not acceptable? (this is what the link I gave would
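The row-key suggestion can be sketched with a toy model (a dict of dicts standing in for a column family keyed by day bucket; names are illustrative). Rotating a day out becomes one row deletion instead of an indexed select followed by hundreds of thousands of column tombstones:

```python
# Rows keyed by day bucket: dropping a day is a single row delete.
rows = {}

def insert(day, item_id, payload):
    # Write each item under its day's row, column name = item id
    rows.setdefault(day, {})[item_id] = payload

def rotate_out(day):
    rows.pop(day, None)   # one deletion, regardless of row size

insert("2011-11-12", "job1", "payload1")
insert("2011-11-13", "job2", "payload2")
rotate_out("2011-11-12")
```

The trade-off is that lookups by item id alone now need the day as well (or a separate item-to-day mapping), which may be why it was not acceptable here.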
I do limit the number of rows I'm asking for in Pycassa. Queries on primary
keys still work fine,
Is it feasible in your situation to keep track of the oldest possible
data (for example, if there is a single sequential writer that rotates
old entries away it could keep a record of what the
Thanks Peter,
I'm not sure I entirely follow. By the oldest data, do you mean the
primary key corresponding to the limit of the time horizon? Unfortunately,
unique IDs and the timestamps do not correlate, in the sense that
chronologically newer entries might have a smaller sequential ID. That's
Hello
Say I have 4 nodes: A, B, C and D and wish to have consistency level
for writes defined in such as way that writes meet the following
consistency level:
(A or B) AND C AND !D,
i.e. either of A or B will suffice and C to be included into
consistency level as well. But the write should not