Re: A proposed use case, any comments and experience is appreciated

2010-10-04 Thread Jonathan Ellis
Expiring columns are 0.7 only.

An expired column behaves like a deleted column until it is compacted away.
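
To make that concrete, here is a toy in-memory sketch (illustrative names only - `SSTableSketch`, `compact`, and friends are not Cassandra's API): an expired column is still physically stored, reads treat it as deleted, and only compaction actually removes it.

```python
class SSTableSketch:
    """Toy model of a column store with expiring columns (not Cassandra internals)."""

    def __init__(self):
        self.columns = {}  # column name -> (value, expires_at or None)

    def insert(self, name, value, ttl=None, now=0):
        expires_at = now + ttl if ttl is not None else None
        self.columns[name] = (value, expires_at)

    def read(self, name, now):
        entry = self.columns.get(name)
        if entry is None:
            return None
        value, expires_at = entry
        if expires_at is not None and now >= expires_at:
            return None  # expired: behaves like a deleted column on reads
        return value

    def compact(self, now):
        # Compaction is what physically drops expired columns and reclaims space.
        self.columns = {n: (v, e) for n, (v, e) in self.columns.items()
                        if e is None or now < e}
```

Until `compact` runs, the expired column still occupies space even though reads no longer return it.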

On Mon, Oct 4, 2010 at 8:48 AM, Utku Can Topçu u...@topcu.gen.tr wrote:
 Hi Jonathan,

 Thank you for mentioning the expiring-columns feature. I didn't know
 it existed. That's really great news.
 First of all, does the current 0.6 branch support it? If not, is a
 patch available for 0.6.5 somehow?
 And about the deletion issue: if all the columns in a row expire, when
 will the row be deleted? Will I still see the row in my map inputs,
 and for how long?

 Regards,
 Utku

 On Mon, Oct 4, 2010 at 3:30 PM, Jonathan Ellis jbel...@gmail.com wrote:

 A simpler approach might be to insert expiring columns into a 2nd CF
 with a TTL of one hour.

 On Mon, Oct 4, 2010 at 5:12 AM, Utku Can Topçu u...@topcu.gen.tr wrote:
  Hey All,
 
  I'm planning to run Map/Reduce on one of the ColumnFamilies. The keys
  are formed so that they are indexed in descending order by time, and
  I'll be analyzing the data for every hour iteratively.
 
  Since the current Hadoop integration does not support partial
  column-family analysis, I feel I'll need to dump the last hour's
  data, put it on the Hadoop cluster, and do my analysis on the flat
  text file. Can you think of a better way of getting the data of a
  key range into a Hadoop cluster for analysis?
 
  Regards,
 
  Utku
 
 
 



 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of Riptano, the source for professional Cassandra support
 http://riptano.com





-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com


Re: A proposed use case, any comments and experience is appreciated

2010-10-04 Thread Jonathan Ellis
A simpler approach might be to insert expiring columns into a 2nd CF
with a TTL of one hour.
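
A sketch of this dual-write pattern, with plain dictionaries standing in for the two column families (real code would simply issue a second insert through your client with a one-hour TTL; the names here are illustrative):

```python
TTL_SECONDS = 3600  # one hour

main_cf = {}    # permanent store:  row_key -> {column: value}
recent_cf = {}  # expiring copy:    row_key -> {column: (value, expires_at)}

def write_event(row_key, column, value, now):
    """Write once to the permanent CF and once to the expiring 'last hour' CF."""
    main_cf.setdefault(row_key, {})[column] = value
    recent_cf.setdefault(row_key, {})[column] = (value, now + TTL_SECONDS)

def scan_recent(now):
    """Roughly what an hourly Map/Reduce job over the second CF would see."""
    live_rows = {}
    for row_key, columns in recent_cf.items():
        live = {c: v for c, (v, exp) in columns.items() if now < exp}
        if live:
            live_rows[row_key] = live
    return live_rows
```

The hourly job then scans only the second CF and never sees data older than the TTL, while the first CF keeps everything.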

On Mon, Oct 4, 2010 at 5:12 AM, Utku Can Topçu u...@topcu.gen.tr wrote:
 Hey All,

 I'm planning to run Map/Reduce on one of the ColumnFamilies. The keys
 are formed so that they are indexed in descending order by time, and
 I'll be analyzing the data for every hour iteratively.

 Since the current Hadoop integration does not support partial
 column-family analysis, I feel I'll need to dump the last hour's
 data, put it on the Hadoop cluster, and do my analysis on the flat
 text file. Can you think of a better way of getting the data of a
 key range into a Hadoop cluster for analysis?

 Regards,

 Utku






-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com


Re: A proposed use case, any comments and experience is appreciated

2010-10-04 Thread Utku Can Topçu
What I understand from "behaves like a deleted column" is:
- they'll be there for at most GCGraceSeconds?

On Mon, Oct 4, 2010 at 3:51 PM, Jonathan Ellis jbel...@gmail.com wrote:

 Expiring columns are 0.7 only.

 An expired column behaves like a deleted column until it is compacted away.

 On Mon, Oct 4, 2010 at 8:48 AM, Utku Can Topçu u...@topcu.gen.tr wrote:
  Hi Jonathan,
 
  Thank you for mentioning the expiring-columns feature. I didn't know
  it existed. That's really great news.
  First of all, does the current 0.6 branch support it? If not, is a
  patch available for 0.6.5 somehow?
  And about the deletion issue: if all the columns in a row expire,
  when will the row be deleted? Will I still see the row in my map
  inputs, and for how long?
 
  Regards,
  Utku
 
  On Mon, Oct 4, 2010 at 3:30 PM, Jonathan Ellis jbel...@gmail.com
 wrote:
 
  A simpler approach might be to insert expiring columns into a 2nd CF
  with a TTL of one hour.
 
  On Mon, Oct 4, 2010 at 5:12 AM, Utku Can Topçu u...@topcu.gen.tr
 wrote:
   Hey All,
  
   I'm planning to run Map/Reduce on one of the ColumnFamilies. The
   keys are formed so that they are indexed in descending order by
   time, and I'll be analyzing the data for every hour iteratively.
  
   Since the current Hadoop integration does not support partial
   column-family analysis, I feel I'll need to dump the last hour's
   data, put it on the Hadoop cluster, and do my analysis on the flat
   text file. Can you think of a better way of getting the data of a
   key range into a Hadoop cluster for analysis?
  
   Regards,
  
   Utku
  
  
  
 
 
 
  --
  Jonathan Ellis
  Project Chair, Apache Cassandra
  co-founder of Riptano, the source for professional Cassandra support
  http://riptano.com
 
 



 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of Riptano, the source for professional Cassandra support
 http://riptano.com



Re: Dazed and confused with Cassandra on EC2 ...

2010-10-04 Thread Jedd Rashbrooke
 Hi Peter,

 Thanks again for your time and thoughts on this problem.

 We think we've got a bit ahead of the problem by
 scaling back (quite savagely) the rate at which we try to
 hit the cluster.  Previously, with a surplus of optimism,
 we were throwing very big Hadoop jobs at Cassandra,
 including what I understand to be a worst-case usage
 pattern (random reads).

 Now we're throttling right back on the number of parallel
 jobs that we fire from Hadoop, and we're seeing better
 performance, in terms of the boxes generally staying up
 as far as nodetool and other interactive sessions are
 concerned.

 As discussed, we've adopted quite a number of different
 approaches with GC - at the moment we've returned to:

 JVM_OPTS= \
-ea \
-Xms2G \
-Xmx3G \
-XX:+UseParNewGC \
-XX:+UseConcMarkSweepGC \
-XX:+CMSParallelRemarkEnabled \
-XX:SurvivorRatio=8 \
-XX:MaxTenuringThreshold=1 \
-XX:+HeapDumpOnOutOfMemoryError \
-Dcom.sun.management.jmxremote.port=8080 \
-Dcom.sun.management.jmxremote.ssl=false \
-Dcom.sun.management.jmxremote.authenticate=false

 ... which is much closer to the default as shipped - the notable
 change is the heap size, which comes as 1G out of the box.

 There are some words on the 'Net - the recent pages on
 Riptano's site, in fact - that strongly encourage scaling
 out rather than beefing up the boxes - and certainly
 we're seeing far less bother from GC using a much smaller
 heap.  Previously we'd been going up to 16GB, or even
 higher, based on my earlier positive experiences of
 getting better performance from memory hog apps (eg.
 Java) by giving them more memory.  In any case, it seems
 that using large amounts of memory on EC2 is just asking
 for trouble.

 And because it's Amazon, more smaller machines generally
 works out as the same CPU grunt per dollar, of course ..
 although the management costs go up.

 To answer your last question there - we'd been using some
 pretty beefy EC2 boxes, but now we think we'll head back
 to the 2-core 7GB medium-ish sized machines.

 All IO still runs like a dog no matter how much money you
 spend, sadly.

 cheers,
 Jedd.


Re: Hardware change of a node in the cluster

2010-10-04 Thread Jedd Rashbrooke
On 4 October 2010 10:58, Utku Can Topçu u...@topcu.gen.tr wrote:
 Recently I've tried to upgrade (hw upgrade) one of the nodes in my cassandra
 cluster from ec2-small to ec2-large.

 Something that bit me on this (I've done it with both
 Cassandra and Hadoop boxes, and some problems
 might be more Hadoopy related) is hostname.

 You need to change /etc/hostname and probably (unless
 you're happy to reboot again) run the hostname command as
 well, to change it in the current running instance.

 I found it best to drop all Cass instances (drain then stop),
 just to be a bit more confident.

 j.


Re: A proposed use case, any comments and experience is appreciated

2010-10-04 Thread Utku Can Topçu
Hi Jonathan,

Thank you for mentioning the expiring-columns feature. I didn't know
it existed. That's really great news.
First of all, does the current 0.6 branch support it? If not, is a
patch available for 0.6.5 somehow?
And about the deletion issue: if all the columns in a row expire, when
will the row be deleted? Will I still see the row in my map inputs,
and for how long?

Regards,
Utku

On Mon, Oct 4, 2010 at 3:30 PM, Jonathan Ellis jbel...@gmail.com wrote:

 A simpler approach might be to insert expiring columns into a 2nd CF
 with a TTL of one hour.

 On Mon, Oct 4, 2010 at 5:12 AM, Utku Can Topçu u...@topcu.gen.tr wrote:
  Hey All,
 
  I'm planning to run Map/Reduce on one of the ColumnFamilies. The
  keys are formed so that they are indexed in descending order by
  time, and I'll be analyzing the data for every hour iteratively.
 
  Since the current Hadoop integration does not support partial
  column-family analysis, I feel I'll need to dump the last hour's
  data, put it on the Hadoop cluster, and do my analysis on the flat
  text file. Can you think of a better way of getting the data of a
  key range into a Hadoop cluster for analysis?
 
  Regards,
 
  Utku
 
 
 



 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of Riptano, the source for professional Cassandra support
 http://riptano.com



Hardware change of a node in the cluster

2010-10-04 Thread Utku Can Topçu
Hey All,

Recently I've tried to upgrade (hw upgrade) one of the nodes in my
cassandra cluster from ec2-small to ec2-large.

However, there were problems: since the IP of the new instance was
different from the previous instance's, the other nodes did not
recognize it in the ring.

So what should be the best practice for a complete hardware change of
one node in the cluster, while keeping the data it has?

Regards,

Utku


Re: Hardware change of a node in the cluster

2010-10-04 Thread Gary Dusbabek
It should work this way:

1. Move your data to the new node (scp, etc.)
2. Make sure the new node is configured to use the same token as the old node.
3. Stand up the new node.
4. Turn off the old node.

If your environment is volatile, it's probably best to run `nodetool
repair` on the new node.
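
Gary's steps as an illustrative runbook (hosts, paths, and the token value below are placeholders; adapt them to your deployment):

```shell
# 1. Copy the data files from the old node to the new one.
scp -r /var/lib/cassandra/data newhost:/var/lib/cassandra/

# 2. On the new node, configure the token the OLD node owned
#    (0.6: InitialToken in storage-conf.xml; 0.7: initial_token
#    in cassandra.yaml), e.g.:
#    initial_token: 85070591730234615865843651857942052864

# 3. Start Cassandra on the new node.

# 4. Shut down the old node.

# 5. If writes continued during the move, repair the new node:
nodetool -h newhost repair
```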

Gary.


On Mon, Oct 4, 2010 at 04:58, Utku Can Topçu u...@topcu.gen.tr wrote:
 Hey All,

 Recently I've tried to upgrade (hw upgrade) one of the nodes in my
 cassandra cluster from ec2-small to ec2-large.

 However, there were problems: since the IP of the new instance was
 different from the previous instance's, the other nodes did not
 recognize it in the ring.

 So what should be the best practice for a complete hardware change of
 one node in the cluster, while keeping the data it has?

 Regards,

 Utku



Re: 0.7.0 beta1 to beta2 rolling upgrade error

2010-10-04 Thread Jonathan Ellis
from the Upgrading section of NEWS.txt:

The Cassandra inter-node protocol is incompatible with 0.6.x
releases (and with 0.7 beta1), meaning you will have to bring your
cluster down prior to upgrading
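
In practice that means a full-stop upgrade rather than a rolling one; an illustrative outline (hostnames and version are placeholders):

```shell
# On EVERY node: flush memtables to disk before stopping.
nodetool -h node1 drain
# ... repeat for each node, then stop the Cassandra process on each.

# With the whole cluster down, upgrade the binaries on each node, e.g.:
tar xzf apache-cassandra-0.7.0-beta2-bin.tar.gz
# (carry your cassandra.yaml settings over to the new install)

# Only once every node is upgraded, start them again.
```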

On Mon, Oct 4, 2010 at 8:53 AM, Ian Rogers ian.rog...@contactclean.com wrote:

 I've tried to do a rolling upgrade of a 3-node ring from beta1 to beta2
 but got the error below.  The new node seems to come up fine - I can
 connect to it with cassandra-cli and see the keyspaces - but it doesn't
 join the ring.

 My method was:
  - stop old cassandra
  - mv old cassandra dir to new place
  - unpack
 http://mirror.lividpenguin.com/pub/apache//cassandra/0.7.0/apache-cassandra-0.7.0-beta1-bin.tar.gz

  - edit cassandra.yaml with appropriate IP addresses
  - start it


 Any ideas what's wrong? Is this possible? Any more info you need?

 Regards,

 Ian



  INFO [main] 2010-10-04 14:44:24,133 CLibrary.java (line 43) JNA not
 found. Native methods will be disabled.
  INFO [main] 2010-10-04 14:44:24,174 DatabaseDescriptor.java (line 125)
 Loading settings from file:/usr/share/cassandra/conf/cassandra.yaml
  INFO [main] 2010-10-04 14:44:24,506 DatabaseDescriptor.java (line 176)
 DiskAccessMode 'auto' determined to be standard, indexAccessMode is
 standard
  INFO [main] 2010-10-04 14:44:24,900 SSTableReader.java (line 162)
 Sampling index for /var/lib/cassandra/data/system/Schema-e-2-
  INFO [main] 2010-10-04 14:44:24,941 SSTableReader.java (line 162)
 Sampling index for /var/lib/cassandra/data/system/Schema-e-1-
  INFO [main] 2010-10-04 14:44:24,989 SSTableReader.java (line 162)
 Sampling index for /var/lib/cassandra/data/system/Migrations-e-2-
  INFO [main] 2010-10-04 14:44:24,991 SSTableReader.java (line 162)
 Sampling index for /var/lib/cassandra/data/system/Migrations-e-1-
  INFO [main] 2010-10-04 14:44:25,005 SSTableReader.java (line 162)
 Sampling index for /var/lib/cassandra/data/system/LocationInfo-e-21-
  INFO [main] 2010-10-04 14:44:25,132 DatabaseDescriptor.java (line 443)
 Loading schema version 724c8cf6-cb17-11df-9c90-ed76782a4b26
  INFO [main] 2010-10-04 14:44:26,403 SSTable.java (line 145) Deleted
 /var/lib/cassandra/data/system/LocationInfo-e-18-
  INFO [main] 2010-10-04 14:44:26,406 SSTable.java (line 145) Deleted
 /var/lib/cassandra/data/system/LocationInfo-e-19-
  INFO [main] 2010-10-04 14:44:26,409 SSTable.java (line 145) Deleted
 /var/lib/cassandra/data/system/LocationInfo-e-17-
  INFO [main] 2010-10-04 14:44:26,411 SSTable.java (line 145) Deleted
 /var/lib/cassandra/data/system/LocationInfo-e-20-
  INFO [main] 2010-10-04 14:44:26,448 CommitLog.java (line 174) Replaying
 /var/lib/cassandra/commitlog/CommitLog-1286199323936.log
  INFO [main] 2010-10-04 14:44:27,823 CommitLog.java (line 325) Finished
 reading /var/lib/cassandra/commitlog/CommitLog-1286199323936.log
  INFO [main] 2010-10-04 14:44:27,827 CommitLogSegment.java (line 50)
 Creating new commitlog segment
 /var/lib/cassandra/commitlog/CommitLog-1286199867827.log
  INFO [main] 2010-10-04 14:44:27,986 ColumnFamilyStore.java (line 459)
 switching in a fresh Memtable for LocationInfo at
 CommitLogContext(file='/var/lib/cassandra/commitlog/CommitLog-1286199867827.log',
 position=0)
  INFO [main] 2010-10-04 14:44:27,997 ColumnFamilyStore.java (line 771)
 Enqueuing flush of memtable-locationi...@1506732(17 bytes, 1 operations)
  INFO [FLUSH-WRITER-POOL:1] 2010-10-04 14:44:28,000 Memtable.java (line
 150) Writing memtable-locationi...@1506732(17 bytes, 1 operations)
  INFO [FLUSH-WRITER-POOL:1] 2010-10-04 14:44:28,322 Memtable.java (line
 157) Completed flushing
 /var/lib/cassandra/data/system/LocationInfo-e-22-Data.db
  INFO [main] 2010-10-04 14:44:28,355 CommitLog.java (line 182) Log replay
 complete
  INFO [main] 2010-10-04 14:44:28,420 StorageService.java (line 331)
 Cassandra version: 0.7.0-beta2
  INFO [main] 2010-10-04 14:44:28,421 StorageService.java (line 332) Thrift
 API version: 17.1.0
  INFO [main] 2010-10-04 14:44:28,423 SystemTable.java (line 261) Saved
 Token found: 0
  INFO [main] 2010-10-04 14:44:28,424 SystemTable.java (line 278) Saved
 ClusterName found: Test Cluster
  INFO [main] 2010-10-04 14:44:28,425 SystemTable.java (line 293) Saved
 partitioner not found. Using org.apache.cassandra.dht.RandomPartitioner
  INFO [main] 2010-10-04 14:44:28,428 ColumnFamilyStore.java (line 459)
 switching in a fresh Memtable for LocationInfo at
 CommitLogContext(file='/var/lib/cassandra/commitlog/CommitLog-1286199867827.log',
 position=276)
  INFO [main] 2010-10-04 14:44:28,429 ColumnFamilyStore.java (line 771)
 Enqueuing flush of memtable-locationi...@2883071(95 bytes, 2 operations)
  INFO [FLUSH-WRITER-POOL:1] 2010-10-04 14:44:28,430 Memtable.java (line
 150) Writing memtable-locationi...@2883071(95 bytes, 2 operations)
  INFO [FLUSH-WRITER-POOL:1] 2010-10-04 14:44:28,714 Memtable.java (line
 157) Completed flushing
 /var/lib/cassandra/data/system/LocationInfo-e-23-Data.db
  INFO [main] 2010-10-04 14:44:28,744 

Re: 0.7.0 beta1 to beta2 rolling upgrade error

2010-10-04 Thread Ian Rogers
 Thanks, I just pushed ahead with the rolling upgrade with bootstrap
off.  This just meant the beta1 cluster got smaller and disappeared
while the beta2 cluster got bigger and took over.


This is only a dev system so no writes will/should have been lost.

Ian

On 04/10/2010 14:55, Jonathan Ellis wrote:

from the Upgrading section of NEWS.txt:

 The Cassandra inter-node protocol is incompatible with 0.6.x
 releases (and with 0.7 beta1), meaning you will have to bring your
 cluster down prior to upgrading


Re: Sorting by secondary index

2010-10-04 Thread Jonathan Ellis
Yes, but probably not in 0.7.0.
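
A toy model of why (illustrative only, not Cassandra's internals): an equality-indexed scan walks the matching rows in row-key order, and an extra expression such as `ts >= X` only filters that stream - it never re-sorts it.

```python
rows = {
    "row-a": {"type": "comment", "ts": 30},
    "row-b": {"type": "comment", "ts": 10},
    "row-c": {"type": "comment", "ts": 20},
}

def index_scan(eq_column, eq_value, extra=None):
    """Return matching row keys in scan (row-key) order, never in column order."""
    hits = []
    for key in sorted(rows):          # scan order is row-key order
        columns = rows[key]
        if columns.get(eq_column) != eq_value:
            continue
        if extra is not None and not extra(columns):
            continue                  # extra expressions only filter rows out
        hits.append(key)
    return hits
```

Note that adding a `ts >= 20` expression drops rows but the survivors still come back in row-key order, not newest-first.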

On Sun, Oct 3, 2010 at 12:08 PM, Petr Odut petr.o...@gmail.com wrote:
 Thanks for the info. Will it ever be possible to iterate through a
 secondary index in sorted order (since an SI is sorted by default)? My
 use case is to display the newest comments, users, etc. An SI, from my
 point of view, fits perfectly here.

 Thanks :)
 Petr Odut.

 On 2010-10-01 18:26, Jonathan Ellis jbel...@gmail.com wrote:
 No, additional expressions (the GTE here) only affect what rows come
 back and do not affect sort order.

 On Fri, Oct 1, 2010 at 10:55 AM, Petr Odut petr.o...@gmail.com wrote:
 OK,
 I have a query with...

 --

 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of Riptano, the source for professional Ca...



-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com


Re: Sorting by secondary index

2010-10-04 Thread Petr Odut
partly good news, thanks

Petr Odut

On Mon, Oct 4, 2010 at 5:40 PM, Jonathan Ellis jbel...@gmail.com wrote:

 Yes, but probably not in 0.7.0.

 On Sun, Oct 3, 2010 at 12:08 PM, Petr Odut petr.o...@gmail.com wrote:
  Thanks for the info. Will it ever be possible to iterate through a
  secondary index in sorted order (since an SI is sorted by default)?
  My use case is to display the newest comments, users, etc. An SI,
  from my point of view, fits perfectly here.
 
  Thanks :)
  Petr Odut.
 
  On 2010-10-01 18:26, Jonathan Ellis jbel...@gmail.com wrote:
  No, additional expressions (the GTE here) only affect what rows come
  back and do not affect sort order.
 
  On Fri, Oct 1, 2010 at 10:55 AM, Petr Odut petr.o...@gmail.com wrote:
  OK,
  I have a query with...
 
  --
 
  Jonathan Ellis
  Project Chair, Apache Cassandra
  co-founder of Riptano, the source for professional Ca...



 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder of Riptano, the source for professional Cassandra support
 http://riptano.com



first step with Cassandra

2010-10-04 Thread Himanshu Jani
Hello all,

I am here with a very basic question.
I am very new to Cassandra, slowly migrating from the RDBMS world. Is
there any worked example that I can use to learn Cassandra? I have
lots of documentation around Cassandra, but a simple worked example
would be of great help in understanding Cassandra and working with it
- e.g. setting up Cassandra on a single node, creating a simple data
store and accessing it, with a step-by-step guide and sample code.

Thanks very much and best regards
Himanshu

-- 
Even IMPOSSIBLE says I M POSSIBLE


Re: first step with Cassandra

2010-10-04 Thread Jonathan Ellis
Did you see twissandra linked from
http://wiki.apache.org/cassandra/ArticlesAndPresentations ?

(Twissandra is targeted at 0.7 now, btw.)

On Mon, Oct 4, 2010 at 10:56 AM, Himanshu Jani himanshu.j...@gmail.com wrote:
 Hello all,

 I am here with a very basic question.
 I am very new to Cassandra, slowly migrating from the RDBMS world. Is
 there any worked example that I can use to learn Cassandra? I have
 lots of documentation around Cassandra, but a simple worked example
 would be of great help in understanding Cassandra and working with it
 - e.g. setting up Cassandra on a single node, creating a simple data
 store and accessing it, with a step-by-step guide and sample code.

 Thanks very much and best regards
 Himanshu

 --
 Even IMPOSSIBLE says I M POSSIBLE




-- 
Jonathan Ellis
Project Chair, Apache Cassandra
co-founder of Riptano, the source for professional Cassandra support
http://riptano.com


Re: first step with Cassandra

2010-10-04 Thread Petr Odut
http://wiki.apache.org/cassandra/UseCases

On Mon, Oct 4, 2010 at 5:56 PM, Himanshu Jani himanshu.j...@gmail.com wrote:

 Hello all,

 I am here with a very basic question.
 I am very new to Cassandra, slowly migrating from the RDBMS world. Is
 there any worked example that I can use to learn Cassandra? I have
 lots of documentation around Cassandra, but a simple worked example
 would be of great help in understanding Cassandra and working with it
 - e.g. setting up Cassandra on a single node, creating a simple data
 store and accessing it, with a step-by-step guide and sample code.

 Thanks very much and best regards
 Himanshu

 --
 Even IMPOSSIBLE says I M POSSIBLE




-- 
Petr Odut [petr.o...@gmail.com]


Re: first step with Cassandra

2010-10-04 Thread Juho Mäkinen
I posted a real-life example of how we used Cassandra to store data
for a Facebook-chat-like application. Check it out at
http://www.juhonkoti.net/2010/09/25/example-how-to-model-your-data-into-nosql-with-cassandra

 - Juho Mäkinen

On Mon, Oct 4, 2010 at 7:04 PM, Petr Odut petr.o...@gmail.com wrote:
 http://wiki.apache.org/cassandra/UseCases

 On Mon, Oct 4, 2010 at 5:56 PM, Himanshu Jani himanshu.j...@gmail.com
 wrote:

 Hello all,

 I am here with a very basic question.
 I am very new to Cassandra, slowly migrating from the RDBMS world. Is
 there any worked example that I can use to learn Cassandra? I have
 lots of documentation around Cassandra, but a simple worked example
 would be of great help in understanding Cassandra and working with it
 - e.g. setting up Cassandra on a single node, creating a simple data
 store and accessing it, with a step-by-step guide and sample code.

 Thanks very much and best regards
 Himanshu

 --
 Even IMPOSSIBLE says I M POSSIBLE



 --
 Petr Odut [petr.o...@gmail.com]