Getting all schema in 1.2.0-beta-1

2012-11-03 Thread Edward Capriolo
Using 1.2.0-beta1, I am noticing that there is no longer a single way
to get all of the schema. It seems like non-compact-storage tables can
be seen with show schema, but other tables are not visible. Is this by
design, a bug, or operator error?

http://pastebin.com/PdSDsdTz
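For concreteness, a hedged sketch of the two views being compared here
(the keyspace name is hypothetical):

    -- in cassandra-cli:
    [default@unknown] show schema;

    -- in cqlsh:
    cqlsh> DESCRIBE KEYSPACES;
    cqlsh> DESCRIBE KEYSPACE my_ks;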


Re: Insert via CQL

2012-11-03 Thread Sylvain Lebresne
On Sat, Nov 3, 2012 at 1:40 AM, Eric Evans eev...@acunu.com wrote:
 On Fri, Nov 2, 2012 at 8:09 PM, Vivek Mishra mishra.v...@gmail.com wrote:
 any idea, how to insert into a column family for a column of type blob via
 cql query?

 Yes, most of them involve binary data that is hex-encoded ascii. :)

Unless you are using prepared statements, in which case you just send
your blob as binary. But for non-prepared queries you do indeed have to
hex-encode (meaning that if you do use blobs, especially largish ones,
I highly recommend using prepared statements).
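
For the non-prepared case, a minimal sketch in CQL 3 syntax (table and
column names are hypothetical; blob literals are written as 0x-prefixed
hex):

    CREATE TABLE images (id int PRIMARY KEY, data blob);
    INSERT INTO images (id, data) VALUES (1, 0xcafebabe);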

--
Sylvain


Re: repair, compaction, and tombstone rows

2012-11-03 Thread Sylvain Lebresne
On Fri, Nov 2, 2012 at 10:46 AM, horschi hors...@gmail.com wrote:
 might I ask why repair cannot simply ignore anything that is older than
 gc-grace? (like Aaron proposed)

Well, actually the merkle tree computation could probably ignore
gcable tombstones without much problem, which might not be such a bad
idea and would probably solve much of your problem. However, when we
stream the ranges that need to be repaired, we stream sub-parts of the
sstables without deserializing them, so we can't exclude the gcable
tombstones in that phase (that is, it's a technical reason, but
deserializing the data would be inefficient). Meaning that we can't
guarantee that you won't ever stream gcable tombstones.

But excluding gcable tombstones from the merkle-tree computation is a
good idea. Would you mind opening a JIRA ticket?

--
Sylvain


Re: repair, compaction, and tombstone rows

2012-11-03 Thread horschi
Sure, created CASSANDRA-4905. I understand that these tombstones will
still be streamed, though. That's fine with me.

Do you mind if I ask where you stand on making...
- ... ExpiringColumn not create any tombstones? Imo this could safely be
done if the column's TTL is >= gc_grace: that way it is ensured that
repair has run and any previously un-TTLed columns were overwritten (see
the sketch after this list).
- ... ExpiringColumn not add the local timestamp to the digest?
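
A hedged illustration of that condition (table, column, and values are
hypothetical): with gc_grace_seconds at its default of 864000 seconds
(10 days), a TTL of at least that value leaves a full gc-grace window
for repair to run before the column expires:

    CREATE TABLE events (id int PRIMARY KEY, payload text)
        WITH gc_grace_seconds = 864000;
    -- TTL >= gc_grace_seconds, so under the proposal no tombstone would be needed
    INSERT INTO events (id, payload) VALUES (1, 'hello') USING TTL 864000;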

Cheers,
Christian


Re: min_compaction_threshold/max_compaction_threshold

2012-11-03 Thread Vivek Mishra
Here is the column family script:

CREATE COLUMNFAMILY CompositeUserDataTypes (
    tweetBody text, tweetDate timestamp, STUDENT_ID bigint,
    SEMESTER text, STUDENT_NAME text, CGPA int,
    DIGITAL_SIGNATURE blob, ENROLMENT_DATE timestamp, HEIGHT double,
    MONTHLY_FEE double, JOINING_DATE_TIME timestamp, CALENDAR timestamp,
    IS_EXCEPTIONAL boolean, ENROLMENT_TIME timestamp, YEARS_SPENT int,
    AGE int, SQL_DATE timestamp, ROLL_NUMBER bigint,
    UNIQUE_ID bigint, BIG_INT int, SQL_TIME timestamp,
    PERCENTAGE float, SQL_TIMESTAMP timestamp,
    PRIMARY KEY (STUDENT_ID, UNIQUE_ID, STUDENT_NAME, IS_EXCEPTIONAL, AGE,
        SEMESTER, DIGITAL_SIGNATURE, CGPA, PERCENTAGE, HEIGHT,
        ENROLMENT_DATE, ENROLMENT_TIME, JOINING_DATE_TIME, YEARS_SPENT,
        ROLL_NUMBER, MONTHLY_FEE, SQL_DATE, SQL_TIMESTAMP, SQL_TIME,
        BIG_INT, CALENDAR)
) WITH replicate_on_write=true
  AND max_compaction_threshold=64
  AND min_compaction_threshold=16
  AND comment='User Column Family';
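
For comparison, a hedged sketch of the form the CQL 3 grammar in 1.2+
accepts, where the thresholds move inside the compaction map (simplified
hypothetical table; not verified against 1.1.x):

    CREATE TABLE SketchTable (id bigint PRIMARY KEY, body text)
    WITH compaction = { 'class' : 'SizeTieredCompactionStrategy',
                        'min_threshold' : 16,
                        'max_threshold' : 64 }
    AND comment = 'User Column Family';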


-Vivek

On Sun, Nov 4, 2012 at 10:17 AM, Vivek Mishra mishra.v...@gmail.com wrote:

 Hi,
 I am trying to create a column family with max_compaction_threshold
 (http://www.datastax.com/docs/1.1/configuration/storage_configuration#max-compaction-threshold)
 and min_compaction_threshold as storage properties, but somehow I am
 getting:


 Bad Request: max_compaction_threshold is not a valid keyword argument for
 CREATE TABLE
 Bad Request: min_compaction_threshold is not a valid keyword argument for
 CREATE TABLE


 As per
 http://www.datastax.com/docs/1.1/references/cql/cql_storage_options#cql-3-column-family-storage-parameters

 It should work for Cassandra 1.1.5 and Cassandra 1.1.6. Any idea?



 -Vivek