Re: Can not connect with cqlsh to something different than localhost

2014-12-08 Thread Vivek Mishra
Two things:
1. Try telnet 192.168.111.136 9042 and see if it connects.
2. Check the hostname in /etc/hosts and make sure it is mapped correctly.

-Vivek

On Mon, Dec 8, 2014 at 4:19 PM, Richard Snowden richard.t.snow...@gmail.com
 wrote:

 This did not work either. I changed /etc/cassandra.yaml and restarted 
 Cassandra (I even restarted the machine to make 100% sure).

 What I tried:

 1) listen_address: localhost
- connection OK (but of course I can't connect from outside the VM to 
 localhost)

 2) Set listen_interface: eth0
- connection refused

 3) Set listen_address: 192.168.111.136
- connection refused


 What to do?


  Try:
  $ netstat -lnt
  and see which interface port 9042 is listening on. You will likely need to
  update cassandra.yaml to change the interface. By default, Cassandra is
  listening on localhost so your local cqlsh session works.
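
  For illustration, a minimal sketch of the cassandra.yaml settings that
  usually matter here (assuming Cassandra 2.1 defaults; listen_address is
  for inter-node traffic, while cqlsh and drivers connect to the address
  bound by rpc_address):

  # cassandra.yaml (sketch, not a complete config)
  listen_address: 192.168.111.136    # inter-node communication
  rpc_address: 192.168.111.136       # client connections (cqlsh, drivers)
  native_transport_port: 9042        # CQL native protocol port

  After changing either address, restart Cassandra and re-run netstat -lnt
  to confirm 9042 is bound to the expected interface.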

  On Sun, 7 Dec 2014 23:44 Richard Snowden richard.t.snow...@gmail.com
  wrote:

   I am running Cassandra 2.1.2 in an Ubuntu VM.
  
   cqlsh or cqlsh localhost works fine.
  
   But I can not connect from outside the VM (firewall, etc. disabled).
  
   Even when I do cqlsh 192.168.111.136 in my VM I get connection refused.
   This is strange because when I check my network config I can see that
   192.168.111.136 is my IP:
  
   root@ubuntu:~# ifconfig
  
   eth0  Link encap:Ethernet  HWaddr 00:0c:29:02:e0:de
 inet addr:192.168.111.136  Bcast:192.168.111.255
   Mask:255.255.255.0
 inet6 addr: fe80::20c:29ff:fe02:e0de/64 Scope:Link
 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 RX packets:16042 errors:0 dropped:0 overruns:0 frame:0
 TX packets:8638 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:21307125 (21.3 MB)  TX bytes:709471 (709.4 KB)
  
   loLink encap:Local Loopback
 inet addr:127.0.0.1  Mask:255.0.0.0
 inet6 addr: ::1/128 Scope:Host
 UP LOOPBACK RUNNING  MTU:65536  Metric:1
 RX packets:550 errors:0 dropped:0 overruns:0 frame:0
 TX packets:550 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:0
 RX bytes:148053 (148.0 KB)  TX bytes:148053 (148.0 KB)
  
  
   root@ubuntu:~# cqlsh 192.168.111.136 9042
   Connection error: ('Unable to connect to any servers', {'192.168.111.136':
   error(111, Tried connecting to [('192.168.111.136', 9042)]. Last error:
   Connection refused)})
  
  
   What to do?
  




Re: node keeps dying

2014-09-25 Thread Vivek Mishra
Increase the heap size for Cassandra and try again.
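
For illustration, a minimal sketch (assuming the packaged
conf/cassandra-env.sh; the values are placeholders to tune, not
recommendations, and both variables should be set together):

# conf/cassandra-env.sh -- override the auto-calculated heap
MAX_HEAP_SIZE="4G"     # total JVM heap for Cassandra
HEAP_NEWSIZE="400M"    # young-generation size, commonly ~100MB per core
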
On 25/09/2014 3:02 am, Prem Yadav ipremya...@gmail.com wrote:

 BTW, thanks Michael.
 I am surprised I didn't search for Cassandra OOM before.
 I got some good links that discuss that. Will try to optimize and see how
it goes.


 On Wed, Sep 24, 2014 at 10:27 PM, Prem Yadav ipremya...@gmail.com wrote:

 Well, it's not the Linux OOM killer. The system is running with all
default settings.

 Total memory 7GB- Cassandra gets assigned 2GB
 2 core processors.
 Two rings with 3 nodes in each ring.

 On Wed, Sep 24, 2014 at 9:53 PM, Michael Shuler mich...@pbandjelly.org
wrote:

 On 09/24/2014 11:32 AM, Prem Yadav wrote:

 this is an issue that has happened a few times. We are using DSE 4.0


 I believe this is Apache Cassandra 2.0.5, which is better info for this
list.

 One of the Cassandra nodes is detected as dead by the opscenter even
 though I can see the process is up.

 the logs show heap space error:

   INFO [RMI TCP Connection(18270)-172.31.49.189] 2014-09-24
08:31:05,340
 StorageService.java (line 2538) Starting repair command #30766,
 repairing 1 ranges for keyspace keyspace
 ERROR [BatchlogTasks:1] 2014-09-24 08:48:54,780 CassandraDaemon.java
 (line 196) Exception in thread Thread[BatchlogTasks:1,5,main]
 java.lang.OutOfMemoryError: Java heap space
  at java.util.ArrayList.<init>(Unknown Source)


 OOM.

 System environment and configuration modification details might be
helpful for others to give you advice. Searching for cassandra oom gave
me a few good links to read, and knowing some details about your nodes
might be really helpful. Additionally, CASSANDRA-7507 [0] suggests that an
OOM leaving the process running in an unclean state is not desired, and the
process should be killed.

 Several of the search links provide details on how to capture and dig
around a heap dump to aid in troubleshooting.

 [0] https://issues.apache.org/jira/browse/CASSANDRA-7507
 --
 Kind regards,
 Michael





Re: CQL performance inserting multiple cluster keys under same partition key

2014-08-26 Thread Vivek Mishra
AFAIK, it is not. With CAS it should be.
On 26/08/2014 10:21 pm, Jaydeep Chovatia chovatia.jayd...@gmail.com
wrote:

 Hi,

 I have question on inserting multiple cluster keys under same partition
 key.

 Ex:

 CREATE TABLE Employee (
   deptId int,
   empId int,
   name   varchar,
   address varchar,
   salary int,
   PRIMARY KEY(deptId, empId)
 );

 BEGIN UNLOGGED BATCH
   INSERT INTO Employee (deptId, empId, name, address, salary) VALUES (1,
 10, 'testNameA', 'testAddressA', 2);
   INSERT INTO Employee (deptId, empId, name, address, salary) VALUES (1,
 20, 'testNameB', 'testAddressB', 3);
 APPLY BATCH;

 Here we are inserting two cluster keys (10 and 20) under same partition
 key (1).
 Q1) Is this batch transaction atomic and isolated? If yes, then is there
 any performance overhead with this syntax?
 Q2) Can this CQL syntax be considered equivalent to Thrift's
 batch_mutate?

 -jaydeep



Re: Thrift vs CQL3 performance

2014-07-29 Thread Vivek Mishra
Check out the differentiators between Thrift and CQL3. The storage engine is
the same, but they differ in metadata. Prior to 2.0 it was a mix of both. I
would still suggest the same. Check out earlier threads comparing Thrift and
CQL3 around map collection support, bulk loading, and dynamic column support.
Ease of use is just one simple aspect.
On 28/07/2014 9:21 pm, bi kro hlqvu...@gmail.com wrote:

  Hi everyone,

 I'm a newcomer to Cassandra, so I would like to know about the performance
 of Thrift (Hector) vs CQL3, especially the speed (Thrift is based on
 RPC, CQL3 on a binary protocol).

 Currently I'm using Cassandra 1.2; which version of the CQL3
 DataStax Java Driver is stable for it?

 Thanks very much



Re: Caffinitas Mapper - Java object mapper for Apache Cassandra

2014-07-25 Thread Vivek Mishra
How is it different from Kundera?
On 20/07/2014 9:03 pm, Robert Stupp sn...@snazy.de wrote:

 Hi all,

 I've just released the first beta version of Caffinitas Mapper.

 Caffinitas Mapper is an advanced Java object mapper for Apache Cassandra
 NoSQL database. It offers an annotation based declaration model with a wide
 range of built-in features like JPA style inheritance with table-per-class
 and single-table model. Composites can be mapped using either Apache
 Cassandra’s new UserType or as distinct columns in a table. Cassandra
 collections, user type and tuple type are directly supported - collections
 can be loaded lazily. Entity instances can be automatically denormalized in
 other entity instances. CREATE TABLE/TYPE and ALTER TABLE/TYPE CQL DDL
 statements can be generated programmatically. Custom types can be
 integrated using a Converter API. All Cassandra consistency levels, serial
 consistency and batch statements are supported.

 All Apache Cassandra versions 1.2, 2.0 and 2.1 as well as all DataStax
 Community and Enterprise editions based on these Cassandra versions are
 supported. Java 6 is required during runtime.

 Support for legacy, Thrift-style models is possible with Caffinitas
 Mapper since it supports CompositeType and DynamicCompositeType out of the
 box. A special map-style-entity type has been especially designed to access
 schema-less data models.

 Caffinitas Mapper is open source and licensed using the Apache License,
 Version 2.0.



  Website & Documentation: http://caffinitas.org/
 API-Docs: http://caffinitas.org/mapper/apidocs/
 Source Repository: https://bitbucket.org/caffinitas/mapper/
 Issues: https://caffinitas.atlassian.net/
 Mailing List: https://groups.google.com/d/forum/caffinitas-mapper




Re: How to prevent writing to a Keyspace?

2014-07-21 Thread Vivek Mishra
Create a different user and assign roles and privileges. Create a user like
guest and grant SELECT only to that user. That way the user cannot modify
data in a specific keyspace or column family.

http://www.datastax.com/documentation/cql/3.0/cql/cql_reference/grant_r.html
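
For illustration, a minimal sketch (assuming authentication and
authorization are enabled in cassandra.yaml, e.g. PasswordAuthenticator and
CassandraAuthorizer; the keyspace name is hypothetical):

CREATE USER guest WITH PASSWORD 'guest' NOSUPERUSER;
GRANT SELECT ON KEYSPACE my_keyspace TO guest;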

-Vivek


On Mon, Jul 21, 2014 at 7:57 AM, Lu, Boying boying...@emc.com wrote:

 Thanks a lot :)



 But I think authorization and authentication do little help here.



 Once we allow a user to read the keyspace, how can we prevent him from
 writing to the DB without Cassandra's help?



 Is there any way to make a keyspace ‘read-only’ in Cassandra? E.g. by
 setting some specific strategy?



 Boying



 From: Vivek Mishra [mailto:mishra.v...@gmail.com]
 Sent: 17 July 2014 18:35
 To: user@cassandra.apache.org
 Subject: Re: How to prevent writing to a Keyspace?



 Think about managing it via authorization and authentication support



 On Thu, Jul 17, 2014 at 4:00 PM, Lu, Boying boying...@emc.com wrote:

 Hi, All,



 I need to make a Cassandra keyspace to be read-only.

 Does anyone know how to do that?



 Thanks



 Boying







Re: How to prevent writing to a Keyspace?

2014-07-17 Thread Vivek Mishra
Think about managing it via authorization and authentication support


On Thu, Jul 17, 2014 at 4:00 PM, Lu, Boying boying...@emc.com wrote:

 Hi, All,



 I need to make a Cassandra keyspace to be read-only.

 Does anyone know how to do that?



 Thanks



 Boying





Fwd: Fw: Webinar - NoSQL Landscape and a Solution to Polyglot Persistence

2014-05-11 Thread Vivek Mishra
Check out this webinar for:

1) NoSQL landscape
2) Building a Kundera-powered app
3) Polyglot persistence!

-Vivek

  On Thursday, May 8, 2014 5:52 PM, Vivek Mishra vivek.mis...@impetus.co.in
wrote:


 From: Pankaj Bagzai
Sent: Wednesday, April 30, 2014 8:50 PM
To: df-all; Account Management; Asheesh Mangla; Gerard Das; Larry
Pearson; Anand Raman; Anand Venugopal; Presales-Support; Mike Harden; Ray
Cade
Subject: Webinar - NoSQL Landscape and a Solution to Polyglot Persistence

Please share with your network.

Best Regards,
Pankaj Bagzai





  Webinar: NoSQL Landscape and a Solution to Polyglot Persistence

May 9, 2014 (9:30 am PT / 12:30 pm ET)
Duration: 45 mins
http://www.impetus.com/webinar?eventid=78utm_source=Invite1utm_medium=Emailutm_campaign=PolyglotwebinarMay2014t=1


Hi Pankaj,

Is your organization planning to migrate to / acquire a NoSQL technology
but struggling to do so?

Does your team need to invest in evaluating many NoSQL options?

Polyglot use of NoSQL with / without RDBMS can further complicate the NoSQL
adoption process.

Before making a commitment, it is important to consider the business
opportunity and the technology need that various NoSQL databases can
support. Technology selection is often governed by the ease of working and
APIs offered.

Join Impetus experts where they will share a solution to these challenges
based on the experience from creating a widely adopted polyglot
client/object-mapper for NoSQL datastores and also working through the
NoSQL technology landscape for several customers.

During this webinar you will learn about:
  • When and why you should consider NoSQL
  • Considerations when migrating to NoSQLs or a combination with RDBMS
  • NoSQL options and APIs available
  • A fast and low cost solution to Polyglot Persistence



  Register Here:
http://www.impetus.com/webinar?eventid=78utm_source=Invite1utm_medium=Emailutm_campaign=PolyglotwebinarMay2014t=1

   Speakers:

Vivek Mishra
Lead Engineer, Big Data R&D
(Impetus Technologies)

Chhavi Gangwal
Lead Engineer, Big Data R&D
(Impetus Technologies)

Larry Pearson
VP of Marketing
(Impetus Technologies)
 Related webcasts:

  • Leveraging NoSQL to Implement Real-time Data Architectures
    http://www.impetus.com/webinar?eventid=72utm_source=Invite1utm_medium=Emailutm_campaign=PolyglotwebinarMay2014t=1
  • Resolving the Big Data ROI Dilemma
    http://www.impetus.com/webinar?eventid=71utm_source=Invite1utm_medium=Emailutm_campaign=PolyglotwebinarMay2014t=1
  • Real-time Predictive Analytics for Manufacturing
    http://www.impetus.com/webinar?eventid=70utm_source=Invite1utm_medium=Emailutm_campaign=PolyglotwebinarMay2014t=1










Impetus Technologies, Inc. - 720 University Avenue, Suite 130, Los Gatos,
CA 95032, USA


--






NOTE: This message may contain information that is confidential,
proprietary, privileged or otherwise protected by law. The message is
intended solely for the named addressee. If received in error, please
destroy and notify the sender. Any use of this email is prohibited when
received in error. Impetus does not represent, warrant and/or guarantee,
that the integrity of this communication has been maintained nor that the
communication is free of errors, virus, interception or interference.


Fwd: {kundera-discuss} Kundera-2.11.1-Patch Released

2014-05-01 Thread Vivek Mishra
-- Forwarded message --
From: Chhavi Gangwal chhavigang...@gmail.com
Date: Thu, May 1, 2014 at 1:32 PM
Subject: {kundera-discuss} Kundera-2.11.1-Patch Released
To: kundera-disc...@googlegroups.com


Hi All,

We are happy to announce the Kundera-2.11.1 patch release.

Kundera is a JPA 2.0 compliant, object-datastore mapping library for NoSQL
datastores. The idea behind Kundera is to make working with NoSQL databases
drop-dead simple and fun. It currently supports Cassandra, HBase, MongoDB,
Redis, OracleNoSQL, Neo4j, ElasticSearch, CouchDB and relational databases.
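
For illustration, a minimal sketch of a Kundera-backed entity using the
standard JPA 2.0 API (the persistence unit "cassandra_pu" and its
Kundera-specific persistence.xml configuration are assumptions, not shown):

import javax.persistence.*;

@Entity
@Table(name = "users")
public class User {
    @Id
    @Column(name = "user_id")
    private String userId;

    @Column(name = "first_name")
    private String firstName;

    // getters/setters omitted for brevity

    public static void main(String[] args) {
        // Bootstrap and persist through the plain JPA API.
        EntityManagerFactory emf =
                Persistence.createEntityManagerFactory("cassandra_pu");
        EntityManager em = emf.createEntityManager();
        User u = new User();
        u.userId = "vivek";
        u.firstName = "Vivek";
        em.persist(u);
        em.close();
        emf.close();
    }
}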


Major Changes in 2.11.1:
==
1)  Support added for Cassandra 2.x versions.
2)  Support for composite partition keys in Kundera-Cassandra.

Github Bug Fixes:
=
https://github.com/impetus-opensource/Kundera/issues/583
https://github.com/impetus-opensource/Kundera/issues/496
https://github.com/impetus-opensource/Kundera/issues/582
https://github.com/impetus-opensource/Kundera/issues/500
https://github.com/impetus-opensource/Kundera/issues/570
https://github.com/impetus-opensource/Kundera/issues/537
https://github.com/impetus-opensource/Kundera/issues/568
https://github.com/impetus-opensource/Kundera/issues/536
https://github.com/impetus-opensource/Kundera/issues/567
https://github.com/impetus-opensource/Kundera/issues/226
https://github.com/impetus-opensource/Kundera/issues/566
https://github.com/impetus-opensource/Kundera/issues/530
https://github.com/impetus-opensource/Kundera/issues/563
https://github.com/impetus-opensource/Kundera/issues/519
https://github.com/impetus-opensource/Kundera/issues/561
https://github.com/impetus-opensource/Kundera/issues/482
https://github.com/impetus-opensource/Kundera/issues/554
https://github.com/impetus-opensource/Kundera/issues/512
https://github.com/impetus-opensource/Kundera/issues/553
https://github.com/impetus-opensource/Kundera/issues/510
https://github.com/impetus-opensource/Kundera/issues/552
https://github.com/impetus-opensource/Kundera/issues/506
https://github.com/impetus-opensource/Kundera/issues/550
https://github.com/impetus-opensource/Kundera/issues/385
https://github.com/impetus-opensource/Kundera/issues/541
https://github.com/impetus-opensource/Kundera/issues/501
https://github.com/impetus-opensource/Kundera/issues/490
https://github.com/impetus-opensource/Kundera/issues/483
https://github.com/impetus-opensource/Kundera/issues/260


How to Download:

To download, use or contribute to Kundera, visit:
http://github.com/impetus-opensource/Kundera

Latest release of Kundera's tag is 2.11.1, whose maven libraries are now
available at:
https://oss.sonatype.org/content/repositories/releases/com/impetus

The 2.11.1 release of Kundera is compatible with Cassandra 2.x and includes
JDK 1.7 as one of its prerequisites.

The older versions of Cassandra (1.x) can be used with archived versions of
Kundera and its current release branch, Kundera-Cassandra-1.x, hosted at:
https://github.com/impetus-opensource/Kundera/tree/Kundera-Cassandra1.x
whose maven libraries are also available at:
https://oss.sonatype.org/content/repositories/releases/com/impetus


Sample code and examples for using Kundera can be found here:
https://github.com/impetus-opensource/Kundera/tree/trunk/src/kundera-tests

Troubleshooting:
===
In case you are using the 2.11.1 version of Kundera with Cassandra, make
sure you have JDK 1.7 installed.

Please share your feedback with us by filling out a simple survey:
http://www.surveymonkey.com/s/BMB9PWG

Thank you all for your contributions and using Kundera!

Regards,
Kundera Team

-- 
You received this message because you are subscribed to the Google Groups
kundera-discuss group.
To unsubscribe from this group and stop receiving emails from it, send an
email to kundera-discuss+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


: Read a negative frame size (-2113929216)!

2014-04-25 Thread Vivek Mishra
This is what I am getting with Cassandra 2.0.7 with Thrift.


Caused by: org.apache.thrift.transport.TTransportException: Read a negative
frame size (-2113929216)!
at
org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:133)
at
org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at
org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
at
org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
at
org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)

Any pointer/suggestions?

-Vivek


Re: : Read a negative frame size (-2113929216)!

2014-04-25 Thread Vivek Mishra
It's a simple CQL3 query to create a keyspace.

-Vivek


On Sat, Apr 26, 2014 at 1:28 AM, Chris Lohfink clohf...@blackbirdit.com wrote:

 Did you send an enormous write or batch write and it wrapped?  Or is your
 client trying to use non-framed transport?

 Chris

 On Apr 25, 2014, at 2:50 PM, Vivek Mishra mishra.v...@gmail.com wrote:

  This is what i am getting with Cassandra 2.0.7 with Thrift.
 
 
  Caused by: org.apache.thrift.transport.TTransportException: Read a
 negative frame size (-2113929216)!
at
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:133)
at
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at
 org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
at
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
at
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
at
 org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
 
  Any pointer/suggestions?
 
  -Vivek




Re: : Read a negative frame size (-2113929216)!

2014-04-25 Thread Vivek Mishra
datastax java driver 2.0.1




On Sat, Apr 26, 2014 at 1:35 AM, Chris Lohfink clohf...@blackbirdit.com wrote:

 what client are you using?

 On Apr 25, 2014, at 3:01 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 It's a simple cql3 query to create keyspace.

 -Vivek


 On Sat, Apr 26, 2014 at 1:28 AM, Chris Lohfink 
 clohf...@blackbirdit.comwrote:

 Did you send an enormous write or batch write and it wrapped?  Or is your
 client trying to use non-framed transport?

 Chris

 On Apr 25, 2014, at 2:50 PM, Vivek Mishra mishra.v...@gmail.com wrote:

  This is what i am getting with Cassandra 2.0.7 with Thrift.
 
 
  Caused by: org.apache.thrift.transport.TTransportException: Read a
 negative frame size (-2113929216)!
at
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:133)
at
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at
 org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
at
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
at
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
at
 org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
 
  Any pointer/suggestions?
 
  -Vivek






Re: : Read a negative frame size (-2113929216)!

2014-04-25 Thread Vivek Mishra
// Thrift client setup as posted (comments added). Two assumptions for this
// to work: `port` must be the Thrift rpc_port (9160 by default), not the
// native-protocol port 9042, and transport.open() must be called before
// executing queries; queryBuilder holds the CQL text.
TSocket socket = new TSocket(host, Integer.parseInt(port));
TTransport transport = new TFramedTransport(socket);
TProtocol protocol = new TBinaryProtocol(transport, true, true);
cassandra_client = new Cassandra.Client(protocol);
transport.open();

cassandra_client.execute_cql3_query(
    ByteBuffer.wrap(queryBuilder.toString().getBytes(Constants.CHARSET_UTF8)),
    Compression.NONE,
    ConsistencyLevel.ONE);



On Sat, Apr 26, 2014 at 5:19 AM, Alex Popescu al...@datastax.com wrote:

 Can you share the relevant code snippet that leads to this exception?


 On Fri, Apr 25, 2014 at 4:47 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 datastax java driver 2.0.1




 On Sat, Apr 26, 2014 at 1:35 AM, Chris Lohfink 
 clohf...@blackbirdit.comwrote:

 what client are you using?

 On Apr 25, 2014, at 3:01 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 It's a simple cql3 query to create keyspace.

 -Vivek


 On Sat, Apr 26, 2014 at 1:28 AM, Chris Lohfink clohf...@blackbirdit.com
  wrote:

 Did you send an enormous write or batch write and it wrapped?  Or is
 your client trying to use non-framed transport?

 Chris

 On Apr 25, 2014, at 2:50 PM, Vivek Mishra mishra.v...@gmail.com
 wrote:

  This is what i am getting with Cassandra 2.0.7 with Thrift.
 
 
  Caused by: org.apache.thrift.transport.TTransportException: Read a
 negative frame size (-2113929216)!
at
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:133)
at
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at
 org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
at
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
at
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
at
 org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
 
  Any pointer/suggestions?
 
  -Vivek







 --

 :- a)


 Alex Popescu
 Sen. Product Manager @ DataStax
 @al3xandru



Re: : Read a negative frame size (-2113929216)!

2014-04-25 Thread Vivek Mishra
Yes, I know. But I am not sure why it is failing; simply having the Thrift
jar and cassandra-thrift in the classpath doesn't fail. But as soon as I get
the datastax one in the classpath, it starts failing. The point is, even if
I have both in the classpath, switching between Thrift and Datastax should work.

-Vivek


On Sat, Apr 26, 2014 at 5:36 AM, Benedict Elliott Smith 
belliottsm...@datastax.com wrote:

 Vivek,

 The error you are seeing is a thrift error, but you say you are using the
 Java driver which does not operate over thrift: are you perhaps trying to
 connect the datastax driver to the thrift protocol port? The two protocols
 are not compatible, you must connect to the native_transport_port (by
 default 9042)
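
 For illustration, a minimal sketch (DataStax java-driver 2.0 API; the
 contact point is an assumption):

 import com.datastax.driver.core.Cluster;
 import com.datastax.driver.core.Session;

 public class NativePortCheck {
     public static void main(String[] args) {
         // Point the native-protocol driver at 9042 (native_transport_port),
         // not the Thrift rpc_port 9160.
         Cluster cluster = Cluster.builder()
                 .addContactPoint("127.0.0.1")
                 .withPort(9042)
                 .build();
         Session session = cluster.connect();
         System.out.println("Connected over the native protocol");
         cluster.close();
     }
 }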


 On 26 April 2014 00:47, Vivek Mishra mishra.v...@gmail.com wrote:

 datastax java driver 2.0.1




 On Sat, Apr 26, 2014 at 1:35 AM, Chris Lohfink 
  clohf...@blackbirdit.com wrote:

 what client are you using?

 On Apr 25, 2014, at 3:01 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 It's a simple cql3 query to create keyspace.

 -Vivek


 On Sat, Apr 26, 2014 at 1:28 AM, Chris Lohfink clohf...@blackbirdit.com
  wrote:

 Did you send an enormous write or batch write and it wrapped?  Or is
 your client trying to use non-framed transport?

 Chris

 On Apr 25, 2014, at 2:50 PM, Vivek Mishra mishra.v...@gmail.com
 wrote:

  This is what i am getting with Cassandra 2.0.7 with Thrift.
 
 
  Caused by: org.apache.thrift.transport.TTransportException: Read a
 negative frame size (-2113929216)!
at
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:133)
at
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at
 org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
at
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
at
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
at
 org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
 
  Any pointer/suggestions?
 
  -Vivek








Re: : Read a negative frame size (-2113929216)!

2014-04-25 Thread Vivek Mishra
Just to add, it works fine with Cassandra 1.x and Datastax 1.x

-Vivek


On Sat, Apr 26, 2014 at 10:02 AM, Vivek Mishra mishra.v...@gmail.com wrote:

 Yes i know. But i am not sure why is it failing, simply having Thrift jar
 and cassandra-thrift in classpath doesn't fails. But as soon as i get
 datastax one in classpath, it started failing. Point is even if i am having
 both in classpath, switching b/w thrift and Datastax should work.

 -Vivek


 On Sat, Apr 26, 2014 at 5:36 AM, Benedict Elliott Smith 
 belliottsm...@datastax.com wrote:

 Vivek,

 The error you are seeing is a thrift error, but you say you are using the
 Java driver which does not operate over thrift: are you perhaps trying to
 connect the datastax driver to the thrift protocol port? The two protocols
 are not compatible, you must connect to the native_transport_port (by
 default 9042)


 On 26 April 2014 00:47, Vivek Mishra mishra.v...@gmail.com wrote:

 datastax java driver 2.0.1




 On Sat, Apr 26, 2014 at 1:35 AM, Chris Lohfink clohf...@blackbirdit.com
  wrote:

 what client are you using?

 On Apr 25, 2014, at 3:01 PM, Vivek Mishra mishra.v...@gmail.com
 wrote:

 It's a simple cql3 query to create keyspace.

 -Vivek


 On Sat, Apr 26, 2014 at 1:28 AM, Chris Lohfink 
 clohf...@blackbirdit.com wrote:

 Did you send an enormous write or batch write and it wrapped?  Or is
 your client trying to use non-framed transport?

 Chris

 On Apr 25, 2014, at 2:50 PM, Vivek Mishra mishra.v...@gmail.com
 wrote:

  This is what i am getting with Cassandra 2.0.7 with Thrift.
 
 
  Caused by: org.apache.thrift.transport.TTransportException: Read a
 negative frame size (-2113929216)!
at
 org.apache.thrift.transport.TFramedTransport.readFrame(TFramedTransport.java:133)
at
 org.apache.thrift.transport.TFramedTransport.read(TFramedTransport.java:101)
at
 org.apache.thrift.transport.TTransport.readAll(TTransport.java:84)
at
 org.apache.thrift.protocol.TBinaryProtocol.readAll(TBinaryProtocol.java:362)
at
 org.apache.thrift.protocol.TBinaryProtocol.readI32(TBinaryProtocol.java:284)
at
 org.apache.thrift.protocol.TBinaryProtocol.readMessageBegin(TBinaryProtocol.java:191)
at
 org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:69)
 
  Any pointer/suggestions?
 
  -Vivek









Re: Unable to complete request: one or more nodes were unavailable.

2014-04-16 Thread Vivek Mishra
Hi,
Mine is a simple case. Running on single node only. Keyspace is:

create keyspace twitter with replication = {'class':'SimpleStrategy',
'replication_factor' : 3}

-Vivek


On Wed, Apr 16, 2014 at 1:27 AM, Tupshin Harper tups...@tupshin.com wrote:

 Please provide your keyspace definition,  and the output of nodetool
 ring

 -Tupshin
 On Apr 15, 2014 3:52 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 Hi,
 I am trying Cassandra light weight transaction support with Cassandra
 2.0.4

 cqlsh:twitter create table user(user_id text primary key, namef text);
 cqlsh:twitter insert into user(user_id,namef) values('v','ff') if not
 exists;

 *Unable to complete request: one or more nodes were unavailable.*

 Any suggestions?

 -Vivek




Re: Unable to complete request: one or more nodes were unavailable.

2014-04-16 Thread Vivek Mishra
Thanks Mark. Does this mean that with RF=3, all 3 nodes must be up and
running for CAS updates?

-Vivek


On Wed, Apr 16, 2014 at 6:22 PM, Mark Reddy mark.re...@boxever.com wrote:

 create keyspace twitter with replication = {'class':'SimpleStrategy',
 'replication_factor' : 3}


 Your replication factor is your issue here, you have a single node and a
 RF=3. For a single node setup your RF should be 1. You can find more info
 about replication here:
 http://www.datastax.com/documentation/cassandra/2.0/cassandra/architecture/architectureDataDistributeReplication_c.html
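
 For illustration, a minimal sketch of the fix for this thread's keyspace
 (run in cqlsh; assumes the single-node setup described above):

 ALTER KEYSPACE twitter
   WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};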


 On Wed, Apr 16, 2014 at 1:44 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 Hi,
 Mine is a simple case. Running on single node only. Keyspace is:

 create keyspace twitter with replication = {'class':'SimpleStrategy',
 'replication_factor' : 3}

 -Vivek


 On Wed, Apr 16, 2014 at 1:27 AM, Tupshin Harper tups...@tupshin.com wrote:

 Please provide your keyspace definition,  and the output of nodetool
 ring

 -Tupshin
 On Apr 15, 2014 3:52 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 Hi,
 I am trying Cassandra light weight transaction support with Cassandra
 2.0.4

 cqlsh:twitter create table user(user_id text primary key, namef text);
 cqlsh:twitter insert into user(user_id,namef) values('v','ff') if not
 exists;

 *Unable to complete request: one or more nodes were unavailable.*

 Any suggestions?

 -Vivek






Re: Unable to complete request: one or more nodes were unavailable.

2014-04-16 Thread Vivek Mishra
Thanks Mark and Tuphsin.

So on a single node, if I set the consistency level to SERIAL and create a
keyspace with RF=1, would that work?


-Vivek


On Wed, Apr 16, 2014 at 6:32 PM, Mark Reddy mark.re...@boxever.com wrote:

 The Paxos protocol used for CAS operations will always use at least a
 consistency level effectively equivalent to QUORUM (called SERIAL) when
 writing, even if you explicitly specify a lower level, e.g. ANY or ONE.
 Setting consistency level to ALL will make the write execute on all
 replicas if the condition is met, but the comparison itself is executed
 against a QUORUM number of nodes. As a result, a write operation with ALL
 consistency level that fails to meet the specified check may not throw an
 Exception, even if some replica nodes are not accessible.



 On Wed, Apr 16, 2014 at 2:00 PM, Tupshin Harper tups...@tupshin.com wrote:

 No, but you do need a quorum of nodes.


 http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_config_consistency_c.html

 SERIAL
 A write must be written conditionally to the commit log and memory table
 on a quorum of replica nodes.

  Used to achieve linearizable consistency
  (http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_tunable_consistency_c.html#concept_ds_f4h_hwx_zj)
  for lightweight transactions by preventing unconditional updates.
  On Apr 16, 2014 5:56 AM, Vivek Mishra mishra.v...@gmail.com wrote:

 Thanks Mark. does this mean with RF=3, all 3 nodes must be up and
 running for CAS updates?

 -Vivek


 On Wed, Apr 16, 2014 at 6:22 PM, Mark Reddy mark.re...@boxever.com wrote:

 create keyspace twitter with replication = {'class':'SimpleStrategy',
 'replication_factor' : 3}


 Your replication factor is your issue here, you have a single node and
 a RF=3. For a single node setup your RF should be 1. You can find more info
 about replication here:
 http://www.datastax.com/documentation/cassandra/2.0/cassandra/architecture/architectureDataDistributeReplication_c.html


 On Wed, Apr 16, 2014 at 1:44 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 Hi,
 Mine is a simple case. Running on single node only. Keyspace is:

 create keyspace twitter with replication = {'class':'SimpleStrategy',
 'replication_factor' : 3}

 -Vivek


 On Wed, Apr 16, 2014 at 1:27 AM, Tupshin Harper 
  tups...@tupshin.com wrote:

 Please provide your keyspace definition,  and the output of nodetool
 ring

 -Tupshin
 On Apr 15, 2014 3:52 PM, Vivek Mishra mishra.v...@gmail.com
 wrote:

 Hi,
 I am trying Cassandra light weight transaction support with
 Cassandra 2.0.4

 cqlsh:twitter create table user(user_id text primary key, namef
 text);
 cqlsh:twitter insert into user(user_id,namef) values('v','ff') if
 not exists;

 *Unable to complete request: one or more nodes were unavailable.*

 Any suggestions?

 -Vivek








Re: Unable to complete request: one or more nodes were unavailable.

2014-04-16 Thread Vivek Mishra
Thanks, I think I got the point. CAS doesn't make much sense on a single node.

-Vivek


On Wed, Apr 16, 2014 at 6:37 PM, Tupshin Harper tups...@tupshin.com wrote:

 It will work for correctness, but will give you a very inaccurate view of
 performance.

 -Tupshin
 On Apr 16, 2014 6:05 AM, Vivek Mishra mishra.v...@gmail.com wrote:

 Thanks Mark and Tuphsin.

 So on single node, if i set consistency level to SERIAL and create a
 keyspace with RF=1? Would that work?


 -Vivek


 On Wed, Apr 16, 2014 at 6:32 PM, Mark Reddy mark.re...@boxever.com wrote:

 The Paxos protocol used for CAS operations will always use at least a
 consistency level effectively equivalent to QUORUM (called SERIAL) when
 writing, even if you explicitly specify a lower level, e.g. ANY or ONE.
 Setting consistency level to ALL will make the write execute on all
 replicas if the condition is met, but the comparison itself is executed
 against a QUORUM number of nodes. As a result, a write operation with ALL
 consistency level that fails to meet the specified check may not throw an
 Exception, even if some replica nodes are not accessible.



 On Wed, Apr 16, 2014 at 2:00 PM, Tupshin Harper tups...@tupshin.com wrote:

 No, but you do need a quorum of nodes.


 http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_config_consistency_c.html

 SERIAL
 A write must be written conditionally to the commit log and memory
 table on a quorum of replica nodes.

  Used to achieve linearizable consistency
  (http://www.datastax.com/documentation/cassandra/2.0/cassandra/dml/dml_tunable_consistency_c.html#concept_ds_f4h_hwx_zj)
  for lightweight transactions by preventing unconditional updates.
  On Apr 16, 2014 5:56 AM, Vivek Mishra mishra.v...@gmail.com wrote:

 Thanks Mark. does this mean with RF=3, all 3 nodes must be up and
 running for CAS updates?

 -Vivek


 On Wed, Apr 16, 2014 at 6:22 PM, Mark Reddy mark.re...@boxever.com wrote:

 create keyspace twitter with replication = {'class':'SimpleStrategy',
 'replication_factor' : 3}


 Your replication factor is your issue here, you have a single node
 and a RF=3. For a single node setup your RF should be 1. You can find 
 more
 info about replication here:
 http://www.datastax.com/documentation/cassandra/2.0/cassandra/architecture/architectureDataDistributeReplication_c.html


 On Wed, Apr 16, 2014 at 1:44 PM, Vivek Mishra 
  mishra.v...@gmail.com wrote:

 Hi,
 Mine is a simple case. Running on single node only. Keyspace is:

 create keyspace twitter with replication =
 {'class':'SimpleStrategy', 'replication_factor' : 3}

 -Vivek


 On Wed, Apr 16, 2014 at 1:27 AM, Tupshin Harper tups...@tupshin.com
  wrote:

 Please provide your keyspace definition,  and the output of
 nodetool ring

 -Tupshin
 On Apr 15, 2014 3:52 PM, Vivek Mishra mishra.v...@gmail.com
 wrote:

 Hi,
 I am trying Cassandra light weight transaction support with
 Cassandra 2.0.4

 cqlsh:twitter create table user(user_id text primary key, namef
 text);
 cqlsh:twitter insert into user(user_id,namef) values('v','ff') if
 not exists;

 *Unable to complete request: one or more nodes were unavailable.*

 Any suggestions?

 -Vivek









Unable to complete request: one or more nodes were unavailable.

2014-04-15 Thread Vivek Mishra
Hi,
I am trying Cassandra lightweight transaction support with Cassandra 2.0.4.

cqlsh:twitter create table user(user_id text primary key, namef text);
cqlsh:twitter insert into user(user_id,namef) values('v','ff') if not
exists;

*Unable to complete request: one or more nodes were unavailable.*

Any suggestions?

-Vivek


Re: Timeuuid inserted with now(), how to get the value back in Java client?

2014-04-01 Thread Vivek Mishra
You would get a UUID object from the Cassandra API. Then you can use
uuid.timestamp() to get the timestamp for it.
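
For illustration, a minimal sketch (the UUIDs utility class ships with the
DataStax java-driver; the surrounding usage is an assumption):

import com.datastax.driver.core.utils.UUIDs;
import java.util.UUID;

public class TimeUuidDemo {
    public static void main(String[] args) {
        UUID id = UUIDs.timeUUID();             // version-1, time-based UUID
        long millis = UUIDs.unixTimestamp(id);  // epoch milliseconds
        long ticks = id.timestamp();            // 100ns units since 1582-10-15
        System.out.println(id + " -> " + millis + " ms (" + ticks + " ticks)");
    }
}

Generating the timeuuid client-side like this means you already hold the PK
before the INSERT, which is exactly the workaround suggested below.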

-Vivek


On Tue, Apr 1, 2014 at 9:55 PM, Theo Hultberg t...@iconara.net wrote:

 No, there's no way. You should generate the TIMEUUID on the client side so
 that you have it.

 T#


 On Sat, Mar 29, 2014 at 1:01 AM, Andy Atj2 andya...@gmail.com wrote:

 I'm writing a Java client to a Cassandra db.

 One of the main primary keys is a timeuuid.

 I plan to do INSERTs using now() and have Cassandra generate the value of
 the timeuuid.

 After the INSERT, I need the Cassandra-generated timeuuid value. Is there
 an easy way to get it, without having to re-query for the record I just
 inserted, hoping to get only one record back? Remember, I don't have the PK.

 E.g., in every other DB there's a way to get the generated PK back. In SQL
 it's @@identity, in Oracle it's... etc.

 I know Cassandra is not an RDBMS. All I want is the value Cassandra just
 generated.

 Thanks,
 Andy





Re: {kundera-discuss} Kundera 2.11 released

2014-03-24 Thread Vivek Mishra
fyi.


On Mon, Mar 24, 2014 at 11:56 PM, Vivek Mishra
vivek.mis...@impetus.co.in wrote:

  Hi All,



 We are happy to announce the Kundera 2.11 release.



 Kundera is a JPA 2.0 compliant, object-datastore mapping library for NoSQL
 datastores. The idea behind Kundera is to make working with NoSQL databases
 drop-dead simple and fun. It currently supports Cassandra, HBase, MongoDB,
 Redis, OracleNoSQL, Neo4j, ElasticSearch, CouchDB and relational databases.



 Major Changes:

 ==

 1)  Support added for Cassandra datastax java driver.

 2)  Support added for IN clause with setParameter on collection
 objects.



 Github Bug Fixes

 =

 https://github.com/impetus-opensource/Kundera/issues/542

 https://github.com/impetus-opensource/Kundera/issues/538

 https://github.com/impetus-opensource/Kundera/issues/537

 https://github.com/impetus-opensource/Kundera/issues/536

 https://github.com/impetus-opensource/Kundera/issues/530

 https://github.com/impetus-opensource/Kundera/issues/520

 https://github.com/impetus-opensource/Kundera/issues/519

 https://github.com/impetus-opensource/Kundera/issues/512

 https://github.com/impetus-opensource/Kundera/issues/510

 https://github.com/impetus-opensource/Kundera/issues/506

 https://github.com/impetus-opensource/Kundera/issues/501

 https://github.com/impetus-opensource/Kundera/issues/500

 https://github.com/impetus-opensource/Kundera/issues/496

 https://github.com/impetus-opensource/Kundera/issues/483

 https://github.com/impetus-opensource/Kundera/issues/482

 https://github.com/impetus-opensource/Kundera/issues/385

 https://github.com/impetus-opensource/Kundera/issues/226

 https://github.com/impetus-opensource/Kundera/issues/151



 How to Download:

 To download, use or contribute to Kundera, visit:

 http://github.com/impetus-opensource/Kundera



 Latest released tag version is 2.11. Kundera maven libraries are now
 available at:
 https://oss.sonatype.org/content/repositories/releases/com/impetus



 Sample codes and examples for using Kundera can be found here:

 https://github.com/impetus-opensource/Kundera/tree/trunk/src/kundera-tests



 Survey/Feedback:

 http://www.surveymonkey.com/s/BMB9PWG



 Thank you all for your contributions and using Kundera!



 PS: Group artifactId has been changed with the 2.9.1 release onward. Please
 refer to
 https://github.com/impetus-opensource/Kundera/blob/trunk/src/README.md#note
 for the same.



 Regards,

 Kundera Team

 --






 NOTE: This message may contain information that is confidential,
 proprietary, privileged or otherwise protected by law. The message is
 intended solely for the named addressee. If received in error, please
 destroy and notify the sender. Any use of this email is prohibited when
 received in error. Impetus does not represent, warrant and/or guarantee,
 that the integrity of this communication has been maintained nor that the
 communication is free of errors, virus, interception or interference.

 --
 You received this message because you are subscribed to the Google Groups
 kundera-discuss group.
 To unsubscribe from this group and stop receiving emails from it, send an
 email to kundera-discuss+unsubscr...@googlegroups.com.
 For more options, visit https://groups.google.com/d/optout.



Re: Cassandra blob storage

2014-03-18 Thread Vivek Mishra
@Mohit
A bit confused by your reply. For what use cases do you find Cassandra
useful then?

-Vivek


On Tue, Mar 18, 2014 at 11:41 PM, Mohit Anchlia mohitanch...@gmail.com wrote:

 For large-volume big data scenarios we don't recommend using Cassandra as
 a blob store, simply because of the intensive IO involved during compaction,
 repair, etc. Cassandra is only well suited for metadata-type storage.
 However, if you are fairly low volume then it's a different story, but if
 you have low volume why use Cassandra :)


 On Tue, Mar 18, 2014 at 10:55 AM, Brian O'Neill b...@alumni.brown.edu wrote:

 You may want to look at:
 https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store
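
 For illustration, a minimal sketch of the chunking pattern behind such
 stores (the schema and chunk size are assumptions, not the Astyanax
 implementation):

 CREATE TABLE object_chunks (
   object_id text,
   chunk_id  int,
   data      blob,   -- e.g. ~1MB per chunk, reassembled client-side
   PRIMARY KEY (object_id, chunk_id)
 );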

 -brian

 ---

 Brian O'Neill

 Chief Technology Officer


 Health Market Science
 The Science of Better Results
 2700 Horizon Drive * King of Prussia, PA * 19406
 M: 215.588.6024 * @boneill42 http://www.twitter.com/boneill42

 healthmarketscience.com


 This information transmitted in this email message is for the intended
 recipient only and may contain confidential and/or privileged material. If
 you received this email in error and are not the intended recipient, or the
 person responsible to deliver it to the intended recipient, please contact
 the sender at the email above and delete this email and any attachments and
 destroy any copies thereof. Any review, retransmission, dissemination,
 copying or other use of, or taking any action in reliance upon, this
 information by persons or entities other than the intended recipient is
 strictly prohibited.




 From: prem yadav ipremya...@gmail.com
 Reply-To: user@cassandra.apache.org
 Date: Tuesday, March 18, 2014 at 1:41 PM
 To: user@cassandra.apache.org
 Subject: Cassandra blob storage

 Hi,
 I have been spending some time looking into whether large files(100mb)
 can be stores in Cassandra. As per Cassandra faq:


 Currently Cassandra isn't optimized specifically for large file or BLOB
 storage. However, files of around 64Mb and smaller can be easily stored in
 the database without splitting them into smaller chunks. This is primarily
 due to the fact that Cassandra's public API is based on Thrift, which
 offers no streaming abilities; any value written or fetched has to fit in
 to memory.

 Does the above statement still hold? Thrift supports framed data
 transport; does that change the above statement? If not, why does
 Cassandra not adopt the Thrift framed data transfer support?

 Thanks





Re: Cassandra Java Client

2014-02-26 Thread Vivek Mishra
Kundera does support CQL3. Work on supporting the datastax java driver is
under development.

https://github.com/impetus-opensource/Kundera/issues/385

-Vivek


On Wed, Feb 26, 2014 at 6:34 PM, DuyHai Doan doanduy...@gmail.com wrote:

 Short answer : yes

 Long answer: depending on whether you want to access Cassandra using
 Thrift or the native CQL3 protocol, different options are available. For
 Thrift access, lots of choices (Hector, Astyanax...). For CQL3, right now
 the only Java client so far is the one provided by Datastax.

 Does Cassandra itself (i.e. the apache-cassandra-* jars) not contain any
 CQL clients?

  No, the apache jars only ship the server-related components.
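
 For illustration, a minimal sketch with the DataStax Java Driver over the
 native CQL3 protocol (java-driver 2.0 API assumed; the contact point and
 query are assumptions):

 import com.datastax.driver.core.*;

 public class QuickStart {
     public static void main(String[] args) {
         Cluster cluster = Cluster.builder()
                 .addContactPoint("127.0.0.1")  // native protocol, port 9042
                 .build();
         Session session = cluster.connect();
         Row row = session.execute(
                 "SELECT release_version FROM system.local").one();
         System.out.println(row.getString("release_version"));
         cluster.close();
     }
 }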




 On Wed, Feb 26, 2014 at 2:00 PM, Timmy Turner timm.t...@gmail.com wrote:

 Hi,

 is the DataStax Java Driver for Apache Cassandra (
 https://github.com/datastax/java-driver) the official/recommended Java
 Client to use for accessing Cassandra from Java?

 Does Cassandra itself (i.e. the apache-cassandra-* jars) not contain any
 CQL clients?


 Thanks!





Re: Exception in cassandra logs while processing the message

2014-02-17 Thread Vivek Mishra
Looks like a Thrift interoperability issue. It seems the column family or
data was created via CQL3 and a Thrift-based API is being used to read it.

Otherwise, recreate your schema and try again.

-Vivek


On Mon, Feb 17, 2014 at 1:50 PM, ankit tyagi ankittyagi.mn...@gmail.com wrote:

 Hello,

 Does anyone have any idea about this exception?

 Regards,
 Ankit Tyagi


 On Fri, Feb 14, 2014 at 7:02 PM, ankit tyagi 
  ankittyagi.mn...@gmail.com wrote:

 Hello,

 I am seeing below exception in my cassandra
 logs(/var/log/cassandra/system.log).

 INFO [ScheduledTasks:1] 2014-02-13 13:13:57,641 GCInspector.java (line
 119) GC for ParNew: 273 ms for 1 collections, 2319121816 used; max is 445
 6448000
  INFO [ScheduledTasks:1] 2014-02-13 13:14:02,695 GCInspector.java (line
 119) GC for ParNew: 214 ms for 1 collections, 2315368976 used; max is 445
 6448000
  INFO [OptionalTasks:1] 2014-02-13 13:14:08,093 MeteredFlusher.java (line
 64) flushing high-traffic column family CFS(Keyspace='comsdb', ColumnFa
 mily='product_update') (estimated 213624220 bytes)
  INFO [OptionalTasks:1] 2014-02-13 13:14:08,093 ColumnFamilyStore.java
 (line 626) Enqueuing flush of Memtable-product_update@1067619242
 (31239028/
 213625108 serialized/live bytes, 222393 ops)
  INFO [FlushWriter:94] 2014-02-13 13:14:08,127 Memtable.java (line 400)
 Writing Memtable-product_update@1067619242(31239028/213625108 serialized/
 live bytes, 222393 ops)
  INFO [ScheduledTasks:1] 2014-02-13 13:14:08,696 GCInspector.java (line
 119) GC for ParNew: 214 ms for 1 collections, 2480175160 used; max is 445
 6448000
  INFO [FlushWriter:94] 2014-02-13 13:14:10,836 Memtable.java (line 438)
 Completed flushing /cassandra1/data/comsdb/product_update/comsdb-product_
 update-ic-416-Data.db (15707248 bytes) for commitlog position
 ReplayPosition(segmentId=1391568233618, position=13712751)
 ERROR [Thrift:13] 2014-02-13 13:15:45,694 CustomTThreadPoolServer.java
 (line 213) Thrift error occurred during processing of message.
 org.apache.thrift.TException: Negative length: -2147418111
 at
 org.apache.thrift.protocol.TBinaryProtocol.checkReadLength(TBinaryProtocol.java:388)
 at
 org.apache.thrift.protocol.TBinaryProtocol.readBinary(TBinaryProtocol.java:363)
 at
 org.apache.cassandra.thrift.Cassandra$batch_mutate_args.read(Cassandra.java:20304)
 at
 org.apache.thrift.ProcessFunction.process(ProcessFunction.java:21)
 at
 org.apache.thrift.TBaseProcessor.process(TBaseProcessor.java:34)
 at
 org.apache.cassandra.thrift.CustomTThreadPoolServer$WorkerProcess.run(CustomTThreadPoolServer.java:199)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 at java.lang.Thread.run(Thread.java:679)
 ERROR [Thrift:103] 2014-02-13 13:21:25,719 CustomTThreadPoolServer.java
 (line 213) Thrift error occurred during processing of message.
 org.apache.thrift.TException: Negative length: -2147418111


 Below are my cassandra version and hector client version, which are
 currently in use.

 Cassandra version: 1.2.11
 Hector client: 1.0-2

 Any lead would be appreciated. We are planning to move to Cassandra 2.0
 with the java-driver, but that may take some time; meanwhile we need to
 find the root cause and resolve this issue.


 Regards,
 Ankit Tyagi





Re: TimedOutException in Java but not in cqlsh

2014-02-14 Thread Vivek Mishra
Check the consistency level and socket timeout settings on the client side.

-Vivek


On Fri, Feb 14, 2014 at 2:36 PM, Cyril Scetbon cyril.scet...@free.fr wrote:

 After a few tests, it does not depend on the query. Whatever cql3 query I
 do, I always get the same exception. If someone sees something ...
 --
 Cyril SCETBON

 On 13 Feb 2014, at 17:22, Cyril Scetbon cyril.scet...@free.fr wrote:

  Hi,
 
  I get a weird issue with cassandra 1.2.13. As written in the subject, a
 query executed by the class CqlPagingRecordReader raises a TimedOutException
 in Java, but I don't get any error when I use it with cqlsh. What's the
 difference between those 2 ways? Does cqlsh bypass some configuration
 compared to Java?
 
  You can find my sample code at http://pastebin.com/vbAFyAys (don't take
 care of the way it's coded cause it's just a sample code). FYI, I can't
 reproduce it on another cluster. Here is the output of the 2 different ways
 (java and cqlsh) I used http://pastebin.com/umMNXJRw
 
  Thanks
  --
  Cyril SCETBON
 




Fwd: {kundera-discuss} Kundera 2.10 released

2014-01-31 Thread Vivek Mishra
fyi

-- Forwarded message --
From: Vivek Mishra vivek.mis...@impetus.co.in
Date: Sat, Feb 1, 2014 at 1:18 AM
Subject: {kundera-discuss} Kundera 2.10 released
To: kundera-disc...@googlegroups.com kundera-disc...@googlegroups.com


Hi All,

We are happy to announce the Kundera 2.10 release.

Kundera is a JPA 2.0 compliant, object-datastore mapping library for NoSQL
datastores. The idea behind Kundera is to make working with NoSQL databases
drop-dead simple and fun. It currently supports Cassandra, HBase, MongoDB,
Redis, OracleNoSQL, Neo4j, ElasticSearch, CouchDB and relational databases.

Major Changes:
==
1) Support added for bean validation.


Github Bug Fixes:
===
https://github.com/impetus-opensource/Kundera/issues/208
https://github.com/impetus-opensource/Kundera/issues/380
https://github.com/impetus-opensource/Kundera/issues/408
https://github.com/impetus-opensource/Kundera/issues/453
https://github.com/impetus-opensource/Kundera/issues/454
https://github.com/impetus-opensource/Kundera/issues/456
https://github.com/impetus-opensource/Kundera/issues/460
https://github.com/impetus-opensource/Kundera/issues/465
https://github.com/impetus-opensource/Kundera/issues/476
https://github.com/impetus-opensource/Kundera/issues/478
https://github.com/impetus-opensource/Kundera/issues/479
https://github.com/impetus-opensource/Kundera/issues/484
https://github.com/impetus-opensource/Kundera/issues/494
https://github.com/impetus-opensource/Kundera/issues/509
https://github.com/impetus-opensource/Kundera/issues/514
https://github.com/impetus-opensource/Kundera/issues/516
https://github.com/impetus-opensource/Kundera/issues/517
https://github.com/impetus-opensource/Kundera/issues/518


How to Download:
To download, use or contribute to Kundera, visit:
http://github.com/impetus-opensource/Kundera

Latest released tag version is 2.10. Kundera maven libraries are now
available at:
https://oss.sonatype.org/content/repositories/releases/com/impetus

Sample codes and examples for using Kundera can be found here:
https://github.com/impetus-opensource/Kundera/tree/trunk/src/kundera-tests

Survey/Feedback:
http://www.surveymonkey.com/s/BMB9PWG

Thank you all for your contributions and using Kundera!

PS: Group artifactId has been changed with the 2.9.1 release onward. Please
refer to
https://github.com/impetus-opensource/Kundera/blob/trunk/src/README.md#note
for the same.









NOTE: This message may contain information that is confidential,
proprietary, privileged or otherwise protected by law. The message is
intended solely for the named addressee. If received in error, please
destroy and notify the sender. Any use of this email is prohibited when
received in error. Impetus does not represent, warrant and/or guarantee,
that the integrity of this communication has been maintained nor that the
communication is free of errors, virus, interception or interference.

--
You received this message because you are subscribed to the Google Groups
kundera-discuss group.
To unsubscribe from this group and stop receiving emails from it, send an
email to kundera-discuss+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


one or more nodes were unavailable.

2014-01-20 Thread Vivek Mishra
Hi,
Trying the CAS feature of Cassandra 2.x and somehow getting the error below:


cqlsh:sample insert into User(user_id,first_name) values(
fe08e810-81e4-11e3-9470-c3aa8ce77cc4,'vivek1') if not exists;
Unable to complete request: one or more nodes were unavailable.
cqlsh:training


cqlsh:sample insert into User(user_id,first_name) values(
fe08e810-81e4-11e3-9470-c3aa8ce77cc4,'vivek1')

It works fine.

Any idea?

-Vivek


Re: one or more nodes were unavailable.

2014-01-20 Thread Vivek Mishra
Single node and default consistency. Running via cqlsh.


On Tue, Jan 21, 2014 at 1:47 AM, sankalp kohli kohlisank...@gmail.com wrote:

 Also do you have any nodes down...because it is possible to reach write
 consistency and not do CAS because some machines are down.


 On Mon, Jan 20, 2014 at 12:16 PM, sankalp kohli kohlisank...@gmail.com wrote:

 What consistency level are you using?


 On Mon, Jan 20, 2014 at 7:16 AM, Vivek Mishra mishra.v...@gmail.com wrote:

 Hi,
 Trying CAS feature of cassandra 2.x and somehow getting given below
 error:


 cqlsh:sample insert into User(user_id,first_name) values(
 fe08e810-81e4-11e3-9470-c3aa8ce77cc4,'vivek1') if not exists;
 Unable to complete request: one or more nodes were unavailable.
 cqlsh:training


 cqlsh:sample insert into User(user_id,first_name) values(
 fe08e810-81e4-11e3-9470-c3aa8ce77cc4,'vivek1')

 It works fine.

 Any idea?

 -Vivek







Re: one or more nodes were unavailable.

2014-01-20 Thread Vivek Mishra
I have downloaded Cassandra 2.x and set it up on a single machine. I started
the Cassandra server and connected via cqlsh. I created a column family and
am inserting a single record into it (via cqlsh).

Wondering why it gives No node available.

Even though simple insert queries (without CAS) work!

-Vivek


On Tue, Jan 21, 2014 at 11:33 AM, Drew Kutcharian d...@venarc.com wrote:

 If you are trying this out on a single node, make sure you set the
 replication_factor of the keyspace to one.
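
 For illustration, a minimal sketch (the keyspace name matches the thread's
 example; run in cqlsh on the single node):

 CREATE KEYSPACE sample
   WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};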


 On Jan 20, 2014, at 7:41 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 Single node and default consistency. Running via cqsh


 On Tue, Jan 21, 2014 at 1:47 AM, sankalp kohli kohlisank...@gmail.com wrote:

 Also do you have any nodes down...because it is possible to reach write
 consistency and not do CAS because some machines are down.


 On Mon, Jan 20, 2014 at 12:16 PM, sankalp kohli 
  kohlisank...@gmail.com wrote:

 What consistency level are you using?


 On Mon, Jan 20, 2014 at 7:16 AM, Vivek Mishra mishra.v...@gmail.com wrote:

 Hi,
 Trying CAS feature of cassandra 2.x and somehow getting given below
 error:


 cqlsh:sample insert into User(user_id,first_name) values(
 fe08e810-81e4-11e3-9470-c3aa8ce77cc4,'vivek1') if not exists;
 Unable to complete request: one or more nodes were unavailable.
 cqlsh:training


 cqlsh:sample insert into User(user_id,first_name) values(
 fe08e810-81e4-11e3-9470-c3aa8ce77cc4,'vivek1')

 It works fine.

 Any idea?

 -Vivek









Re: Help on Designing Cassandra table for my usecase

2014-01-10 Thread Vivek Mishra
@Naresh
Too many indexes, or indexes with high cardinality, are discouraged and are
always a performance issue. A set will not contain duplicate values.
-Vivek


On Fri, Jan 10, 2014 at 5:48 PM, Naresh Yadav nyadav@gmail.com wrote:

 @Thunder
 I just came to know about CASSANDRA-4511
 (https://issues.apache.org/jira/browse/CASSANDRA-4511), which allows
 indexes on collections and will be part of release 2.1.
 I hope in that case my problem will be solved by changing your designed
 table with the tag column as set<text> and defining a secondary index on
 it. Is there any risk of performance problems with this design, keeping in
 mind the huge data volume?
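
 A hedged sketch of what that 2.1 collection index could look like (table
 and column names here are illustrative, not from this thread):

 CREATE TABLE metric_result (
     metric text,
     period text,
     tags set<text>,
     value int,
     PRIMARY KEY (metric, period)
 );

 CREATE INDEX ON metric_result (tags);  -- collection index, CASSANDRA-4511
 SELECT * FROM metric_result WHERE tags CONTAINS 'U.S.A';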


 Naresh

 On Fri, Jan 10, 2014 at 10:26 AM, Naresh Yadav nyadav@gmail.comwrote:

 @Thunder thanks for suggesting a design, but my main problem is
 indexing/querying the dynamic Tag on each row; it is the main context of
 each row and most queries will include it.

 As an alternative to cassandra, i tried Apache Blur. In a Blur table i am
 able to store exactly the same data, and all the queries also worked, so
 Blur allows dynamic indexing of the tag column. BUT by moving away from
 cassandra i am losing its strengths, and because of that i am not confident
 in this decision, as the data will be huge in my case.

 Please guide me on this with better suggestions.

 Thanks
 Naresh

 On Fri, Jan 10, 2014 at 2:33 AM, Thunder Stumpges 
 thunder.stump...@gmail.com wrote:

 Well I think you have essentially time-series data, which C* should
 handle well; however I think your Tag column is going to cause trouble.
 C* does have collection columns, but they are not indexable nor usable in
 a WHERE clause. Your example has both the uniqueness of the data (primary
 key) and query filtering on potentially multiple Tag columns. That is not
 supported in C* AFAIK. If it were a single Tag, that could be a column
 that is indexed, possibly.

 Ignoring that issue with the many different Tags, You could model the
 table as:

 CREATE TABLE metric_data (
   metric text,
   time text,
   period text,
   tag text,
   value int,
   PRIMARY KEY( (metric,time), period, tag)
 )

 That would make a composite partitioning key on metric and time meaning
 you'd always have to pass those (or else randomly page via TOKEN through
 all rows). After specifying metric and time, you could optionally also
 specify period and/or tag, and results would be ordered (clustered) by
 period. This would satisfy your queries a,b, and d but not c (as you did
 not specify time). If Time was a granularity column, does it even make
 sense to return records across differing time values? What does it mean to
 return the 4 month rows and 1 year row in your example? Could you issue N
 queries in this case (where N is a small number of each of your time
 granularities) ?
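
 As a hedged illustration, these are the queries that layout serves
 directly (values taken from the sample data above):

 SELECT * FROM metric_data WHERE metric='Sales' AND time='Month';              -- (a)
 SELECT * FROM metric_data
     WHERE metric='Sales' AND time='Month' AND period='Jan-10';                -- (b)
 SELECT * FROM metric_data
     WHERE metric='Sales' AND time='Month' AND period='Jan-10' AND tag='Pen';  -- (d), one tag

 Query (c) does not supply the full (metric, time) partition key, so it
 cannot address a single partition in this model.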

 I'm not sure how close that gets you, or if you can re-work your concept
 of Tag at all.
 Good luck.
 Thunder



 On Thu, Jan 9, 2014 at 10:45 AM, Hannu Kröger hkro...@gmail.com wrote:

 To my eye that looks like something the traditional analytics systems do.
 You can check out e.g. Acunu Analytics, which uses Cassandra as a backend.

 Cheers,
 Hannu


 2014/1/9 Naresh Yadav nyadav@gmail.com

 Hi all,

 I have a use case with huge data which i am not able to model in
 cassandra.

 Table name : MetricResult

 Sample Data :

 Metric=Sales,    Time=Month, Period=Jan-10,     Tag=U.S.A, Tag=Pen,    Value=10
 Metric=Sales,    Time=Month, Period=Jan-10,     Tag=U.S.A, Tag=Pencil, Value=20
 Metric=Sales,    Time=Month, Period=Feb-10,     Tag=U.S.A, Tag=Pen,    Value=30
 Metric=Sales,    Time=Month, Period=Feb-10,     Tag=U.S.A, Tag=Pencil, Value=10
 Metric=Sales,    Time=Month, Period=Feb-10,     Tag=India,             Value=90
 Metric=Sales,    Time=Year,  Period=2010,       Tag=U.S.A,             Value=70
 Metric=Cost,     Time=Year,  Period=2010,       Tag=CPU,               Value=8000
 Metric=Cost,     Time=Year,  Period=2010,       Tag=RAM,               Value=4000
 Metric=Cost,     Time=Year,  Period=2011,       Tag=CPU,               Value=9000
 Metric=Resource, Time=Week,  Period=Week1-2013,                        Value=100

 So in the above case i have:
  - time-series data, i.e. the Time and Period columns
  - dynamic columns, i.e. the Tag column
  - indexing on dynamic columns, i.e. the Tag column
  - aggregations: SUM, AVERAGE
  - if the same value comes again for a Metric, Time, Period, Tag, then
    overwrite it

 Queries i need to support:
 --
 a) Give data for Metric=Sales AND Time=Month                                O/P: 5 rows
 b) Give data for Metric=Sales AND Time=Month AND Period=Jan-10              O/P: 2 rows
 c) Give data for Metric=Sales AND Tag=U.S.A                                 O/P: 5 rows
 d) Give data for Metric=Sales AND Period=Jan-10 AND Tag=U.S.A AND Tag=Pen   O/P: 1 row


 This table can have TBs of data, and a single Metric/Period can have
 millions of rows.

 Please give suggestions to design/model this table in Cassandra. If there
 is some limitation in Cassandra, then suggest the best technology to handle
 this.


 Thanks
 Naresh












Nodetool ring

2014-01-02 Thread Vivek Mishra
Hi,
I am trying to understand "Owns" here. AFAIK, it is a range (part of the
keyspace) owned by a node, so i am not able to understand why it is shown as
100%. Is it because of effective ownership?

Address   Rack  Status  State   Load      Owns      Token
                                                    -3074457345618258503
x.x.x.x   3     Up      Normal  91.28 MB  100.00%   3074457345618258702
x.x.x.x   1     Up      Normal  83.45 MB  100.00%   -9223372036854775708
x.x.x.x   2     Up      Normal  90.11 MB  100.00%   -3074457345618258503


Any suggestions?

-Vivek


Re: Nodetool ring

2014-01-02 Thread Vivek Mishra
Thanks for your quick reply. Even with 2 data centers with 3 data nodes
each, i am seeing 100% on the nodes in both data centers.

-Vivek


On Fri, Jan 3, 2014 at 12:07 AM, Robert Coli rc...@eventbrite.com wrote:

 On Thu, Jan 2, 2014 at 10:20 AM, Vivek Mishra mishra.v...@gmail.comwrote:

 I am trying to understand  Owns here.  AFAIK, it is range(part of
 keyspace). Not able to understand why is it shown as 100%? Is it because of
 effective ownership?


 When RF=N, effective ownership for each node is 100%.

 This is almost certainly what you are seeing, given a 3 node cluster
 (which probably has RF=3...).

 =Rob
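
 For illustration, with a hypothetical keyspace like the one below on 3
 nodes per data center, every node stores a replica of every row, which is
 why nodetool ring reports 100.00% for each node:

 CREATE KEYSPACE demo WITH replication =
     {'class': 'NetworkTopologyStrategy', 'DC1': 3, 'DC2': 3};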



Re: Nodetool ring

2014-01-02 Thread Vivek Mishra
Yes.


On Fri, Jan 3, 2014 at 12:57 AM, Robert Coli rc...@eventbrite.com wrote:

 On Thu, Jan 2, 2014 at 10:48 AM, Vivek Mishra mishra.v...@gmail.comwrote:

 Thanks for your quick reply. Even with 2 data center  with 3 data nodes
 each i am seeing 100% on both data center nodes.


 Do you have RF=3 in both?

 =Rob




Broken pipe with Thrift

2013-12-23 Thread Vivek Mishra
Hi,
I have a 6 node, 2 DC cluster setup. I have configured the consistency level
to QUORUM, but very often i am getting a Broken pipe error:

com.impetus.client.cassandra.CassandraClientBase
(CassandraClientBase.java:1926) - Error while executing native CQL query.
Caused by:
org.apache.thrift.transport.TTransportException: java.net.SocketException:
Broken pipe
    at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
    at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156)
    at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:65)
    at org.apache.cassandra.thrift.Cassandra$Client.send_execute_cql3_query(Cassandra.java:1556)
    at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1546)


I am simply reading a few records from a column family (not a huge amount of
data).

Connection pooling and socket timeouts are properly configured. I have even
raised read_request_timeout_in_ms, request_timeout_in_ms and
write_request_timeout_in_ms in cassandra.yaml to higher values.


Any idea? Is it an issue on the server side or with the client API?

-Vivek


Re: Broken pipe with Thrift

2013-12-23 Thread Vivek Mishra
Also to add. It works absolutely fine on single node.

-Vivek


On Tue, Dec 24, 2013 at 12:15 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 Hi,
 I have a 6 node, 2 DC cluster setup. I have configured the consistency
 level to QUORUM, but very often i am getting a Broken pipe error:

 com.impetus.client.cassandra.CassandraClientBase
 (CassandraClientBase.java:1926) - Error while executing native CQL query.
 Caused by:
 org.apache.thrift.transport.TTransportException: java.net.SocketException:
 Broken pipe
     at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
     at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156)
     at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:65)
     at org.apache.cassandra.thrift.Cassandra$Client.send_execute_cql3_query(Cassandra.java:1556)
     at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1546)


 I am simply reading a few records from a column family (not a huge amount
 of data).

 Connection pooling and socket timeouts are properly configured. I have even
 raised read_request_timeout_in_ms, request_timeout_in_ms and
 write_request_timeout_in_ms in cassandra.yaml to higher values.


 Any idea? Is it an issue on the server side or with the client API?

 -Vivek



Re: Broken pipe with Thrift

2013-12-23 Thread Vivek Mishra
Hi Steven,
Thanks for your reply. We are using version 1.2.9.

-Vivek


On Tue, Dec 24, 2013 at 12:27 PM, Steven A Robenalt
srobe...@stanford.eduwrote:

 Hi Vivek,

 Which release are you using? We had an issue with 2.0.2 that was solved by
 a fix in 2.0.3.


 On Mon, Dec 23, 2013 at 10:47 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 Also to add. It works absolutely fine on single node.

 -Vivek


 On Tue, Dec 24, 2013 at 12:15 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 Hi,
 I have a 6 node, 2 DC cluster setup. I have configured the consistency
 level to QUORUM, but very often i am getting a Broken pipe error:

 com.impetus.client.cassandra.CassandraClientBase
 (CassandraClientBase.java:1926) - Error while executing native CQL query.
 Caused by:
 org.apache.thrift.transport.TTransportException: java.net.SocketException:
 Broken pipe
     at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
     at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156)
     at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:65)
     at org.apache.cassandra.thrift.Cassandra$Client.send_execute_cql3_query(Cassandra.java:1556)
     at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1546)


 I am simply reading a few records from a column family (not a huge amount
 of data).

 Connection pooling and socket timeouts are properly configured. I have even
 raised read_request_timeout_in_ms, request_timeout_in_ms and
 write_request_timeout_in_ms in cassandra.yaml to higher values.


 Any idea? Is it an issue on the server side or with the client API?

 -Vivek





 --
 Steve Robenalt
 Software Architect
 HighWire | Stanford University
 425 Broadway St, Redwood City, CA 94063

 srobe...@stanford.edu
 http://highwire.stanford.edu








Re: Broken pipe with Thrift

2013-12-23 Thread Vivek Mishra
Hi Steven,
One question, which is confusing , it's a server side issue or client side?

-Vivek




On Tue, Dec 24, 2013 at 12:30 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 Hi Steven,
 Thanks for your reply. We are using version 1.2.9.

 -Vivek


 On Tue, Dec 24, 2013 at 12:27 PM, Steven A Robenalt srobe...@stanford.edu
  wrote:

 Hi Vivek,

 Which release are you using? We had an issue with 2.0.2 that was solved
 by a fix in 2.0.3.


 On Mon, Dec 23, 2013 at 10:47 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 Also to add. It works absolutely fine on single node.

 -Vivek


 On Tue, Dec 24, 2013 at 12:15 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 Hi,
 I have a 6 node, 2 DC cluster setup. I have configured the consistency
 level to QUORUM, but very often i am getting a Broken pipe error:

 com.impetus.client.cassandra.CassandraClientBase
 (CassandraClientBase.java:1926) - Error while executing native CQL query.
 Caused by:
 org.apache.thrift.transport.TTransportException: java.net.SocketException:
 Broken pipe
     at org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
     at org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156)
     at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:65)
     at org.apache.cassandra.thrift.Cassandra$Client.send_execute_cql3_query(Cassandra.java:1556)
     at org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1546)


 I am simply reading a few records from a column family (not a huge amount
 of data).

 Connection pooling and socket timeouts are properly configured. I have even
 raised read_request_timeout_in_ms, request_timeout_in_ms and
 write_request_timeout_in_ms in cassandra.yaml to higher values.


 Any idea? Is it an issue on the server side or with the client API?

 -Vivek





 --
 Steve Robenalt
 Software Architect
  HighWire | Stanford University
 425 Broadway St, Redwood City, CA 94063

 srobe...@stanford.edu
 http://highwire.stanford.edu









Fwd: {kundera-discuss} RE: Kundera 2.9 released

2013-12-13 Thread Vivek Mishra
fyi.

-- Forwarded message --
From: Vivek Mishra vivek.mis...@impetus.co.in
Date: Fri, Dec 13, 2013 at 8:54 PM
Subject: {kundera-discuss} RE: Kundera 2.9 released
To: kundera-disc...@googlegroups.com kundera-disc...@googlegroups.com, 
u...@hbase.apache.org u...@hbase.apache.org


Hi All,

We are happy to announce the release of Kundera 2.9 .

Kundera is a JPA 2.0 compliant, object-datastore mapping library for NoSQL
datastores. The idea behind Kundera is to make working with NoSQL databases
drop-dead simple and fun. It currently supports Cassandra, HBase, MongoDB,
Redis, OracleNoSQL, Neo4j,ElasticSearch,CouchDB and relational databases.

Major Changes:
==
1) Support for Secondary table
2) Support Abstract entity.

Github Bug Fixes:
===
https://github.com/impetus-opensource/Kundera/issues/455
https://github.com/impetus-opensource/Kundera/issues/448
https://github.com/impetus-opensource/Kundera/issues/447
https://github.com/impetus-opensource/Kundera/issues/443
https://github.com/impetus-opensource/Kundera/pull/442
https://github.com/impetus-opensource/Kundera/issues/404
https://github.com/impetus-opensource/Kundera/issues/388
https://github.com/impetus-opensource/Kundera/issues/283
https://github.com/impetus-opensource/Kundera/issues/263
https://github.com/impetus-opensource/Kundera/issues/120
https://github.com/impetus-opensource/Kundera/issues/103

How to Download:
To download, use or contribute to Kundera, visit:
http://github.com/impetus-opensource/Kundera

Latest released tag version is 2.9 Kundera maven libraries are now
available at:
https://oss.sonatype.org/content/repositories/releases/com/impetus

Sample codes and examples for using Kundera can be found here:
https://github.com/impetus-opensource/Kundera/tree/trunk/src/kundera-tests

Survey/Feedback:
http://www.surveymonkey.com/s/BMB9PWG

Thank you all for your contributions and using Kundera!

Sincerely,
Kundera Team










Re: Exactly one wide row per node for a given CF?

2013-12-03 Thread Vivek Mishra
So basically you want to spread multiple unique keys across the cluster, but
data which belongs to one unique key should be colocated, correct?

-Vivek


On Tue, Dec 3, 2013 at 10:39 AM, onlinespending onlinespend...@gmail.comwrote:

 Subject says it all. I want to be able to randomly distribute a large set
 of records but keep them clustered in one wide row per node.

 As an example, let's say I’ve got a collection of about 1 million records
 each with a unique id. If I just go ahead and set the primary key (and
 therefore the partition key) as the unique id, I’ll get very good random
 distribution across my server cluster. However, each record will be its own
 row. I’d like to have each record belong to one large wide row (per server
 node) so I can have them sorted or clustered on some other column.

 If I say have 5 nodes in my cluster, I could randomly assign a value of 1
 - 5 at the time of creation and have the partition key set to this value.
 But this becomes troublesome if I add or remove nodes. What effectively I
 want is to partition on the unique id of the record modulus N (id % N;
 where N is the number of nodes).

 I have to imagine there’s a mechanism in Cassandra to simply randomize the
 partitioning without even using a key (and then clustering on some column).

 Thanks for any help.
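
 One common workaround, sketched here with illustrative names (an
 assumption-laden example, not a built-in mechanism): hash the unique id
 into a fixed number of buckets on the client and partition on the bucket,
 so the mapping never changes when nodes are added or removed.

 CREATE TABLE records (
     bucket int,          -- e.g. abs(id.hashCode()) % 128, computed client-side
     sort_key timestamp,  -- whatever the records should be clustered on
     id uuid,
     payload text,
     PRIMARY KEY (bucket, sort_key, id)
 );

 Each bucket becomes one wide row placed by the partitioner; it is not
 strictly one row per node, but it keeps records clustered while surviving
 cluster resizes.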


Re: Wide rows (time series data) and ORM

2013-10-23 Thread Vivek Mishra
Can Kundera work with wide rows in an ORM manner?

What specifically are you looking for? A composite-column-based
implementation can be built using Kundera.
With recent CQL3 developments, Kundera supports most of these. I think the
POJO needs to be aware of the number of fields to be persisted (same as with
CQL3).

-Vivek


On Wed, Oct 23, 2013 at 12:48 AM, Les Hartzman lhartz...@gmail.com wrote:

 As I'm becoming more familiar with Cassandra I'm still trying to shift my
 thinking from relational to NoSQL.

 Can Kundera work with wide rows in an ORM manner? In other words, can you
 actually design a POJO that fits the standard recipe for JPA usage? Would
 the queries return collections of the POJO to handle wide row data?

 I had considered using Spring and JPA for Cassandra, but it appears that
 other than basic configuration issues for Cassandra, to use Spring and JPA
 on a Cassandra database seems like an effort in futility if Cassandra is
 used as a NoSQL database instead of mimicking an RDBMS solution.

 If anyone can shed any light on this, I'd appreciate it.

 Thanks.

 Les




Re: Wide rows (time series data) and ORM

2013-10-23 Thread Vivek Mishra
Hi,
CREATE TABLE sensor_data (
    sensor_id text,
    date text,
    data_time_stamp timestamp,
    reading int,
    PRIMARY KEY ((sensor_id, date), data_time_stamp)
);

Yes, you can create a POJO for this and map each row to exactly one POJO
object.
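
A quick hedged illustration (values made up): each reading below comes back
as its own CQL row, so an ORM can map every row to one SensorData instance.

INSERT INTO sensor_data (sensor_id, date, data_time_stamp, reading)
VALUES ('s1', '2013-10-23', '2013-10-23 10:15:00', 42);

SELECT * FROM sensor_data WHERE sensor_id = 's1' AND date = '2013-10-23';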

Please have a look at:
https://github.com/impetus-opensource/Kundera/wiki/Using-Compound-keys-with-Kundera

There are users built production system using Kundera, please refer :
https://github.com/impetus-opensource/Kundera/wiki/Kundera-in-Production-Deployments


I am working as a core commitor in Kundera, please do let me know if you
have any query.

Sincerely,
-Vivek



On Wed, Oct 23, 2013 at 10:41 PM, Les Hartzman lhartz...@gmail.com wrote:

 Hi Vivek,

 What I'm looking for are a couple of things as I'm gaining an
 understanding of Cassandra. With wide rows and time series data, how do you
 (or can you) handle this data in an ORM manner? Now I understand that with
 CQL3, doing a select * from time_series_data will return the data as
 multiple rows. So does handling this data equal the way you would deal with
 any mapping of objects to results in a relational manner? Would you still
 use a JPA approach or is there a Cassandra/CQL3-specific way of interacting
 with the database?

 I expect to use a compound key for partitioning/clustering. For example
 I'm planning on creating a table as follows:
   CREATE TABLE sensor_data (
       sensor_id text,
       date text,
       data_time_stamp timestamp,
       reading int,
       PRIMARY KEY ((sensor_id, date), data_time_stamp)
   );
 The 'date' field will be day-specific so that for each day there will be a
 new row created.

 So will I be able to define a POJO, SensorData, with the fields show above
 and basically process each 'row' returned by CQL as another SensorData
 object?

 Thanks.

 Les



 On Wed, Oct 23, 2013 at 1:22 AM, Vivek Mishra mishra.v...@gmail.comwrote:

 Can Kundera work with wide rows in an ORM manner?

 What specifically are you looking for? A composite-column-based
 implementation can be built using Kundera.
 With recent CQL3 developments, Kundera supports most of these. I think the
 POJO needs to be aware of the number of fields to be persisted (same as
 with CQL3).

 -Vivek


 On Wed, Oct 23, 2013 at 12:48 AM, Les Hartzman lhartz...@gmail.comwrote:

 As I'm becoming more familiar with Cassandra I'm still trying to shift
 my thinking from relational to NoSQL.

 Can Kundera work with wide rows in an ORM manner? In other words, can
 you actually design a POJO that fits the standard recipe for JPA usage?
 Would the queries return collections of the POJO to handle wide row data?

 I had considered using Spring and JPA for Cassandra, but it appears that
 other than basic configuration issues for Cassandra, to use Spring and JPA
 on a Cassandra database seems like an effort in futility if Cassandra is
 used as a NoSQL database instead of mimicking an RDBMS solution.

 If anyone can shed any light on this, I'd appreciate it.

 Thanks.

 Les






Fwd: {kundera-discuss} Kundera 2.8 released

2013-10-21 Thread Vivek Mishra
fyi.

-- Forwarded message --
From: Vivek Mishra vivek.mis...@impetus.co.in
Date: Tue, Oct 22, 2013 at 1:33 AM
Subject: {kundera-discuss} Kundera 2.8 released
To: kundera-disc...@googlegroups.com kundera-disc...@googlegroups.com


Hi All,

We are happy to announce the release of Kundera 2.8 .

Kundera is a JPA 2.0 compliant, object-datastore mapping library for NoSQL
datastores. The idea behind Kundera is to make working with NoSQL databases
drop-dead simple and fun. It currently supports Cassandra, HBase, MongoDB,
Redis, OracleNoSQL, Neo4j,ElasticSearch,CouchDB and relational databases.

Major Changes:
==
1) Support for CouchDB as datastore.
2) Support for MappedSuperclass and JPA Inheritence strategy.

Github Bug Fixes:
===

https://github.com/impetus-opensource/Kundera/pull/409
https://github.com/impetus-opensource/Kundera/issues/396
https://github.com/impetus-opensource/Kundera/issues/379
https://github.com/impetus-opensource/Kundera/issues/340
https://github.com/impetus-opensource/Kundera/issues/327
https://github.com/impetus-opensource/Kundera/issues/320
https://github.com/impetus-opensource/Kundera/issues/261
https://github.com/impetus-opensource/Kundera/pull/142
https://github.com/impetus-opensource/Kundera/issues/55
https://github.com/impetus-opensource/Kundera/issues/420
https://github.com/impetus-opensource/Kundera/issues/414
https://github.com/impetus-opensource/Kundera/issues/411
https://github.com/impetus-opensource/Kundera/issues/401
https://github.com/impetus-opensource/Kundera/issues/378
https://github.com/impetus-opensource/Kundera/issues/354
https://github.com/impetus-opensource/Kundera/issues/315
https://github.com/impetus-opensource/Kundera/issues/298
https://github.com/impetus-opensource/Kundera/issues/204
https://github.com/impetus-opensource/Kundera/issues/179
https://github.com/impetus-opensource/Kundera/issues/128
https://github.com/impetus-opensource/Kundera/issues/432
https://github.com/impetus-opensource/Kundera/issues/422


How to Download:
To download, use or contribute to Kundera, visit:
http://github.com/impetus-opensource/Kundera

Latest released tag version is 2.8 Kundera maven libraries are now
available at:
https://oss.sonatype.org/content/repositories/releases/com/impetus

Sample codes and examples for using Kundera can be found here:
https://github.com/impetus-opensource/Kundera/tree/trunk/kundera-tests

Survey/Feedback:
http://www.surveymonkey.com/s/BMB9PWG

Thank you all for your contributions and using Kundera!


Sincerely,
Kundera Team










Re: COPY command times out

2013-10-11 Thread Vivek Mishra
If you are not getting any exception, then the reason for Request did not
complete within rpc_timeout. is a socket timeout.

As per http://www.datastax.com/docs/1.1/references/cql/COPY (COPY FROM CSV
section):

COPY FROM is intended for importing small datasets (a few million rows or
less) into Cassandra. For importing larger datasets, use the Cassandra Bulk
Loader (http://www.datastax.com/docs/1.1/references/bulkloader#bulkloader) or
the sstable2json / json2sstable utility
(http://www.datastax.com/docs/1.1/references/sstable2json#sstable2json).


-Vivek




On Fri, Oct 11, 2013 at 3:02 PM, Petter von Dolwitz (Hem) 
petter.von.dolw...@gmail.com wrote:

 Hi,

 I'm trying to import CSV data using the COPY ... FROM command. After
 importing 10% of my 2.5 GB csv file the operation aborts with the message:

 Request did not complete within rpc_timeout.
 Aborting import at record #504631 (line 504632). Previously-inserted
 values still present.

 There are no exceptions in the log. I'm using Cassandra 2.0.1 on ubuntu
 using a two machine setup with 4 cores, 15 GB RAM each.

 The table design incorporates many secondary indexes (which someone
 discouraged me from using).

 Can anybody tell me what is going on?

 Thanks,
 Petter





Re: Bulk Loader in cassandra : String as row keys in cassandra

2013-10-11 Thread Vivek Mishra
but i have changed my key_validation_class=AsciiType in order to make
strings the row keys

Why not key_validation_class=UTF8Type?

-Vivek


On Fri, Oct 11, 2013 at 3:55 PM, ashish sanadhya sanadhyaa...@gmail.comwrote:

 I have got the bulk loader working with key_validation_class=LexicalUUIDType
 for new rows with the help of this [code][1], but i have changed
 key_validation_class=AsciiType in order to make strings the row keys:

   create column family Users1
     with key_validation_class = AsciiType
     and comparator = AsciiType
     and column_metadata = [
       {column_name: timestamp1, validation_class: AsciiType},
       {column_name: symbol, validation_class: AsciiType},
       {column_name: Bid_Price, validation_class: AsciiType},
       {column_name: Ask_Price, validation_class: AsciiType}
     ];


 i have tried all possible changes to the code in order to make the row keys
 string-typed, but i am getting an error; even without usersWriter.newRow
 i am not able to write into the sstable:


   while ((line = reader.readLine()) != null)
   {
       if (entry.parse(line, lineNumber))
       {
           // usersWriter.newRow(uuid);
           usersWriter.newRow(String.valueOf(lineNumber));
           usersWriter.addColumn(bytes("symbol"), bytes(entry.symbol), timestamp);
           usersWriter.addColumn(bytes("timestamp1"), bytes(entry.timestamp1), timestamp);
           usersWriter.addColumn(bytes("Bid_Price"), bytes(entry.Bid_Price), timestamp);
           usersWriter.addColumn(bytes("Ask_Price"), bytes(entry.Ask_Price), timestamp);
       }
       lineNumber++;
   }

   getting an error; as expected, it only takes a ByteBuffer:

   usersWriter.newRow(String.valueOf(lineNumber));
                      ^
     required: ByteBuffer
     found: String
     reason: actual argument String cannot be converted to ByteBuffer by
             method invocation conversion

 Any help to make strings the row keys in the sstable for the above column
 family definition? Thanks.






   [1]:
 http://www.datastax.com/wp-content/uploads/2011/08/DataImportExample.java



Re: Bulk Loader in cassandra : String as row keys in cassandra

2013-10-11 Thread Vivek Mishra
Also, please use ByteBufferUtil for byte conversions.


On Fri, Oct 11, 2013 at 4:17 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 but i have changed my **key_validation_class=AsciiType** in order to make
 **string as row keys**

 why not key_validation_class=UTF8Type ?

 -Vivek


 On Fri, Oct 11, 2013 at 3:55 PM, ashish sanadhya 
 sanadhyaa...@gmail.comwrote:

 I have got the bulk loader working with key_validation_class=LexicalUUIDType
 for new rows with the help of this [code][1], but i have changed
 key_validation_class=AsciiType in order to make strings the row keys:

   create column family Users1
     with key_validation_class = AsciiType
     and comparator = AsciiType
     and column_metadata = [
       {column_name: timestamp1, validation_class: AsciiType},
       {column_name: symbol, validation_class: AsciiType},
       {column_name: Bid_Price, validation_class: AsciiType},
       {column_name: Ask_Price, validation_class: AsciiType}
     ];


 i have tried all possible changes to the code in order to make the row keys
 string-typed, but i am getting an error; even without usersWriter.newRow
 i am not able to write into the sstable:


   while ((line = reader.readLine()) != null)
   {
       if (entry.parse(line, lineNumber))
       {
           // usersWriter.newRow(uuid);
           usersWriter.newRow(String.valueOf(lineNumber));
           usersWriter.addColumn(bytes("symbol"), bytes(entry.symbol), timestamp);
           usersWriter.addColumn(bytes("timestamp1"), bytes(entry.timestamp1), timestamp);
           usersWriter.addColumn(bytes("Bid_Price"), bytes(entry.Bid_Price), timestamp);
           usersWriter.addColumn(bytes("Ask_Price"), bytes(entry.Ask_Price), timestamp);
       }
       lineNumber++;
   }

   getting an error; as expected, it only takes a ByteBuffer:

   usersWriter.newRow(String.valueOf(lineNumber));
                      ^
     required: ByteBuffer
     found: String
     reason: actual argument String cannot be converted to ByteBuffer by
             method invocation conversion

 Any help to make strings the row keys in the sstable for the above column
 family definition? Thanks.






   [1]:
 http://www.datastax.com/wp-content/uploads/2011/08/DataImportExample.java





Re: Bulk Loader in cassandra : String as row keys in cassandra

2013-10-11 Thread Vivek Mishra
I am not able to get your meaning for string as row keys?

Row key values will be of type key_validation_class only.

On Fri, Oct 11, 2013 at 4:25 PM, ashish sanadhya sanadhyaa...@gmail.comwrote:

 Hi vivek, key_validation_class=UTF8Type will do, but i certainly want
 strings as row keys, so will it work?


 On Fri, Oct 11, 2013 at 4:17 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 Also, please use ByteBufferUtils for byte conversions.


 On Fri, Oct 11, 2013 at 4:17 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 but i have changed my **key_validation_class=AsciiType** in order to
 make **string as row keys**

 why not key_validation_class=UTF8Type ?

 -Vivek


 On Fri, Oct 11, 2013 at 3:55 PM, ashish sanadhya sanadhyaa...@gmail.com
  wrote:

 I have got the bulk loader working with key_validation_class=LexicalUUIDType
 for new rows with the help of this [code][1], but i have changed
 key_validation_class=AsciiType in order to make strings the row keys:

   create column family Users1
     with key_validation_class = AsciiType
     and comparator = AsciiType
     and column_metadata = [
       {column_name: timestamp1, validation_class: AsciiType},
       {column_name: symbol, validation_class: AsciiType},
       {column_name: Bid_Price, validation_class: AsciiType},
       {column_name: Ask_Price, validation_class: AsciiType}
     ];


 i have tried all possible changes to the code in order to make the row keys
 string-typed, but i am getting an error; even without usersWriter.newRow
 i am not able to write into the sstable:


   while ((line = reader.readLine()) != null)
   {
       if (entry.parse(line, lineNumber))
       {
           // usersWriter.newRow(uuid);
           usersWriter.newRow(String.valueOf(lineNumber));
           usersWriter.addColumn(bytes("symbol"), bytes(entry.symbol), timestamp);
           usersWriter.addColumn(bytes("timestamp1"), bytes(entry.timestamp1), timestamp);
           usersWriter.addColumn(bytes("Bid_Price"), bytes(entry.Bid_Price), timestamp);
           usersWriter.addColumn(bytes("Ask_Price"), bytes(entry.Ask_Price), timestamp);
       }
       lineNumber++;
   }

   getting an error; as expected, it only takes a ByteBuffer:

   usersWriter.newRow(String.valueOf(lineNumber));
                      ^
     required: ByteBuffer
     found: String
     reason: actual argument String cannot be converted to ByteBuffer by
             method invocation conversion

 Any help to make strings the row keys in the sstable for the above column
 family definition? Thanks.






   [1]:
 http://www.datastax.com/wp-content/uploads/2011/08/DataImportExample.java







Re: Bulk Loader in cassandra : String as row keys in cassandra

2013-10-11 Thread Vivek Mishra
Change key_validation_class to UTF8Type and pass the row key as a ByteBuffer:

import org.apache.cassandra.utils.ByteBufferUtil;

usersWriter.newRow(ByteBufferUtil.bytes(String.valueOf(lineNumber)));



On Fri, Oct 11, 2013 at 4:42 PM, ashish sanadhya sanadhyaa...@gmail.comwrote:

 Here i mean key_validation_class=AsciiType or key_validation_class=UTF8Type,
 but I am unable to create an sstable for this column family:

 create column family Users1
   with key_validation_class = UTF8Type
   and comparator = AsciiType
   and column_metadata = [
     {column_name: timestamp1, validation_class: AsciiType},
     {column_name: symbol, validation_class: AsciiType},
     {column_name: Bid_Price, validation_class: AsciiType},
     {column_name: Ask_Price, validation_class: AsciiType}
   ];

 How do i get there from usersWriter.newRow(String.valueOf(lineNumber)); ?
 Thanks.
 thanks.



 On Fri, Oct 11, 2013 at 4:30 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 I am not able to get your meaning for string as row keys?

 Row key values will be of type key_validation_class only.

 On Fri, Oct 11, 2013 at 4:25 PM, ashish sanadhya 
 sanadhyaa...@gmail.comwrote:

 Hi vivek, key_validation_class=UTF8Type will do, but i certainly want
 strings as row keys, so will it work?


 On Fri, Oct 11, 2013 at 4:17 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 Also, please use ByteBufferUtils for byte conversions.


 On Fri, Oct 11, 2013 at 4:17 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 but i have changed my **key_validation_class=AsciiType** in order to
 make **string as row keys**

 why not key_validation_class=UTF8Type ?

 -Vivek


 On Fri, Oct 11, 2013 at 3:55 PM, ashish sanadhya 
 sanadhyaa...@gmail.com wrote:

 I have got the bulk loader working with key_validation_class=LexicalUUIDType
 for new rows with the help of this [code][1], but i have changed
 key_validation_class=AsciiType in order to make strings the row keys:

   create column family Users1
     with key_validation_class = AsciiType
     and comparator = AsciiType
     and column_metadata = [
       {column_name: timestamp1, validation_class: AsciiType},
       {column_name: symbol, validation_class: AsciiType},
       {column_name: Bid_Price, validation_class: AsciiType},
       {column_name: Ask_Price, validation_class: AsciiType}
     ];


 i have tried all possible changes to the code in order to make the row keys
 string-typed, but i am getting an error; even without usersWriter.newRow
 i am not able to write into the sstable:


   while ((line = reader.readLine()) != null)
   {
       if (entry.parse(line, lineNumber))
       {
           // usersWriter.newRow(uuid);
           usersWriter.newRow(String.valueOf(lineNumber));
           usersWriter.addColumn(bytes("symbol"), bytes(entry.symbol), timestamp);
           usersWriter.addColumn(bytes("timestamp1"), bytes(entry.timestamp1), timestamp);
           usersWriter.addColumn(bytes("Bid_Price"), bytes(entry.Bid_Price), timestamp);
           usersWriter.addColumn(bytes("Ask_Price"), bytes(entry.Ask_Price), timestamp);
       }
       lineNumber++;
   }

   getting an error; as expected, it only takes a ByteBuffer:

   usersWriter.newRow(String.valueOf(lineNumber));
                      ^
     required: ByteBuffer
     found: String
     reason: actual argument String cannot be converted to ByteBuffer by
             method invocation conversion

 Any help to make strings the row keys in the sstable for the above column
 family definition? Thanks.






   [1]:
 http://www.datastax.com/wp-content/uploads/2011/08/DataImportExample.java









Using cassandra-cli with Client-server encryption

2013-10-08 Thread Vivek Mishra
Hi,
I am trying to use cassandra-cli with client-server encryption enabled, but
somehow i am getting the handshake failure below:

org.apache.thrift.transport.TTransportException:
javax.net.ssl.SSLHandshakeException: Received fatal alert: handshake_failure
at
org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:147)
at
org.apache.thrift.transport.TFramedTransport.flush(TFramedTransport.java:156)
at org.apache.thrift.TServiceClient.sendBase(TServiceClient.java:65)
at
org.apache.cassandra.thrift.Cassandra$Client.send_describe_cluster_name(Cassandra.java:1095)
at
org.apache.cassandra.thrift.Cassandra$Client.describe_cluster_name(Cassandra.java:1088)
at org.apache.cassandra.cli.CliMain.connect(CliMain.java:147)
at org.apache.cassandra.cli.CliMain.main(CliMain.java:246)
Caused by: javax.net.ssl.SSLHandshakeException: Received fatal alert:
handshake_failure
at sun.security.ssl.Alerts.getSSLException(Alerts.java:192)
at sun.security.ssl.Alerts.getSSLException(Alerts.java:154)
at sun.security.ssl.SSLSocketImpl.recvAlert(SSLSocketImpl.java:1911)
at sun.security.ssl.SSLSocketImpl.readRecord(SSLSocketImpl.java:1027)
at
sun.security.ssl.SSLSocketImpl.performInitialHandshake(SSLSocketImpl.java:1262)
at sun.security.ssl.SSLSocketImpl.writeRecord(SSLSocketImpl.java:680)
at sun.security.ssl.AppOutputStream.write(AppOutputStream.java:85)
at
org.apache.thrift.transport.TIOStreamTransport.write(TIOStreamTransport.java:145)



I am trying to get it working on my local machine.

-Vivek


Re: Segmentation fault when trying to store into cassandra...

2013-09-30 Thread Vivek Mishra
Java version issue?
Are you using the Sun JDK or OpenJDK?

-Vivek


On Tue, Oct 1, 2013 at 6:16 AM, Krishna Chaitanya bnsk1990r...@gmail.comwrote:

 Hello,
 I modified a network probe which collects network packets so that it stores
 them into cassandra. There are many packets coming in; I capture the packets
 in the program and store them into cassandra. I am using the libQtCassandra
 library. The program crashes with a segmentation fault as soon as I run it.
 Can someone help as to what can go wrong here? Could there be a problem with
 the row/column keys, or is it some configuration parameter, or the speed at
 which the packets are coming? I am not able to figure it out. Thank you.

 --
 Regards,
 BNSK*.
 *



Collection type column

2013-09-27 Thread Vivek Mishra
Hi,
I understand that collection type columns are supported via CQL3 only. Can
anybody please share how such mutations actually happen?

I can see that the actual column value is clubbed with the column name in the
form of a ColumnGroupMap (type Collection), but i am not able to identify how
it works internally within Cassandra.

Any thoughts?

-Vivek
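
For reference, a small CQL illustration of the kind of mutation in question
(the schema below is made up):

CREATE TABLE users (id text PRIMARY KEY, emails set<text>);
UPDATE users SET emails = emails + {'a@example.com'} WHERE id = 'u1';

Internally, each collection element is written as its own cell: for a set the
element value is encoded into the cell name (the cell value is empty), which
is why collections need CQL3's composite column layout and do not show up as
plain columns over Thrift.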


AssertionError: sstableloader

2013-09-19 Thread Vivek Mishra
Hi,
I am trying to use sstableloader to load some external data and am getting
the error below:
Established connection to initial hosts
Opening sstables and calculating sections to stream
Streaming relevant part of
/home/impadmin/source/Examples/data/Demo/Users/Demo-Users-ja-1-Data.db to [/
127.0.0.1]
progress: [/127.0.0.1 1/1 (100%)] [total: 100% - 0MB/s (avg: 0MB/s)]
Exception in thread STREAM-OUT-/127.0.0.1 java.lang.AssertionError:
Reference counter -1 for
/home/impadmin/source/Examples/data/Demo/Users/Demo-Users-ja-1-Data.db
    at org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1017)
    at org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:120)
    at org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
    at org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
    at org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
    at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
    at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
    at java.lang.Thread.run(Thread.java:722)


Any pointers?

-Vivek


Re: AssertionError: sstableloader

2013-09-19 Thread Vivek Mishra
More to add on this:

This is happening for column families created via CQL3 with collection type
columns and without WITH COMPACT STORAGE.


On Fri, Sep 20, 2013 at 12:51 AM, Yuki Morishita mor.y...@gmail.com wrote:

 Sounds like a bug.
 Would you mind filing JIRA at
 https://issues.apache.org/jira/browse/CASSANDRA?

 Thanks,

 On Thu, Sep 19, 2013 at 2:12 PM, Vivek Mishra mishra.v...@gmail.com
 wrote:
  Hi,
  I am trying to use sstableloader to load some external data and am getting
  the error below:
  Established connection to initial hosts
  Opening sstables and calculating sections to stream
  Streaming relevant part of
  /home/impadmin/source/Examples/data/Demo/Users/Demo-Users-ja-1-Data.db to
  [/127.0.0.1]
  progress: [/127.0.0.1 1/1 (100%)] [total: 100% - 0MB/s (avg: 0MB/s)]
  Exception in thread STREAM-OUT-/127.0.0.1 java.lang.AssertionError:
  Reference counter -1 for
  /home/impadmin/source/Examples/data/Demo/Users/Demo-Users-ja-1-Data.db
      at org.apache.cassandra.io.sstable.SSTableReader.releaseReference(SSTableReader.java:1017)
      at org.apache.cassandra.streaming.StreamWriter.write(StreamWriter.java:120)
      at org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:73)
      at org.apache.cassandra.streaming.messages.FileMessage$1.serialize(FileMessage.java:45)
      at org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:44)
      at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:384)
      at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:357)
      at java.lang.Thread.run(Thread.java:722)
 
 
  Any pointers?
 
  -Vivek



 --
 Yuki Morishita
  t:yukim (http://twitter.com/yukim)



Re: Versioning in cassandra

2013-09-03 Thread Vivek Mishra
Create a secondary index over parentid.
OR
Make it part of the clustering key.

-Vivek


On Tue, Sep 3, 2013 at 10:42 PM, dawood abdullah
muhammed.daw...@gmail.comwrote:

 Jan,

 The solution you gave works spot on, but there is one more requirement I
 forgot to mention. Following is my table structure

 CREATE TABLE file (
   id text,
   contenttype text,
   createdby text,
   createdtime timestamp,
   description text,
   name text,
   parentid text,
   version timestamp,
   PRIMARY KEY (id, version)

 ) WITH CLUSTERING ORDER BY (version DESC);


 The query (select * from file where id = 'xxx' limit 1;) provided solves
 the problem of finding the latest version of a file. But I have one more
 requirement: finding the latest versions of all files having parentid, say,
 'yyy'.

 Please suggest how can this query be achieved.

 Dawood



 On Tue, Sep 3, 2013 at 12:43 AM, dawood abdullah 
 muhammed.daw...@gmail.com wrote:

 In my case the version can be a timestamp as well. What do you suggest the
 version should be; do you see any problems if I keep the version as a
 counter / timestamp?


 On Tue, Sep 3, 2013 at 12:22 AM, Jan Algermissen 
 jan.algermis...@nordsc.com wrote:


 On 02.09.2013, at 20:44, dawood abdullah muhammed.daw...@gmail.com
 wrote:

  Requirement is like I have a column family say File
 
  create table file(id text primary key, fname text, version int,
 mimetype text, content text);
 
  Say, I have few records inserted, when I modify an existing record
 (content is modified) a new version needs to be created. As I need to have
 provision to revert to back any old version whenever required.
 

 So, can version be a timestamp? Or does it need to be an integer?

 In the former case, make use of C*'s ordering like so:

 CREATE TABLE file (
file_id text,
version timestamp,
fname text,

PRIMARY KEY (file_id,version)
 ) WITH CLUSTERING ORDER BY (version DESC);

 Get the latest file version with

 select * from file where file_id = 'xxx' limit 1;

 If it has to be an integer, use counter columns.

 Jan


  Regards,
  Dawood
 
 
  On Mon, Sep 2, 2013 at 10:47 PM, Jan Algermissen 
 jan.algermis...@nordsc.com wrote:
  Hi Dawood,
 
  On 02.09.2013, at 16:36, dawood abdullah muhammed.daw...@gmail.com
 wrote:
 
   Hi
   I have a requirement of versioning to be done in Cassandra.
  
   Following is my column family definition
  
   create table file_details(id text primary key, fname text, version
 int, mimetype text);
  
   I have a secondary index created on fname column.
  
   Whenever I do an insert for the same 'fname', the version should be
 incremented. And when I retrieve a row with fname it should return me the
 latest version row.
  
   Is there a better way to do in Cassandra? Please suggest what
 approach needs to be taken.
 
  Can you explain more about your use case?
 
  If the version need not be a small number, but could be a timestamp,
 you could make use of C*'s ordering feature , have the database set the new
 version as a timestamp and retrieve the latest one with a simple LIMIT 1
 query. (I'll explain more when this is an option for you).
 
  Jan
 
  P.S. Me being a REST/HTTP head, an alarm rings when I see 'version'
 next to 'mimetype' :-) What exactly are you versioning here? Maybe we can
 even change the situation from a functional POV?
 
 
  
   Regards,
  
   Dawood
  
  
  
  
 
 






Re: Versioning in cassandra

2013-09-03 Thread Vivek Mishra
My bad. I missed the latest version part.

-Vivek


On Tue, Sep 3, 2013 at 11:20 PM, dawood abdullah
muhammed.daw...@gmail.comwrote:

 I have tried both options, creating a secondary index and also adding
 parentid to the primary key, but I am getting all the files with parentid
 'yyy'. What I want is the latest version of each file for the combination of
 parentid and fileid. Say below are the records inserted in the file table:

 insert into file (id, parentid, version, contenttype, description, name)
 values ('f1', 'd1', '2011-03-04', 'pdf', 'f1 file', 'file1');
 insert into file (id, parentid, version, contenttype, description, name)
 values ('f1', 'd1', '2011-03-05', 'pdf', 'f1 file', 'file1');
 insert into file (id, parentid, version, contenttype, description, name)
 values ('f2', 'd1', '2011-03-05', 'pdf', 'f1 file', 'file1');
 insert into file (id, parentid, version, contenttype, description, name)
 values ('f2', 'd1', '2011-03-06', 'pdf', 'f1 file', 'file1');

 I want to write a query which returns me the second and last records and not
 the first and third records, because for the first and third records there
 exists a later version for the combination of id and parentid.

 I am confused whether this is achievable at all; please suggest.

 Dawood



 On Tue, Sep 3, 2013 at 10:58 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 create secondary index over parentid.
 OR
 make it part of clustering key

 -Vivek


 On Tue, Sep 3, 2013 at 10:42 PM, dawood abdullah 
 muhammed.daw...@gmail.com wrote:

 Jan,

 The solution you gave works spot on, but there is one more requirement I
 forgot to mention. Following is my table structure

 CREATE TABLE file (
   id text,
   contenttype text,
   createdby text,
   createdtime timestamp,
   description text,
   name text,
   parentid text,
   version timestamp,
   PRIMARY KEY (id, version)

 ) WITH CLUSTERING ORDER BY (version DESC);


 The query (select * from file where id = 'xxx' limit 1;) provided solves
 the problem of finding the latest version file. But I have one more
 requirement of finding all the latest version files having parentid say
 'yyy'.

 Please suggest how can this query be achieved.

 Dawood



 On Tue, Sep 3, 2013 at 12:43 AM, dawood abdullah 
 muhammed.daw...@gmail.com wrote:

 In my case version can be timestamp as well. What do you suggest
 version number to be, do you see any problems if I keep version as counter
 / timestamp ?


 On Tue, Sep 3, 2013 at 12:22 AM, Jan Algermissen 
 jan.algermis...@nordsc.com wrote:


 On 02.09.2013, at 20:44, dawood abdullah muhammed.daw...@gmail.com
 wrote:

  Requirement is like I have a column family say File
 
  create table file(id text primary key, fname text, version int,
 mimetype text, content text);
 
  Say, I have few records inserted, when I modify an existing record
 (content is modified) a new version needs to be created. As I need to have
 provision to revert to back any old version whenever required.
 

 So, can version be a timestamp? Or does it need to be an integer?

 In the former case, make use of C*'s ordering like so:

 CREATE TABLE file (
file_id text,
version timestamp,
fname text,

PRIMARY KEY (file_id,version)
 ) WITH CLUSTERING ORDER BY (version DESC);

 Get the latest file version with

 select * from file where file_id = 'xxx' limit 1;

 If it has to be an integer, use counter columns.

 Jan


  Regards,
  Dawood
 
 
  On Mon, Sep 2, 2013 at 10:47 PM, Jan Algermissen 
 jan.algermis...@nordsc.com wrote:
  Hi Dawood,
 
  On 02.09.2013, at 16:36, dawood abdullah muhammed.daw...@gmail.com
 wrote:
 
   Hi
   I have a requirement of versioning to be done in Cassandra.
  
   Following is my column family definition
  
   create table file_details(id text primary key, fname text, version
 int, mimetype text);
  
   I have a secondary index created on fname column.
  
   Whenever I do an insert for the same 'fname', the version should
 be incremented. And when I retrieve a row with fname it should return me
 the latest version row.
  
   Is there a better way to do in Cassandra? Please suggest what
 approach needs to be taken.
 
  Can you explain more about your use case?
 
  If the version need not be a small number, but could be a timestamp,
 you could make use of C*'s ordering feature , have the database set the 
 new
 version as a timestamp and retrieve the latest one with a simple LIMIT 1
 query. (I'll explain more when this is an option for you).
 
  Jan
 
  P.S. Me being a REST/HTTP head, an alarm rings when I see 'version'
 next to 'mimetype' :-) What exactly are you versioning here? Maybe we can
 even change the situation from a functional POV?
 
 
  
   Regards,
  
   Dawood
  
  
  
  
 
 








Re: Versioning in cassandra

2013-09-03 Thread Vivek Mishra
create table file (
    id text,
    parentid text,
    contenttype text,
    version timestamp,
    descr text,
    name text,
    PRIMARY KEY (id, version)
) WITH CLUSTERING ORDER BY (version DESC);

insert into file (id, parentid, version, contenttype, descr, name) values
('f2', 'd1', '2011-03-06', 'pdf', 'f2 file', 'file1');
insert into file (id, parentid, version, contenttype, descr, name) values
('f2', 'd1', '2011-03-05', 'pdf', 'f2 file', 'file1');
insert into file (id, parentid, version, contenttype, descr, name) values
('f1', 'd1', '2011-03-05', 'pdf', 'f1 file', 'file1');
insert into file (id, parentid, version, contenttype, descr, name) values
('f1', 'd1', '2011-03-04', 'pdf', 'f1 file', 'file1');
create index on file(parentid);


select * from file where id='f1' and parentid='d1' limit 1;

select * from file where parentid='d1' limit 1;


Will it work for you?
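
If a single query must return the latest version of every file under a
parent, one hedged alternative sketch (the second table below is
illustrative, not from this thread) is to maintain a table that holds only
the current version, overwritten on each insert:

create table file_latest (
    parentid text,
    id text,
    version timestamp,
    contenttype text,
    descr text,
    name text,
    PRIMARY KEY (parentid, id)
);

insert into file_latest (parentid, id, version, contenttype, descr, name)
values ('d1', 'f1', '2011-03-05', 'pdf', 'f1 file', 'file1');

select * from file_latest where parentid = 'd1';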

-Vivek




On Tue, Sep 3, 2013 at 11:29 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 My bad. I did miss out to read latest version part.

 -Vivek


 On Tue, Sep 3, 2013 at 11:20 PM, dawood abdullah 
 muhammed.daw...@gmail.com wrote:

 I have tried with both the options creating secondary index and also
 tried adding parentid to primary key, but I am getting all the files with
 parentid 'yyy', what I want is the latest version of file with the
 combination of parentid, fileid. Say below are the records inserted in the
 file table:

 insert into file (id, parentid, version, contenttype, description, name)
 values ('f1', 'd1', '2011-03-04', 'pdf', 'f1 file', 'file1');
 insert into file (id, parentid, version, contenttype, description, name)
 values ('f1', 'd1', '2011-03-05', 'pdf', 'f1 file', 'file1');
 insert into file (id, parentid, version, contenttype, description, name)
 values ('f2', 'd1', '2011-03-05', 'pdf', 'f1 file', 'file1');
 insert into file (id, parentid, version, contenttype, description, name)
 values ('f2', 'd1', '2011-03-06', 'pdf', 'f1 file', 'file1');

 I want to write a query which returns me second and last record and not
 the first and third record, because for the first and third record there
 exists a latest version, for the combination of id and parentid.

 I am confused If at all this is achievable, please suggest.

 Dawood



 On Tue, Sep 3, 2013 at 10:58 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 create secondary index over parentid.
 OR
 make it part of clustering key

 -Vivek


 On Tue, Sep 3, 2013 at 10:42 PM, dawood abdullah 
 muhammed.daw...@gmail.com wrote:

 Jan,

 The solution you gave works spot on, but there is one more requirement
 I forgot to mention. Following is my table structure

 CREATE TABLE file (
   id text,
   contenttype text,
   createdby text,
   createdtime timestamp,
   description text,
   name text,
   parentid text,
   version timestamp,
   PRIMARY KEY (id, version)

 ) WITH CLUSTERING ORDER BY (version DESC);


 The query (select * from file where id = 'xxx' limit 1;) provided
 solves the problem of finding the latest version file. But I have one more
 requirement of finding all the latest version files having parentid say
 'yyy'.

 Please suggest how can this query be achieved.

 Dawood



 On Tue, Sep 3, 2013 at 12:43 AM, dawood abdullah 
 muhammed.daw...@gmail.com wrote:

 In my case version can be timestamp as well. What do you suggest
 version number to be, do you see any problems if I keep version as counter
 / timestamp ?


 On Tue, Sep 3, 2013 at 12:22 AM, Jan Algermissen 
 jan.algermis...@nordsc.com wrote:


 On 02.09.2013, at 20:44, dawood abdullah muhammed.daw...@gmail.com
 wrote:

  Requirement is like I have a column family say File
 
  create table file(id text primary key, fname text, version int,
 mimetype text, content text);
 
  Say, I have few records inserted, when I modify an existing record
 (content is modified) a new version needs to be created. As I need to 
 have
 provision to revert to back any old version whenever required.
 

 So, can version be a timestamp? Or does it need to be an integer?

 In the former case, make use of C*'s ordering like so:

 CREATE TABLE file (
file_id text,
version timestamp,
fname text,

PRIMARY KEY (file_id,version)
 ) WITH CLUSTERING ORDER BY (version DESC);

 Get the latest file version with

 select * from file where file_id = 'xxx' limit 1;

 If it has to be an integer, use counter columns.

 Jan


  Regards,
  Dawood
 
 
  On Mon, Sep 2, 2013 at 10:47 PM, Jan Algermissen 
 jan.algermis...@nordsc.com wrote:
  Hi Dawood,
 
  On 02.09.2013, at 16:36, dawood abdullah muhammed.daw...@gmail.com
 wrote:
 
   Hi
   I have a requirement of versioning to be done in Cassandra.
  
   Following is my column family definition
  
   create table file_details(id text primary key, fname text,
 version int, mimetype text);
  
   I have a secondary index created on fname column.
  
   Whenever I do an insert for the same 'fname', the version should
 be incremented. And when I retrieve a row with fname it should return me
 the latest version

Fwd: {kundera-discuss} Kundera 2.7 released

2013-09-03 Thread Vivek Mishra
fyi.

-- Forwarded message --
From: Vivek Mishra vivek.mis...@impetus.co.in
Date: Wed, Sep 4, 2013 at 6:15 AM
Subject: {kundera-discuss} Kundera 2.7 released
To: kundera-disc...@googlegroups.com kundera-disc...@googlegroups.com


Hi All,

We are happy to announce the release of Kundera 2.7 .

Kundera is a JPA 2.0 compliant, object-datastore mapping library for NoSQL
datastores. The idea behind Kundera is to make working with NoSQL databases
drop-dead simple and fun. It currently supports Cassandra, HBase, MongoDB,
Redis, OracleNoSQL, Neo4j,ElasticSearch and relational databases.


Major Changes:

1) Support for pagination over Mongodb.
2) Added elastic search as datastore and fallback indexing mechanism.

Github Bug Fixes:

https://github.com/impetus-opensource/Kundera/issues/234
https://github.com/impetus-opensource/Kundera/issues/215
https://github.com/impetus-opensource/Kundera/issues/201
https://github.com/impetus-opensource/Kundera/issues/333
https://github.com/impetus-opensource/Kundera/issues/362
https://github.com/impetus-opensource/Kundera/issues/350
https://github.com/impetus-opensource/Kundera/issues/365

How to Download:
To download, use or contribute to Kundera, visit:
http://github.com/impetus-opensource/Kundera

Latest released tag version is 2.7 Kundera maven libraries are now
available at:
https://oss.sonatype.org/content/repositories/releases/com/impetus

Sample codes and examples for using Kundera can be found here:
https://github.com/impetus-opensource/Kundera/tree/trunk/kundera-tests

Survey/Feedback:
http://www.surveymonkey.com/s/BMB9PWG

Thank you all for your contributions and using Kundera!


Sincerely,
Kundera Team










Re: CQL Thrift

2013-08-30 Thread Vivek Mishra
And surprisingly, if I alter the table as:

alter table user add first_name text;
alter table user add last_name text;

It gives me back the columns with values, but still no indexes.

Thrift and CQL3 depend on the same storage engine. Do they really maintain
different metadata for the same column family?

-Vivek



On Fri, Aug 30, 2013 at 11:08 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 Hi,
 If I create a table with CQL3 as

 create table user(user_id text PRIMARY KEY, first_name text, last_name
 text, emailid text);

 and create index as:
 create index on user(first_name);

 then inserted some data as:
 insert into user(user_id,first_name,last_name,emailId)
 values('@mevivs','vivek','mishra','vivek.mis...@impetus.co.in');


 Then if update same column family using Cassandra-cli as:

 update column family user with key_validation_class='UTF8Type' and
 column_metadata=[{column_name:last_name, validation_class:'UTF8Type',
 index_type:KEYS},{column_name:first_name, validation_class:'UTF8Type',
 index_type:KEYS}];


 Now if i connect via cqlsh and explore user table, i can see column
 first_name,last_name are not part of table structure anymore. Here is the
 output:

 CREATE TABLE user (
   key text PRIMARY KEY
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};

 cqlsh:cql3usage select * from user;

  user_id
 -
  @mevivs





 I understand that, CQL3 and thrift interoperability is an issue. But this
 looks to me a very basic scenario.



 Any suggestions? Or If anybody can explain a reason behind this?

 -Vivek







Re: CQL Thrift

2013-08-30 Thread Vivek Mishra
Hi,
I understand that, but I want to understand the reason behind
such behavior. Is it because different metadata objects are maintained for
CQL3 and thrift?

Any suggestion?

-Vivek


On Fri, Aug 30, 2013 at 11:15 PM, Jon Haddad j...@jonhaddad.com wrote:

 If you're going to work with CQL, work with CQL.  If you're going to work
 with Thrift, work with Thrift.  Don't mix.

 On Aug 30, 2013, at 10:38 AM, Vivek Mishra mishra.v...@gmail.com wrote:

 Hi,
 If I create a table with CQL3 as

 create table user(user_id text PRIMARY KEY, first_name text, last_name
 text, emailid text);

 and create index as:
 create index on user(first_name);

 then inserted some data as:
 insert into user(user_id,first_name,last_name,emailId)
 values('@mevivs','vivek','mishra','vivek.mis...@impetus.co.in');


 Then if update same column family using Cassandra-cli as:

 update column family user with key_validation_class='UTF8Type' and
 column_metadata=[{column_name:last_name, validation_class:'UTF8Type',
 index_type:KEYS},{column_name:first_name, validation_class:'UTF8Type',
 index_type:KEYS}];


 Now if i connect via cqlsh and explore user table, i can see column
 first_name,last_name are not part of table structure anymore. Here is the
 output:

 CREATE TABLE user (
   key text PRIMARY KEY
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};

 cqlsh:cql3usage select * from user;

  user_id
 -
  @mevivs





 I understand that, CQL3 and thrift interoperability is an issue. But this
 looks to me a very basic scenario.



 Any suggestions? Or If anybody can explain a reason behind this?

 -Vivek








Re: CQL Thrift

2013-08-30 Thread Vivek Mishra
CQL is too limiting and negates the power of storing arbitrary data types
in dynamic columns.

I partly agree. You can always create a column family with key, column
and value, store any arbitrary column name in column and its
corresponding value in value. I find that much easier.

Coming back to the original question, I think the differentiator is how
the column metadata is treated in thrift and CQL3. What I do not
understand is: if two sets of metadata objects (CqlMetadata, CFDef) are
maintained for the same column family, why does updating one cause
trouble for the other?

-Vivek


On Fri, Aug 30, 2013 at 11:23 PM, Peter Lin wool...@gmail.com wrote:


 my bias perspective, I find the sweet spot is thrift for insert/update and
 CQL for select queries.

 CQL is too limiting and negates the power of storing arbitrary data types
 in dynamic columns.


 On Fri, Aug 30, 2013 at 1:45 PM, Jon Haddad j...@jonhaddad.com wrote:

 If you're going to work with CQL, work with CQL.  If you're going to work
 with Thrift, work with Thrift.  Don't mix.

 On Aug 30, 2013, at 10:38 AM, Vivek Mishra mishra.v...@gmail.com wrote:

 Hi,
 If I create a table with CQL3 as

 create table user(user_id text PRIMARY KEY, first_name text, last_name
 text, emailid text);

 and create index as:
 create index on user(first_name);

 then inserted some data as:
 insert into user(user_id,first_name,last_name,emailId)
 values('@mevivs','vivek','mishra','vivek.mis...@impetus.co.in');


 Then if update same column family using Cassandra-cli as:

 update column family user with key_validation_class='UTF8Type' and
 column_metadata=[{column_name:last_name, validation_class:'UTF8Type',
 index_type:KEYS},{column_name:first_name, validation_class:'UTF8Type',
 index_type:KEYS}];


 Now if i connect via cqlsh and explore user table, i can see column
 first_name,last_name are not part of table structure anymore. Here is the
 output:

 CREATE TABLE user (
   key text PRIMARY KEY
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};

 cqlsh:cql3usage select * from user;

  user_id
 -
  @mevivs





 I understand that, CQL3 and thrift interoperability is an issue. But this
 looks to me a very basic scenario.



 Any suggestions? Or If anybody can explain a reason behind this?

 -Vivek









Re: CQL Thrift

2013-08-30 Thread Vivek Mishra
True for newly built platform(s), but what about existing apps built
using thrift? As per
http://www.datastax.com/dev/blog/thrift-to-cql3 it
should be easy.

I am just curious to understand the real reason behind such behavior.

-Vivek



On Fri, Aug 30, 2013 at 11:28 PM, Jon Haddad j...@jonhaddad.com wrote:

 Just curious - what do you need to do that requires thrift?  We've build
 our entire platform using CQL3 and we haven't hit any issues.

 On Aug 30, 2013, at 10:53 AM, Peter Lin wool...@gmail.com wrote:


 my bias perspective, I find the sweet spot is thrift for insert/update and
 CQL for select queries.

 CQL is too limiting and negates the power of storing arbitrary data types
 in dynamic columns.


 On Fri, Aug 30, 2013 at 1:45 PM, Jon Haddad j...@jonhaddad.com wrote:

 If you're going to work with CQL, work with CQL.  If you're going to work
 with Thrift, work with Thrift.  Don't mix.

 On Aug 30, 2013, at 10:38 AM, Vivek Mishra mishra.v...@gmail.com wrote:

 Hi,
 If I create a table with CQL3 as

 create table user(user_id text PRIMARY KEY, first_name text, last_name
 text, emailid text);

 and create index as:
 create index on user(first_name);

 then inserted some data as:
 insert into user(user_id,first_name,last_name,emailId)
 values('@mevivs','vivek','mishra','vivek.mis...@impetus.co.in');


 Then if update same column family using Cassandra-cli as:

 update column family user with key_validation_class='UTF8Type' and
 column_metadata=[{column_name:last_name, validation_class:'UTF8Type',
 index_type:KEYS},{column_name:first_name, validation_class:'UTF8Type',
 index_type:KEYS}];


 Now if i connect via cqlsh and explore user table, i can see column
 first_name,last_name are not part of table structure anymore. Here is the
 output:

 CREATE TABLE user (
   key text PRIMARY KEY
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};

 cqlsh:cql3usage select * from user;

  user_id
 -
  @mevivs





 I understand that, CQL3 and thrift interoperability is an issue. But this
 looks to me a very basic scenario.



 Any suggestions? Or If anybody can explain a reason behind this?

 -Vivek










Re: CQL Thrift

2013-08-30 Thread Vivek Mishra
If you're talking about the comparator: yes, that's a valid point, and
it's not possible with CQL3.

-Vivek


On Fri, Aug 30, 2013 at 11:31 PM, Peter Lin wool...@gmail.com wrote:


 I use dynamic columns all the time and they vary in type.

 With CQL you can define a default type, but you can't insert specific
 types of data for column name and value. It forces you to use all bytes or
 all strings, which would require converting it to other types.

 thrift is much more powerful in that respect.

 not everyone needs to take advantage of the full power of dynamic columns.


 On Fri, Aug 30, 2013 at 1:58 PM, Jon Haddad j...@jonhaddad.com wrote:

 Just curious - what do you need to do that requires thrift?  We've build
 our entire platform using CQL3 and we haven't hit any issues.

 On Aug 30, 2013, at 10:53 AM, Peter Lin wool...@gmail.com wrote:


 my bias perspective, I find the sweet spot is thrift for insert/update
 and CQL for select queries.

 CQL is too limiting and negates the power of storing arbitrary data types
 in dynamic columns.


 On Fri, Aug 30, 2013 at 1:45 PM, Jon Haddad j...@jonhaddad.com wrote:

 If you're going to work with CQL, work with CQL.  If you're going to
 work with Thrift, work with Thrift.  Don't mix.

 On Aug 30, 2013, at 10:38 AM, Vivek Mishra mishra.v...@gmail.com
 wrote:

 Hi,
 If I create a table with CQL3 as

 create table user(user_id text PRIMARY KEY, first_name text, last_name
 text, emailid text);

 and create index as:
 create index on user(first_name);

 then inserted some data as:
 insert into user(user_id,first_name,last_name,emailId)
 values('@mevivs','vivek','mishra','vivek.mis...@impetus.co.in');


 Then if update same column family using Cassandra-cli as:

 update column family user with key_validation_class='UTF8Type' and
 column_metadata=[{column_name:last_name, validation_class:'UTF8Type',
 index_type:KEYS},{column_name:first_name, validation_class:'UTF8Type',
 index_type:KEYS}];


 Now if i connect via cqlsh and explore user table, i can see column
 first_name,last_name are not part of table structure anymore. Here is the
 output:

 CREATE TABLE user (
   key text PRIMARY KEY
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};

 cqlsh:cql3usage select * from user;

  user_id
 -
  @mevivs





 I understand that, CQL3 and thrift interoperability is an issue. But
 this looks to me a very basic scenario.



 Any suggestions? Or If anybody can explain a reason behind this?

 -Vivek











Re: CQL Thrift

2013-08-30 Thread Vivek Mishra
Did you try to explore CQL3 collection support for the same? You can
definitely save on the number of rows with that; see the sketch below.

The point I am trying to make is that you can achieve it via CQL3
(Jonathan's blog:
http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows)

I agree that thrift may still have some valid points to prove, but
considering the latest development around new Cassandra features, I think
CQL3 is the path to follow.


-Vivek
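
A minimal sketch of the collections route mentioned above (hypothetical
table and values; this works when all dynamic names/values share one type,
here text; mixed-type dynamic columns would still need blob values or the
clustering-column pattern):

CREATE TABLE user_profile (
  user_id text PRIMARY KEY,
  attributes map<text, text>
);

UPDATE user_profile SET attributes['nickname'] = 'vivs'
  WHERE user_id = '@mevivs';

SELECT attributes FROM user_profile WHERE user_id = '@mevivs';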


On Sat, Aug 31, 2013 at 12:15 AM, Peter Lin wool...@gmail.com wrote:


 you could dynamically create new tables at runtime and insert rows into
 the new table, but is that better than using thrift and putting it into a
 regular dynamic column with the exact name type and value type?

 that would mean if there's 20 dynamic columns of different types, you'd
 have to execute 21 queries to rebuild the data. That's basically the same
 as using EVA tables in relational databases.

 Having used that approach in the past to build temporal databases, it
 doesn't scale well.



 On Fri, Aug 30, 2013 at 2:40 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 create a column family as:

 create table dynamicTable(key text, nameAsDouble double, valueAsBlob blob,
 PRIMARY KEY (key, nameAsDouble));

 insert into dynamicTable(key, nameAsDouble, valueAsBlob)
 values ('key', 102.211, textAsBlob('valueInBytes'));

 Do you think it will work in case the column names are double?

 -Vivek


 On Sat, Aug 31, 2013 at 12:03 AM, Peter Lin wool...@gmail.com wrote:


 In the interest of education and discussion.

 I didn't mean to say CQL3 doesn't support dynamic columns. The example
 from the page shows default type defined in the create statement.

 create column family data
 with key_validation_class=Int32Type
  and comparator=DateType
  and default_validation_class=FloatType;


 If I try to insert a dynamic column that uses double for column name and
 string for column value, it will throw an error. The kind of use case I'm
 talking about defines a minimum number of static columns. Most of the
 columns that are added at runtime are different name and value type. This
 is specific to my use case.

 Having said that, I believe it would be possible to provide that kind
 of feature in CQL, but the trade off is it deviates from SQL. The grammar
 would have to allow type declaration in the columns list and functions in
 the values. Something like

 insert into mytable (KEY, doubleType(newcol1), string(newcol2)) values
 ('abc123', some string, double(102.211))

 doubleType(newcol1) and string(newcol2) are dynamic columns.

 I know many people find thrift hard to grok and struggle with it, but
 I'm a firm believer in taking time to learn. Every developer should take
 time to read cassandra source code and the source code for the driver
 they're using.



 On Fri, Aug 30, 2013 at 2:18 PM, Jonathan Ellis jbel...@gmail.comwrote:


 http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows


 On Fri, Aug 30, 2013 at 12:53 PM, Peter Lin wool...@gmail.com wrote:


 my bias perspective, I find the sweet spot is thrift for insert/update
 and CQL for select queries.

 CQL is too limiting and negates the power of storing arbitrary data
 types in dynamic columns.


 On Fri, Aug 30, 2013 at 1:45 PM, Jon Haddad j...@jonhaddad.com wrote:

 If you're going to work with CQL, work with CQL.  If you're going to
 work with Thrift, work with Thrift.  Don't mix.

 On Aug 30, 2013, at 10:38 AM, Vivek Mishra mishra.v...@gmail.com
 wrote:

 Hi,
 If I create a table with CQL3 as

 create table user(user_id text PRIMARY KEY, first_name text,
 last_name text, emailid text);

 and create index as:
 create index on user(first_name);

 then inserted some data as:
 insert into user(user_id,first_name,last_name,emailId)
 values('@mevivs','vivek','mishra','vivek.mis...@impetus.co.in');


 Then if update same column family using Cassandra-cli as:

 update column family user with key_validation_class='UTF8Type' and
 column_metadata=[{column_name:last_name, validation_class:'UTF8Type',
 index_type:KEYS},{column_name:first_name, validation_class:'UTF8Type',
 index_type:KEYS}];


 Now if i connect via cqlsh and explore user table, i can see column
 first_name,last_name are not part of table structure anymore. Here is the
 output:

 CREATE TABLE user (
   key text PRIMARY KEY
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};

 cqlsh:cql3usage select * from user;

  user_id
 -
  @mevivs





 I understand that, CQL3 and thrift interoperability is an issue. But
 this looks to me a very basic scenario.



 Any suggestions? Or If anybody can explain a reason behind this?

 -Vivek

Re: CQL Thrift

2013-08-30 Thread Vivek Mishra
@lhazlewood

https://issues.apache.org/jira/browse/CASSANDRA-5959

BEGIN BATCH
  -- multiple INSERT statements
APPLY BATCH;

It doesn't work for you?

-Vivek
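
For the record, a concrete shape of that batch (hypothetical table and
values; both inserts land in the same partition 'row1', i.e. the same
wide row):

CREATE TABLE events (
  key text,
  ts int,
  payload text,
  PRIMARY KEY (key, ts)
);

BEGIN BATCH
  INSERT INTO events (key, ts, payload) VALUES ('row1', 1, 'a');
  INSERT INTO events (key, ts, payload) VALUES ('row1', 2, 'b');
APPLY BATCH;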
On Sat, Aug 31, 2013 at 12:21 AM, Les Hazlewood lhazlew...@apache.orgwrote:

 On Fri, Aug 30, 2013 at 10:58 AM, Jon Haddad j...@jonhaddad.com wrote:

 Just curious - what do you need to do that requires thrift?  We've build
 our entire platform using CQL3 and we haven't hit any issues.


 Here's one thing: If you're using wide rows and you want to do anything
 other than just append individual columns to the row, then CQL3 (as it
 functions currently) is way too slow.

 I just created the following Jira issue 5 minutes ago because we've been
 fighting with this issue for the last 2 days. Our workaround was to swap
 out CQL3 + DataStax Java Driver in favor of Astyanax for this particular
 use case:

 https://issues.apache.org/jira/browse/CASSANDRA-5959

 Cheers,

 --
 Les Hazlewood | @lhazlewood
 CTO, Stormpath | http://stormpath.com | @goStormpath | 888.391.5282



CQL Thrift

2013-08-30 Thread Vivek Mishra
Hi,
If I create a table with CQL3 as

create table user(user_id text PRIMARY KEY, first_name text, last_name
text, emailid text);

and create index as:
create index on user(first_name);

then inserted some data as:
insert into user(user_id,first_name,last_name,emailId)
values('@mevivs','vivek','mishra','vivek.mis...@impetus.co.in');


Then I update the same column family using cassandra-cli as:

update column family user with key_validation_class='UTF8Type' and
column_metadata=[{column_name:last_name, validation_class:'UTF8Type',
index_type:KEYS},{column_name:first_name, validation_class:'UTF8Type',
index_type:KEYS}];


Now if I connect via cqlsh and explore the user table, I can see that the
columns first_name and last_name are not part of the table structure
anymore. Here is the output:

CREATE TABLE user (
  key text PRIMARY KEY
) WITH
  bloom_filter_fp_chance=0.01 AND
  caching='KEYS_ONLY' AND
  comment='' AND
  dclocal_read_repair_chance=0.00 AND
  gc_grace_seconds=864000 AND
  read_repair_chance=0.10 AND
  replicate_on_write='true' AND
  populate_io_cache_on_flush='false' AND
  compaction={'class': 'SizeTieredCompactionStrategy'} AND
  compression={'sstable_compression': 'SnappyCompressor'};

cqlsh:cql3usage select * from user;

 user_id
-
 @mevivs





I understand that CQL3 and thrift interoperability is an issue. But this
looks to me like a very basic scenario.



Any suggestions? Or can anybody explain the reason behind this?

-Vivek


Re: CQL Thrift

2013-08-30 Thread Vivek Mishra
create a column family as:

create table dynamicTable(key text, nameAsDouble double, valueAsBlob blob,
PRIMARY KEY (key, nameAsDouble));

insert into dynamicTable(key, nameAsDouble, valueAsBlob)
values ('key', 102.211, textAsBlob('valueInBytes'));

Do you think it will work in case the column names are double?

-Vivek


On Sat, Aug 31, 2013 at 12:03 AM, Peter Lin wool...@gmail.com wrote:


 In the interest of education and discussion.

 I didn't mean to say CQL3 doesn't support dynamic columns. The example
 from the page shows default type defined in the create statement.

 create column family data
 with key_validation_class=Int32Type
  and comparator=DateType
  and default_validation_class=FloatType;


 If I try to insert a dynamic column that uses double for column name and
 string for column value, it will throw an error. The kind of use case I'm
 talking about defines a minimum number of static columns. Most of the
 columns that are added at runtime are different name and value type. This
 is specific to my use case.

 Having said that, I believe it would be possible to provide that kind of
 feature in CQL, but the trade off is it deviates from SQL. The grammar
 would have to allow type declaration in the columns list and functions in
 the values. Something like

 insert into mytable (KEY, doubleType(newcol1), string(newcol2)) values
 ('abc123', some string, double(102.211))

 doubleType(newcol1) and string(newcol2) are dynamic columns.

 I know many people find thrift hard to grok and struggle with it, but I'm
 a firm believer in taking time to learn. Every developer should take time
 to read cassandra source code and the source code for the driver they're
 using.



 On Fri, Aug 30, 2013 at 2:18 PM, Jonathan Ellis jbel...@gmail.com wrote:


 http://www.datastax.com/dev/blog/does-cql-support-dynamic-columns-wide-rows


 On Fri, Aug 30, 2013 at 12:53 PM, Peter Lin wool...@gmail.com wrote:


 my bias perspective, I find the sweet spot is thrift for insert/update
 and CQL for select queries.

 CQL is too limiting and negates the power of storing arbitrary data
 types in dynamic columns.


 On Fri, Aug 30, 2013 at 1:45 PM, Jon Haddad j...@jonhaddad.com wrote:

 If you're going to work with CQL, work with CQL.  If you're going to
 work with Thrift, work with Thrift.  Don't mix.

 On Aug 30, 2013, at 10:38 AM, Vivek Mishra mishra.v...@gmail.com
 wrote:

 Hi,
 If I create a table with CQL3 as

 create table user(user_id text PRIMARY KEY, first_name text, last_name
 text, emailid text);

 and create index as:
 create index on user(first_name);

 then inserted some data as:
 insert into user(user_id,first_name,last_name,emailId)
 values('@mevivs','vivek','mishra','vivek.mis...@impetus.co.in');


 Then if update same column family using Cassandra-cli as:

 update column family user with key_validation_class='UTF8Type' and
 column_metadata=[{column_name:last_name, validation_class:'UTF8Type',
 index_type:KEYS},{column_name:first_name, validation_class:'UTF8Type',
 index_type:KEYS}];


 Now if i connect via cqlsh and explore user table, i can see column
 first_name,last_name are not part of table structure anymore. Here is the
 output:

 CREATE TABLE user (
   key text PRIMARY KEY
 ) WITH
   bloom_filter_fp_chance=0.01 AND
   caching='KEYS_ONLY' AND
   comment='' AND
   dclocal_read_repair_chance=0.00 AND
   gc_grace_seconds=864000 AND
   read_repair_chance=0.10 AND
   replicate_on_write='true' AND
   populate_io_cache_on_flush='false' AND
   compaction={'class': 'SizeTieredCompactionStrategy'} AND
   compression={'sstable_compression': 'SnappyCompressor'};

 cqlsh:cql3usage select * from user;

  user_id
 -
  @mevivs





 I understand that, CQL3 and thrift interoperability is an issue. But
 this looks to me a very basic scenario.



 Any suggestions? Or If anybody can explain a reason behind this?

 -Vivek









 --
 Jonathan Ellis
 Project Chair, Apache Cassandra
 co-founder, http://www.datastax.com
 @spyced





Re: How to perform range queries efficiently?

2013-08-28 Thread Vivek Mishra
Create a column family with a CompositeType comparator (or a CQL3 PRIMARY
KEY) of (user_id, age, salary).

Then you will be able to use the eq operator over the partition key as
well as over the clustering keys; see the sketch below.

You may also take salary out of the clustering key (e.g. age, salary) and
put a secondary index on it instead.

Based on your query usage, I am sure you can opt for either a composite
key or mix a composite key with a secondary index.

Have a look at:
http://www.datastax.com/dev/blog/introduction-to-composite-columns-part-1

Hope it helps.


-Vivek
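
A minimal sketch of that layout (hypothetical table and values; with
user_id fixed, equality and range predicates on the clustering columns are
served in clustering order):

CREATE TABLE user_stats (
  user_id text,
  age int,
  salary bigint,
  PRIMARY KEY (user_id, age, salary)
);

-- eq on the partition key, range on the first clustering column
SELECT * FROM user_stats WHERE user_id = 'u1' AND age >= 25 AND age < 35;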


On Wed, Aug 28, 2013 at 5:49 PM, Sávio Teles savio.te...@lupa.inf.ufg.brwrote:

 I can populate again. We are modelling the data yet! Tks.


 2013/8/28 Vivek Mishra mishra.v...@gmail.com

 Just saw that you already have data populated, so i guess modifying for
 composite key may not work for you.

 -Vivek


 On Tue, Aug 27, 2013 at 11:55 PM, Sávio Teles 
 savio.te...@lupa.inf.ufg.br wrote:

 Vivek, using a composite key, how would be the query?


 2013/8/27 Vivek Mishra mishra.v...@gmail.com

 For such queries, looks like you may create a composite key as
 (user_id,age, salary).

 Too much indexing always kills(irrespective of RDBMS or NoSQL).
 Remember every search request on secondary indexes will be passed on each
 node in ring.

 -Vivek


 On Tue, Aug 27, 2013 at 11:11 PM, Sávio Teles 
 savio.te...@lupa.inf.ufg.br wrote:

 Use a database that is designed for efficient range queries? ;D


 Is there no way to do this with Cassandra? Like using Hive, Solr...


 2013/8/27 Robert Coli rc...@eventbrite.com

 On Fri, Aug 23, 2013 at 5:53 AM, Sávio Teles 
 savio.te...@lupa.inf.ufg.br wrote:

 I need to perform range query efficiently.

 ...

 This query takes a long time to run. Any ideas to perform it
 efficiently?


 Use a database that is designed for efficient range queries? ;D

 =Rob





 --
 Atenciosamente,
 Sávio S. Teles de Oliveira
 voice: +55 62 9136 6996
 http://br.linkedin.com/in/savioteles
  Mestrando em Ciências da Computação - UFG
 Arquiteto de Software
 Laboratory for Ubiquitous and Pervasive Applications (LUPA) - UFG





 --
 Atenciosamente,
 Sávio S. Teles de Oliveira
 voice: +55 62 9136 6996
 http://br.linkedin.com/in/savioteles
  Mestrando em Ciências da Computação - UFG
 Arquiteto de Software
 Laboratory for Ubiquitous and Pervasive Applications (LUPA) - UFG





 --
 Atenciosamente,
 Sávio S. Teles de Oliveira
 voice: +55 62 9136 6996
 http://br.linkedin.com/in/savioteles
  Mestrando em Ciências da Computação - UFG
 Arquiteto de Software
 Laboratory for Ubiquitous and Pervasive Applications (LUPA) - UFG



Re: How to perform range queries efficiently?

2013-08-27 Thread Vivek Mishra
For such queries, it looks like you may create a composite key as
(user_id, age, salary).

Too much indexing always kills (irrespective of RDBMS or NoSQL). Remember
that every search request on a secondary index will be passed to each node
in the ring.

-Vivek

On Tue, Aug 27, 2013 at 11:11 PM, Sávio Teles
savio.te...@lupa.inf.ufg.brwrote:

 Use a database that is designed for efficient range queries? ;D


 Is there no way to do this with Cassandra? Like using Hive, Solr...


 2013/8/27 Robert Coli rc...@eventbrite.com

 On Fri, Aug 23, 2013 at 5:53 AM, Sávio Teles savio.te...@lupa.inf.ufg.br
  wrote:

 I need to perform range query efficiently.

 ...

 This query takes a long time to run. Any ideas to perform it
 efficiently?


 Use a database that is designed for efficient range queries? ;D

 =Rob





 --
 Atenciosamente,
 Sávio S. Teles de Oliveira
 voice: +55 62 9136 6996
 http://br.linkedin.com/in/savioteles
  Mestrando em Ciências da Computação - UFG
 Arquiteto de Software
 Laboratory for Ubiquitous and Pervasive Applications (LUPA) - UFG



Re: How to perform range queries efficiently?

2013-08-27 Thread Vivek Mishra
Just saw that you already have data populated, so I guess moving to a
composite key may not work for you.

-Vivek


On Tue, Aug 27, 2013 at 11:55 PM, Sávio Teles
savio.te...@lupa.inf.ufg.brwrote:

 Vivek, using a composite key, how would be the query?


 2013/8/27 Vivek Mishra mishra.v...@gmail.com

 For such queries, looks like you may create a composite key as
 (user_id,age, salary).

 Too much indexing always kills(irrespective of RDBMS or NoSQL). Remember
 every search request on secondary indexes will be passed on each node in
 ring.

 -Vivek


 On Tue, Aug 27, 2013 at 11:11 PM, Sávio Teles 
 savio.te...@lupa.inf.ufg.br wrote:

 Use a database that is designed for efficient range queries? ;D


 Is there no way to do this with Cassandra? Like using Hive, Solr...


 2013/8/27 Robert Coli rc...@eventbrite.com

 On Fri, Aug 23, 2013 at 5:53 AM, Sávio Teles 
 savio.te...@lupa.inf.ufg.br wrote:

 I need to perform range query efficiently.

 ...

 This query takes a long time to run. Any ideas to perform it
 efficiently?


 Use a database that is designed for efficient range queries? ;D

 =Rob





 --
 Atenciosamente,
 Sávio S. Teles de Oliveira
 voice: +55 62 9136 6996
 http://br.linkedin.com/in/savioteles
  Mestrando em Ciências da Computação - UFG
 Arquiteto de Software
 Laboratory for Ubiquitous and Pervasive Applications (LUPA) - UFG





 --
 Atenciosamente,
 Sávio S. Teles de Oliveira
 voice: +55 62 9136 6996
 http://br.linkedin.com/in/savioteles
  Mestrando em Ciências da Computação - UFG
 Arquiteto de Software
 Laboratory for Ubiquitous and Pervasive Applications (LUPA) - UFG



Issue with CQLsh

2013-08-25 Thread Vivek Mishra
Hi,
I have created a column family using Cassandra-cli as:

create column family default;

and then inserted some record as:

set default[1]['type']='bytes';

Then I tried to alter the table via cqlsh as:

alter table default alter key type text;  // it works

alter table default alter column1 type text; // it goes for a toss

Surprisingly, any command after that simply hangs and I need to reset the
connection.


Any suggestions?


Re: Issue with CQLsh

2013-08-25 Thread Vivek Mishra
cassandra 1.2.4


On Mon, Aug 26, 2013 at 2:51 AM, Nate McCall n...@thelastpickle.com wrote:

 What version of cassandra are you using?


 On Sun, Aug 25, 2013 at 8:34 AM, Vivek Mishra mishra.v...@gmail.comwrote:

 Hi,
 I have created a column family using Cassandra-cli as:

 create column family default;

 and then inserted some record as:

 set default[1]['type']='bytes';

 Then i tried to alter table it via cqlsh as:

 alter table default alter key type text;  // it works

 alter table default alter column1 type text; // it goes for a toss

 surprisingly any command after that, simple hangs and i need to reset
 connection.


 Any suggestions?








Re: Issue with CQLsh

2013-08-25 Thread Vivek Mishra
I understand that CQL-Thrift interoperability is an issue. For
applications which were built earlier (using thrift) there must be a way,
and it should at least give some error message, but it simply hangs
without any error.

-Vivek


On Mon, Aug 26, 2013 at 8:42 AM, Jonathan Haddad j...@jonhaddad.com wrote:

 My understanding is that if you want to use CQL, you should create your
 tables via CQL.  Mixing thrift calls w/ CQL seems like it's just asking for
 problems like this.


 On Sun, Aug 25, 2013 at 6:53 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 cassandra 1.2.4


 On Mon, Aug 26, 2013 at 2:51 AM, Nate McCall n...@thelastpickle.comwrote:

 What version of cassandra are you using?


 On Sun, Aug 25, 2013 at 8:34 AM, Vivek Mishra mishra.v...@gmail.comwrote:

 Hi,
 I have created a column family using Cassandra-cli as:

 create column family default;

 and then inserted some record as:

 set default[1]['type']='bytes';

 Then i tried to alter table it via cqlsh as:

 alter table default alter key type text;  // it works

 alter table default alter column1 type text; // it goes for a toss

 surprisingly any command after that, simple hangs and i need to reset
 connection.


 Any suggestions?









 --
 Jon Haddad
 http://www.rustyrazorblade.com
 skype: rustyrazorblade



CQLsh assume command

2013-08-24 Thread Vivek Mishra
Hi,
I am trying to get the CQL3 ASSUME command to work the way it does with
cassandra-cli.

In my example, I have created a table as:

create table default(id blob PRIMARY KEY);

Then, after connecting with cqlsh (CQL version 3), I executed:

assume default(id) values are text;

and then tried to insert a simple record as:

insert into default(id) values('1');

But I am still getting the error:
Bad Request: cannot parse '1' as hex bytes

Any suggestion on what I am doing incorrectly here?

-Vivek
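
For the record, a sketch that sidesteps the hex-parse error by giving the
blob column an actual blob value (textAsBlob is the CQL3 conversion
function; 0x31 is the hex literal for the ASCII character '1'):

insert into default(id) values (textAsBlob('1'));

-- or, equivalently, as a hex literal
insert into default(id) values (0x31);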


Re: InvalidRequestException(why:Not enough bytes to read value of component 0)

2013-07-18 Thread Vivek Mishra
+1 for Sylvain's answer.

This normally happens if the validation class for the column value(s) differs.

-Vivek
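
A sketch of the change Sylvain is pointing at, assuming Hector's
CompositeSerializer: pass the Composite itself as the column name instead
of its toString():

import me.prettyprint.cassandra.serializers.CompositeSerializer;

// the column name stays a Composite end to end; no string conversion
HCounterColumn<Composite> hColumn_ts =
    HFactory.createCounterColumn(ts, value, CompositeSerializer.get());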


On Thu, Jul 18, 2013 at 12:08 PM, Sylvain Lebresne sylv...@datastax.comwrote:

 I don't know Hector very much really, but I highly suspect that
 ts.toString() is wrong, since composite column names are not strings. So
 again, not a Hector expert, but I can't really see how converting the
 composite into string could work.

 --
 Sylvain


 On Wed, Jul 17, 2013 at 11:14 PM, Rahul Gupta rgu...@dekaresearch.comwrote:

 Getting error while trying to persist data in a Column Family having a
 CompositeType comparator.

 Using Cassandra ver 1.1.9.7
 Hector Core ver 1.1.5 API (which uses Thrift 1.1.10)

 Created Column Family using cassandra-cli:

 create column family event_counts
 with comparator = 'CompositeType(DateType,UTF8Type)'
 and key_validation_class = 'UUIDType'
 and default_validation_class = 'CounterColumnType';

 Persistence Code (sumLoad.java):

 import me.prettyprint.cassandra.serializers.StringSerializer;
 import me.prettyprint.cassandra.serializers.DateSerializer;
 import me.prettyprint.cassandra.service.CassandraHostConfigurator;
 import me.prettyprint.hector.api.Cluster;
 import me.prettyprint.hector.api.Keyspace;
 import me.prettyprint.hector.api.beans.Composite;
 import me.prettyprint.hector.api.beans.HCounterColumn;
 import me.prettyprint.hector.api.factory.HFactory;
 import me.prettyprint.hector.api.mutation.Mutator;
 import java.sql.Date;
 import java.util.logging.Level;
 import java.util.logging.Logger;

 public class sumLoad {

     final static Cluster cluster = HFactory.getOrCreateCluster("Dev",
             new CassandraHostConfigurator("100.10.0.6:9160"));
     final static Keyspace keyspace = HFactory.createKeyspace("Events", cluster);
     final static StringSerializer ss = StringSerializer.get();
     final static Logger LOGGER = Logger.getLogger(sumLoad.class.getName());

     private boolean storeCounts(String vKey, String counterCF, Date dateStr,
                                 String vStr, long value) {
         try {
             Mutator<String> m1 = HFactory.createMutator(keyspace, StringSerializer.get());

             Composite ts = new Composite();
             ts.addComponent(dateStr, DateSerializer.get());
             ts.addComponent(vStr, StringSerializer.get());
             // note: the composite is converted to a String here
             HCounterColumn<String> hColumn_ts =
                     HFactory.createCounterColumn(ts.toString(), value, StringSerializer.get());

             m1.insertCounter(vKey, counterCF, hColumn_ts);
             m1.execute();
             return true;
         } catch (Exception ex) {
             LOGGER.log(Level.WARNING, "Unable to store record", ex);
         }
         return false;
     }

     public static void main(String[] args) {
         Date vDate = new Date(0);
         sumLoad SumLoad = new sumLoad();
         SumLoad.storeCounts("b9874e3e-4a0e-4e60-ae23-c3f1e575af93",
                 "event_counts", vDate, "StoreThisString", 673);
     }
 }

 Error:

 [main] INFO me.prettyprint.cassandra.service.JmxMonitor - Registering JMX
 me.prettyprint.cassandra.service_Dev:ServiceType=hector,MonitorType=hector

 Unable to store record
 me.prettyprint.hector.api.exceptions.HInvalidRequestException:
 InvalidRequestException(why:Not enough bytes to read value of component 0)

 Rahul Gupta

Re: Exception while writing compsite column names

2013-07-18 Thread Vivek Mishra
Looks like the validation class for the composite column value is
different from UTF8Type? Though the code suggests it is:
   composite.addComponent("TEXT1", StringSerializer.get());

Please validate.

-Vivek
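
For reference, a sketch of roughly how that thrift definition surfaces on
the CQL3 side (assumed mapping for a dense CF with a five-component UTF8
comparator and BytesType default validation; the comparator components
become clustering columns):

CREATE TABLE "CompositeColumnNameTest" (
  key text,
  column1 text,
  column2 text,
  column3 text,
  column4 text,
  column5 text,
  value blob,
  PRIMARY KEY (key, column1, column2, column3, column4, column5)
) WITH COMPACT STORAGE;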


On Thu, Jul 18, 2013 at 7:41 PM, anand_balara...@homedepot.com wrote:

  Hi



 I have an issue while inserting a composite column name into one of the
 Cassandra column families. Below is a detailed description of what I did
 and where I am stuck.

 Please let me know where I went wrong.



 Requirement:

 --

Rowkey- RowIdString

Column name   - TEXT1 : value1 : TEXT2 : value2 : TEXT3

Column value - value3



 Column family definition:

 ---

 create column family CompositeColumnNameTest

WITH
 comparator='CompositeType(UTF8Type,UTF8Type,UTF8Type,UTF8Type,UTF8Type)'

AND key_validation_class=UTF8Type

WITH compression_options={sstable_compression:SnappyCompressor,
 chunk_length_kb:64};



 Code:

 

 String RowIdString = "1234";

 Composite composite = new Composite();
 composite.addComponent("TEXT1", StringSerializer.get());
 composite.addComponent("value1", StringSerializer.get());
 composite.addComponent("TEXT2", StringSerializer.get());
 composite.addComponent("value3", StringSerializer.get());
 composite.addComponent("TEXT3", StringSerializer.get());

 Column column = new Column(composite.serialize());
 column.setValue("value3".getBytes());
 column.setTimestamp(System.currentTimeMillis());

 // push data to cassandra
 batchMutate.addInsertion(RowIdString, "CompositeColumnNameTest", column);
 keyspaceServiceImpl.batchMutate(batchMutate);



 Exception:

 -

 me.prettyprint.hector.api.exceptions.HInvalidRequestException:
 InvalidRequestException(why:Not enough bytes to read value of component 0)

at
 me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:45)

at
 me.prettyprint.cassandra.service.KeyspaceServiceImpl$1.execute(KeyspaceServiceImpl.java:97)

at
 me.prettyprint.cassandra.service.KeyspaceServiceImpl$1.execute(KeyspaceServiceImpl.java:90)

at
 me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:103)

at
 me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258)

at
 me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:131)





 Thanks in advance

 -Anand




Re: Exception while writing compsite column names

2013-07-18 Thread Vivek Mishra
Yes. Can you please share the output of describe keyspace for the
keyspace which contains CompositeColumnNameTest?
What is the datatype for the column values?

-Vivek


On Thu, Jul 18, 2013 at 9:17 PM, anand_balara...@homedepot.com wrote:

  I had been using StringSerializer.get() for all UTF8Type fields so
 far. I do not think I need to check the code.

 Do you suspect the column family definition?



 -Anand



 *From:* Vivek Mishra [mailto:mishra.v...@gmail.com]
 *Sent:* Thursday, July 18, 2013 11:29 AM
 *To:* user@cassandra.apache.org
 *Subject:* Re: Exception while writing compsite column names



 Looks like validation class for composite column value is different than
 UTF8Type? Though code suggests it is:

composite.addComponent(TEXT1, StringSerializer.get());



 Please validate.



 -Vivek



 On Thu, Jul 18, 2013 at 7:41 PM, anand_balara...@homedepot.com wrote:

 Hi



 I have an issue while inserting a composite column name into one of the
 Cassandra column families. Below is a detailed description of what I did
 and where I am stuck.

 Please let me know where I went wrong.



 Requirement:

 --

Rowkey- RowIdString

Column name   - TEXT1 : value1 : TEXT2 : value2 : TEXT3

Column value - value3



 Column family definition:

 ---

 create column family CompositeColumnNameTest

WITH
 comparator='CompositeType(UTF8Type,UTF8Type,UTF8Type,UTF8Type,UTF8Type)'

AND key_validation_class=UTF8Type

WITH compression_options={sstable_compression:SnappyCompressor,
 chunk_length_kb:64};



 Code:

 

 String RowIdString = "1234";

 Composite composite = new Composite();
 composite.addComponent("TEXT1", StringSerializer.get());
 composite.addComponent("value1", StringSerializer.get());
 composite.addComponent("TEXT2", StringSerializer.get());
 composite.addComponent("value3", StringSerializer.get());
 composite.addComponent("TEXT3", StringSerializer.get());

 Column column = new Column(composite.serialize());
 column.setValue("value3".getBytes());
 column.setTimestamp(System.currentTimeMillis());

 // push data to cassandra
 batchMutate.addInsertion(RowIdString, "CompositeColumnNameTest", column);
 keyspaceServiceImpl.batchMutate(batchMutate);



 Exception:

 -

 me.prettyprint.hector.api.exceptions.HInvalidRequestException:
 InvalidRequestException(why:Not enough bytes to read value of component 0)

at
 me.prettyprint.cassandra.service.ExceptionsTranslatorImpl.translate(ExceptionsTranslatorImpl.java:45)

at
 me.prettyprint.cassandra.service.KeyspaceServiceImpl$1.execute(KeyspaceServiceImpl.java:97)

at
 me.prettyprint.cassandra.service.KeyspaceServiceImpl$1.execute(KeyspaceServiceImpl.java:90)

at
 me.prettyprint.cassandra.service.Operation.executeAndSetResult(Operation.java:103)

at
 me.prettyprint.cassandra.connection.HConnectionManager.operateWithFailover(HConnectionManager.java:258)

at
 me.prettyprint.cassandra.service.KeyspaceServiceImpl.operateWithFailover(KeyspaceServiceImpl.java:131)





 Thanks in advance

 -Anand



Main method not found in class org.apache.cassandra.service.CassandraDaemon

2013-07-17 Thread Vivek Mishra
Error: Main method not found in class
org.apache.cassandra.service.CassandraDaemon, please define the main method
as:
   public static void main(String[] args)


Hi,
I am getting this error. Earlier everything was working fine for me when I
simply downloaded the tarball and ran the Cassandra server. Recently I did
an rpm package installation of Cassandra, which is working fine. But
somehow, when I try to run it via the originally extracted tar package, I
am getting:

*
xss =  -ea
-javaagent:/home/impadmin/software/apache-cassandra-1.2.4//lib/jamm-0.2.5.jar
-XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1024M -Xmx1024M
-Xmn256M -XX:+HeapDumpOnOutOfMemoryError -Xss180k
Error: Main method not found in class
org.apache.cassandra.service.CassandraDaemon, please define the main method
as:
   public static void main(String[] args)
*

I tried setting CASSANDRA_HOME directory, but no luck.

The error is a bit confusing. Any suggestions?

-Vivek


Re: Main method not found in class org.apache.cassandra.service.CassandraDaemon

2013-07-17 Thread Vivek Mishra
Any suggestions?
I am kind of stuck with this; otherwise I need to delete the rpm
installation to get it working.

-Vivek


On Wed, Jul 17, 2013 at 12:17 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 Error: Main method not found in class
 org.apache.cassandra.service.CassandraDaemon, please define the main method
 as:
public static void main(String[] args)
 

 Hi,
 I am getting this error. Earlier it was working fine for me, when i simply
 downloaded the tarball installation and ran cassandra server. Recently i
 did rpm package installation of Cassandra and which is working fine. But
 somehow when i try to run it via originally extracted tar package. i am
 getting:

 *
 xss =  -ea
 -javaagent:/home/impadmin/software/apache-cassandra-1.2.4//lib/jamm-0.2.5.jar
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1024M -Xmx1024M
 -Xmn256M -XX:+HeapDumpOnOutOfMemoryError -Xss180k
 Error: Main method not found in class
 org.apache.cassandra.service.CassandraDaemon, please define the main method
 as:
public static void main(String[] args)
 *

 I tried setting CASSANDRA_HOME directory, but no luck.

 Error is bit confusing, Any suggestions???

 -Vivek



Re: Main method not found in class org.apache.cassandra.service.CassandraDaemon

2013-07-17 Thread Vivek Mishra
@aaron
Thanks for your reply. I did have a look at the rpm-installed files:
1. /etc/alternatives/cassandra contains configuration files only,
and the .sh files are installed within the /usr/bin folder.

Even if I try to run from the extracted tarball folder as

/home/impadmin/apache-cassandra-1.2.4/bin/cassandra -f

I get the same error.

/home/impadmin/apache-cassandra-1.2.4/bin/cassandra -v

gives me 1.1.12, though it should give me 1.2.4.


-Vivek


On Wed, Jul 17, 2013 at 3:37 PM, aaron morton aa...@thelastpickle.comwrote:

 Something is messed up in your install.  Can you try scrubbing the install
 and restarting ?

 Cheers

 -
 Aaron Morton
 Cassandra Consultant
 New Zealand

 @aaronmorton
 http://www.thelastpickle.com

 On 17/07/2013, at 6:47 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 Error: Main method not found in class
 org.apache.cassandra.service.CassandraDaemon, please define the main method
 as:
public static void main(String[] args)
 

 Hi,
 I am getting this error. Earlier it was working fine for me, when i simply
 downloaded the tarball installation and ran cassandra server. Recently i
 did rpm package installation of Cassandra and which is working fine. But
 somehow when i try to run it via originally extracted tar package. i am
 getting:

 *
 xss =  -ea
 -javaagent:/home/impadmin/software/apache-cassandra-1.2.4//lib/jamm-0.2.5.jar
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1024M -Xmx1024M
 -Xmn256M -XX:+HeapDumpOnOutOfMemoryError -Xss180k
 Error: Main method not found in class
 org.apache.cassandra.service.CassandraDaemon, please define the main method
 as:
public static void main(String[] args)
 *

 I tried setting CASSANDRA_HOME directory, but no luck.

 Error is bit confusing, Any suggestions???

 -Vivek





Re: Main method not found in class org.apache.cassandra.service.CassandraDaemon

2013-07-17 Thread Vivek Mishra
Finally,
I had to delete all rpm-installed files to get this working; the files are:
/usr/share/cassandra
/etc/alternatives/cassandra
/usr/bin/cassandra
/usr/bin/cassandra.in.sh
/usr/bin/cassandra-cli

I still don't understand why it's giving me such a weird error:

Error: Main method not found in class
org.apache.cassandra.service.CassandraDaemon, please define the main method
as:
   public static void main(String[] args)
***

This is not informative at all and does not even help!

-Vivek


On Wed, Jul 17, 2013 at 3:49 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 @aaron
 Thanks for your reply. I did have a look rpm installed files
 1.  /etc/alternatives/cassandra, it contains configuration files only.
 and .sh files are installed within /usr/bin folder.

 Even if i try to run from extracted tar ball folder as

 /home/impadmin/apache-cassandra-1.2.4/bin/cassandra -f

 same error.

 /home/impadmin/apache-cassandra-1.2.4/bin/cassandra -v

 gives me 1.1.12 though it should give me 1.2.4


 -Vivek
 it gives me same error.


 On Wed, Jul 17, 2013 at 3:37 PM, aaron morton aa...@thelastpickle.comwrote:

 Something is messed up in your install.  Can you try scrubbing the
 install and restarting ?

 Cheers

-
 Aaron Morton
 Cassandra Consultant
 New Zealand

 @aaronmorton
 http://www.thelastpickle.com

 On 17/07/2013, at 6:47 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 Error: Main method not found in class
 org.apache.cassandra.service.CassandraDaemon, please define the main method
 as:
public static void main(String[] args)
 

 Hi,
 I am getting this error. Earlier it was working fine for me, when i
 simply downloaded the tarball installation and ran cassandra server.
 Recently i did rpm package installation of Cassandra and which is working
 fine. But somehow when i try to run it via originally extracted tar
 package. i am getting:

 *
 xss =  -ea
 -javaagent:/home/impadmin/software/apache-cassandra-1.2.4//lib/jamm-0.2.5.jar
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1024M -Xmx1024M
 -Xmn256M -XX:+HeapDumpOnOutOfMemoryError -Xss180k
 Error: Main method not found in class
 org.apache.cassandra.service.CassandraDaemon, please define the main method
 as:
public static void main(String[] args)
 *

 I tried setting CASSANDRA_HOME directory, but no luck.

 Error is bit confusing, Any suggestions???

 -Vivek






Re: Main method not found in class org.apache.cassandra.service.CassandraDaemon

2013-07-17 Thread Vivek Mishra
Hi Brian,
Thanks for your response.
I think I did change CASSANDRA_HOME to point to the new directory.

-Vivek


On Wed, Jul 17, 2013 at 7:03 PM, Brian O'Neill b...@alumni.brown.eduwrote:

 Vivek,

 The location of CassandraDaemon changed between versions.  (from
 org.apache.cassandra.thrift to org.apache.cassandra.service)

 It is likely that the start scripts are picking up the old version on the
 classpath, which results in the main method not being found.

 Do you have CASSANDRA_HOME set?  I believe the start scripts will use
 that.  Perhaps you have that set and pointed to the older 1.1.X version?

 -brian


 On Wed, Jul 17, 2013 at 8:31 AM, Vivek Mishra mishra.v...@gmail.comwrote:

 Finally,
 i have to delete all rpm installed files to get this working, folders are:
 /usr/share/cassandra
 /etc/alternatives/cassandra
 /usr/bin/cassandra
 /usr/bin/cassandra.in.sh
 /usr/bin/cassandra-cli

 Still don't understand why it's giving me such weird error:
 
 Error: Main method not found in class
 org.apache.cassandra.service.CassandraDaemon, please define the main method
 as:
public static void main(String[] args)
 ***

 This is not informative at all and does not even Help!

 -Vivek


 On Wed, Jul 17, 2013 at 3:49 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 @aaron
 Thanks for your reply. I did have a look rpm installed files
 1.  /etc/alternatives/cassandra, it contains configuration files only.
 and .sh files are installed within /usr/bin folder.

 Even if i try to run from extracted tar ball folder as

 /home/impadmin/apache-cassandra-1.2.4/bin/cassandra -f

 same error.

 /home/impadmin/apache-cassandra-1.2.4/bin/cassandra -v

 gives me 1.1.12 though it should give me 1.2.4


 -Vivek
 it gives me same error.


 On Wed, Jul 17, 2013 at 3:37 PM, aaron morton 
 aa...@thelastpickle.comwrote:

 Something is messed up in your install.  Can you try scrubbing the
 install and restarting ?

 Cheers

-
 Aaron Morton
 Cassandra Consultant
 New Zealand

 @aaronmorton
 http://www.thelastpickle.com

 On 17/07/2013, at 6:47 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 Error: Main method not found in class
 org.apache.cassandra.service.CassandraDaemon, please define the main method
 as:
public static void main(String[] args)
 

 Hi,
 I am getting this error. Earlier it was working fine for me, when i
 simply downloaded the tarball installation and ran cassandra server.
 Recently i did rpm package installation of Cassandra and which is working
 fine. But somehow when i try to run it via originally extracted tar
 package. i am getting:

 *
 xss =  -ea
 -javaagent:/home/impadmin/software/apache-cassandra-1.2.4//lib/jamm-0.2.5.jar
 -XX:+UseThreadPriorities -XX:ThreadPriorityPolicy=42 -Xms1024M -Xmx1024M
 -Xmn256M -XX:+HeapDumpOnOutOfMemoryError -Xss180k
 Error: Main method not found in class
 org.apache.cassandra.service.CassandraDaemon, please define the main method
 as:
public static void main(String[] args)
 *

 I tried setting CASSANDRA_HOME directory, but no luck.

 Error is bit confusing, Any suggestions???

 -Vivek







 --
 Brian ONeill
 Chief Architect, Health Market Science (http://healthmarketscience.com)
 mobile:215.588.6024
 blog: http://brianoneill.blogspot.com/
 twitter: @boneill42



Error: Main method not found in class org.apache.cassandra.service.CassandraDaemon

2013-07-12 Thread Vivek Mishra
Earlier, everything was working fine, but now I am getting this strange
error.
Initially I was working via a tarball installation and then installed a
Cassandra rpm package.

Since then, I am getting
Error: Main method not found in class
org.apache.cassandra.service.CassandraDaemon, please define the main method
as:
   public static void main(String[] args)


when running from the tarball installation. I did try setting CASSANDRA_HOME as

CASSANDRA_HOME=/home/impadmin/software/apache-cassandra-1.2.4/

but no luck.

This error is quite confusing: how can a user define a main method within
the Cassandra source code?

-Vivek


Fwd: {kundera-discuss} Kundera 2.6 Released

2013-07-06 Thread Vivek Mishra
fyi
-Vivek
-- Forwarded message --
From: Amresh amresh.si...@impetus.co.in
Date: Sun, Jul 7, 2013 at 2:51 AM
Subject: {kundera-discuss} Kundera 2.6 Released
To: kundera-disc...@googlegroups.com


Hi All,

We are happy to announce the release of Kundera 2.6.

Kundera is a JPA 2.0 compliant, object-datastore mapping library for NoSQL
datastores. The idea behind Kundera is to make working with NoSQL databases
drop-dead simple and fun. It currently supports Cassandra, HBase, MongoDB,
Redis, OracleNoSQL, Neo4j and relational databases.

Major Changes:

1) Lazy fetching of relationships.
2) Multiple node support for Cassandra.
3) Pagination support for Cassandra and HBase


Github Bug Fixes:

https://github.com/impetus-opensource/Kundera/issues/313
https://github.com/impetus-opensource/Kundera/issues/285
https://github.com/impetus-opensource/Kundera/issues/280
https://github.com/impetus-opensource/Kundera/issues/277
https://github.com/impetus-opensource/Kundera/issues/252
https://github.com/impetus-opensource/Kundera/issues/239
https://github.com/impetus-opensource/Kundera/issues/236
https://github.com/impetus-opensource/Kundera/issues/234
https://github.com/impetus-opensource/Kundera/issues/230
https://github.com/impetus-opensource/Kundera/issues/217
https://github.com/impetus-opensource/Kundera/issues/169
https://github.com/impetus-opensource/Kundera/issues/180
https://github.com/impetus-opensource/Kundera/issues/246
https://github.com/impetus-opensource/Kundera/issues/312
https://github.com/impetus-opensource/Kundera/issues/297
https://github.com/impetus-opensource/Kundera/issues/293
https://github.com/impetus-opensource/Kundera/issues/291
https://github.com/impetus-opensource/Kundera/issues/242
https://github.com/impetus-opensource/Kundera/issues/205


*How to Download: *
To download, use or contribute to Kundera, visit:
http://github.com/impetus-opensource/Kundera

Latest released tag version is 2.6. Kundera maven libraries are now
available at:
https://oss.sonatype.org/content/repositories/releases/com/impetus

Sample codes and examples for using Kundera can be found here:
https://github.com/impetus-opensource/Kundera/tree/trunk/kundera-tests

*Survey/Feedback:*
http://www.surveymonkey.com/s/BMB9PWG

Thank you all for your contributions and using Kundera!


Sincerely,
Kundera Team

-- 
You received this message because you are subscribed to the Google Groups
kundera-discuss group.
To unsubscribe from this group and stop receiving emails from it, send an
email to kundera-discuss+unsubscr...@googlegroups.com.
For more options, visit https://groups.google.com/groups/opt_out.


Partitioner type

2013-07-04 Thread Vivek Mishra
Hi,
Is it possible to know the type of partitioner programmatically at runtime?

-Vivek


Re: Partitioner type

2013-07-04 Thread Vivek Mishra
Just saw the Thrift API's describe_partitioner() method.

Thanks for the quick suggestions.

-Vivek


On Thu, Jul 4, 2013 at 5:40 PM, Haithem Jarraya
haithem.jarr...@struq.comwrote:

 Yes, you can query the local CF in the system keyspace:

  select partitioner from system.local;


 H


 On 4 July 2013 13:02, Vivek Mishra mishra.v...@gmail.com wrote:

 Hi,
 Is it possible to know the type of partitioner programmatically at runtime?

 -Vivek
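
A minimal sketch of the Thrift route mentioned above, for later readers;
the host, port, and transport settings are assumptions, not from this
thread:

import org.apache.cassandra.thrift.Cassandra;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TFramedTransport;
import org.apache.thrift.transport.TSocket;

public class PartitionerCheck {
    public static void main(String[] args) throws Exception {
        // Assumes a node listening on the default Thrift port 9160.
        TFramedTransport transport =
                new TFramedTransport(new TSocket("localhost", 9160));
        Cassandra.Client client =
                new Cassandra.Client(new TBinaryProtocol(transport));
        transport.open();
        // describe_partitioner() returns the fully qualified class name,
        // e.g. org.apache.cassandra.dht.Murmur3Partitioner
        System.out.println(client.describe_partitioner());
        transport.close();
    }
}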





Kundera 2.5 released

2013-04-29 Thread Vivek Mishra
Hi All,



We are happy to announce the release of Kundera 2.5.



Kundera is a JPA 2.0 compliant, object-datastore mapping library for NoSQL
datastores. The idea behind Kundera is to make working with NoSQL databases
drop-dead simple and fun. It currently supports Cassandra, HBase, MongoDB,
Redis, OracleNoSQL, Neo4j and relational databases.



*Major Changes:*


 1) Support for OracleNoSQL (
http://www.oracle.com/technetwork/products/nosqldb/overview/index.html).
See https://github.com/impetus-opensource/Kundera/wiki/Kundera-OracleNoSQL.



*[Please use the Oracle NoSQL jars from the Oracle NoSQL distribution at
http://download.oracle.com/otn-pub/otn_software/nosql-database/kv-ce-2.0.26.zip.
For the convenience of those who want to build Kundera from source we have
additionally placed the jars at
http://kundera.googlecode.com/svn/maven2/maven-missing-resources/]*



2) CQL 3.0 interoperability with thrift.

3) Performance fixes.





*Github Bug Fixes:*


https://github.com/impetus-opensource/Kundera/issues/240

https://github.com/impetus-opensource/Kundera/issues/232

https://github.com/impetus-opensource/Kundera/issues/231

https://github.com/impetus-opensource/Kundera/issues/230

https://github.com/impetus-opensource/Kundera/issues/226

https://github.com/impetus-opensource/Kundera/issues/221

https://github.com/impetus-opensource/Kundera/issues/218

https://github.com/impetus-opensource/Kundera/issues/214

https://github.com/impetus-opensource/Kundera/issues/209

https://github.com/impetus-opensource/Kundera/issues/207

https://github.com/impetus-opensource/Kundera/issues/196

https://github.com/impetus-opensource/Kundera/issues/193

https://github.com/impetus-opensource/Kundera/issues/190

https://github.com/impetus-opensource/Kundera/issues/188

https://github.com/impetus-opensource/Kundera/issues/182

https://github.com/impetus-opensource/Kundera/issues/181


*How to Download:*

 To download, use or contribute to Kundera, visit:

http://github.com/impetus-opensource/Kundera



Latest released tag version is 2.5. Kundera maven libraries are now
available at:
https://oss.sonatype.org/content/repositories/releases/com/impetus



Sample codes and examples for using Kundera can be found here:

http://github.com/impetus-opensource/Kundera-Examples

and

https://github.com/impetus-opensource/Kundera/tree/trunk/kundera-tests



*Survey/Feedback:*

http://www.surveymonkey.com/s/BMB9PWG



Thank you all for your contributions and using Kundera!



Sincerely,

Kundera Team






Re: describe keyspace or column family query not working

2013-04-10 Thread Vivek Mishra
OK. Is a column family or keyspace created via cqlsh using CQL3 visible
via cassandra-cli or the Thrift API?

-Vivek


On Wed, Apr 10, 2013 at 9:23 PM, Tyler Hobbs ty...@datastax.com wrote:

 DESCRIBE is a cqlsh feature, not a part of the CQL language.


 On Wed, Apr 10, 2013 at 2:37 AM, Kuldeep Mishra 
 kuld.cs.mis...@gmail.comwrote:

 Hi ,
 I am trying to execute the following queries, but they are not working
 and throw an exception:

 QUERY:--
  Cassandra.Client client;
  client.execute_cql3_query(ByteBuffer.wrap("describe keyspace
 mykeyspace".getBytes(Constants.CHARSET_UTF8)), Compression.NONE,
 ConsistencyLevel.ONE);

  client.execute_cql3_query(ByteBuffer.wrap("describe table
 mytable".getBytes(Constants.CHARSET_UTF8)), Compression.NONE,
 ConsistencyLevel.ONE);

 but both queries give the following exception:

 STACK TRACE

 InvalidRequestException(why:line 1:0 no viable alternative at input
 'describe')
 at
 org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:37849)
 at
 org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
 at
 org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1562)
 at
 org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1547)

 Please help..


 Thanks and Regards
 Kuldeep






 --
 Thanks and Regards
 Kuldeep Kumar Mishra
 +919540965199




 --
 Tyler Hobbs
 DataStax http://datastax.com/
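
Since DESCRIBE lives in cqlsh, a Thrift client can read the same schema
metadata from the system keyspace instead. A minimal sketch, assuming the
client and Constants are as in the query snippet above, and that the target
keyspace is named mykeyspace (system tables as in Cassandra 1.2):

import java.nio.ByteBuffer;
import org.apache.cassandra.thrift.Compression;
import org.apache.cassandra.thrift.ConsistencyLevel;
import org.apache.cassandra.thrift.CqlResult;

// client built and opened as in the snippet above
String cql = "SELECT * FROM system.schema_columnfamilies "
        + "WHERE keyspace_name = 'mykeyspace'";
CqlResult result = client.execute_cql3_query(
        ByteBuffer.wrap(cql.getBytes(Constants.CHARSET_UTF8)),
        Compression.NONE, ConsistencyLevel.ONE);
// system.schema_keyspaces and system.schema_columns can be queried the
// same way.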



Apache Cassandra for Developers-Starter

2013-04-03 Thread Vivek Mishra
Hi,
Just wanted to share that recently i worked with Packt publishing to author
a quick Cassandra reference in form of a book. Here it is:
http://www.packtpub.com/apache-cassandra-for-developers/book


Sincerely,
-Vivek


Re: cql query not giving any result.

2013-03-18 Thread Vivek Mishra
If this is the case, why can't we restrict "key" as a keyword so that it
cannot be used as a column name?

-Vivek

On Mon, Mar 18, 2013 at 2:37 PM, Sylvain Lebresne sylv...@datastax.comwrote:

 CQL can't work correctly if 2 (CQL) columns have the same name. Now, to
 allow upgrades from Thrift, CQL does use some default names like "key" for
 the row key when there isn't anything else.

 Honestly I think the easiest workaround here is probably to disambiguate
 things manually. Typically, you could update the column family definition
 to set the key_alias (in CfDef) to some name that makes sense for you. This
 will end up being the name of the row key for CQL. You may also try issuing a
 RENAME from CQL to rename the row key, which should work. Typically
 something like ALTER KunderaExamples RENAME key TO rowKey.

 --
 Sylvain



 On Sat, Mar 16, 2013 at 4:39 AM, Vivek Mishra mishra.v...@gmail.comwrote:

 Any suggestions?
 -Vivek

 On Fri, Mar 15, 2013 at 5:20 PM, Vivek Mishra mishra.v...@gmail.comwrote:

OK. So it's a case where CQL returns the row key value as "key" and there is
also a column present with the name "key".

 Sounds like a bug?

 -Vivek


 On Fri, Mar 15, 2013 at 5:17 PM, Kuldeep Mishra 
 kuld.cs.mis...@gmail.com wrote:

 Hi Sylvain,
   I created it using the Thrift client; here is the column family creation
 script:

 Cassandra.Client client;
 CfDef user_Def = new CfDef();
 user_Def.name = "DOCTOR";
 user_Def.keyspace = "KunderaExamples";
 user_Def.setComparator_type("UTF8Type");
 user_Def.setDefault_validation_class("UTF8Type");
 user_Def.setKey_validation_class("UTF8Type");
 ColumnDef key = new
 ColumnDef(ByteBuffer.wrap("KEY".getBytes()), "UTF8Type");
 key.index_type = IndexType.KEYS;
 ColumnDef age = new
 ColumnDef(ByteBuffer.wrap("AGE".getBytes()), "UTF8Type");
 age.index_type = IndexType.KEYS;
 user_Def.addToColumn_metadata(key);
 user_Def.addToColumn_metadata(age);

 client.set_keyspace("KunderaExamples");
 client.system_add_column_family(user_Def);


 Thanks
 KK


 On Fri, Mar 15, 2013 at 4:24 PM, Sylvain Lebresne sylv...@datastax.com
  wrote:

 On Fri, Mar 15, 2013 at 11:43 AM, Kuldeep Mishra 
 kuld.cs.mis...@gmail.com wrote:

 Hi,
 Is it possible in Cassandra to make multiple columns with the same name?
 In this particular scenario I have two columns with the same name,
 "key":
 the first one is the row key and the second one is a column name.


 No, it shouldn't be possible, and that is your problem. How did you
 create that table?

 --
 Sylvain



 Thanks and Regards
 Kuldeep


 On Fri, Mar 15, 2013 at 4:05 PM, Kuldeep Mishra 
 kuld.cs.mis...@gmail.com wrote:


 Hi,
 The following CQL query is not returning any result:
 cqlsh:KunderaExamples> select * from DOCTOR where
 key='kuldeep';

I have enabled secondary indexes on both columns.

 Screen shot is attached

 Please help


 --
 Thanks and Regards
 Kuldeep Kumar Mishra
 +919540965199




 --
 Thanks and Regards
 Kuldeep Kumar Mishra
 +919540965199





 --
 Thanks and Regards
 Kuldeep Kumar Mishra
 +919540965199
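
A sketch of Sylvain's first workaround in the same Thrift style as the
snippet above: setting key_alias on the existing CfDef so the row key gets
a CQL name that no longer collides with the column named KEY. The alias
row_id is an invented example:

// user_Def is the CfDef built above; give the row key an explicit CQL name.
user_Def.setKey_alias(ByteBuffer.wrap("row_id".getBytes()));
client.set_keyspace("KunderaExamples");
client.system_update_column_family(user_Def);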







Re: cql query not giving any result.

2013-03-15 Thread Vivek Mishra
OK. So it's a case where CQL returns the row key value as "key" and there is
also a column present with the name "key".

Sounds like a bug?

-Vivek

On Fri, Mar 15, 2013 at 5:17 PM, Kuldeep Mishra kuld.cs.mis...@gmail.comwrote:

 Hi Sylvain,
   I created it using the Thrift client; here is the column family creation
 script:

 Cassandra.Client client;
 CfDef user_Def = new CfDef();
 user_Def.name = "DOCTOR";
 user_Def.keyspace = "KunderaExamples";
 user_Def.setComparator_type("UTF8Type");
 user_Def.setDefault_validation_class("UTF8Type");
 user_Def.setKey_validation_class("UTF8Type");
 ColumnDef key = new ColumnDef(ByteBuffer.wrap("KEY".getBytes()),
 "UTF8Type");
 key.index_type = IndexType.KEYS;
 ColumnDef age = new ColumnDef(ByteBuffer.wrap("AGE".getBytes()),
 "UTF8Type");
 age.index_type = IndexType.KEYS;
 user_Def.addToColumn_metadata(key);
 user_Def.addToColumn_metadata(age);

 client.set_keyspace("KunderaExamples");
 client.system_add_column_family(user_Def);


 Thanks
 KK


 On Fri, Mar 15, 2013 at 4:24 PM, Sylvain Lebresne sylv...@datastax.comwrote:

 On Fri, Mar 15, 2013 at 11:43 AM, Kuldeep Mishra 
 kuld.cs.mis...@gmail.com wrote:

 Hi,
 Is it possible in Cassandra to make multiple columns with the same name?
 In this particular scenario I have two columns with the same name, "key":
 the first one is the row key and the second one is a column name.


 No, it shouldn't be possible, and that is your problem. How did you
 create that table?

 --
 Sylvain



 Thanks and Regards
 Kuldeep


 On Fri, Mar 15, 2013 at 4:05 PM, Kuldeep Mishra 
 kuld.cs.mis...@gmail.com wrote:


 Hi,
 The following CQL query is not returning any result:
 cqlsh:KunderaExamples> select * from DOCTOR where key='kuldeep';

I have enabled secondary indexes on both columns.

 Screen shot is attached

 Please help


 --
 Thanks and Regards
 Kuldeep Kumar Mishra
 +919540965199




 --
 Thanks and Regards
 Kuldeep Kumar Mishra
 +919540965199





 --
 Thanks and Regards
 Kuldeep Kumar Mishra
 +919540965199



Re: cql query not giving any result.

2013-03-15 Thread Vivek Mishra
Any suggestions?
-Vivek

On Fri, Mar 15, 2013 at 5:20 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 OK. So it's a case where CQL returns the row key value as "key" and there is
 also a column present with the name "key".

 Sounds like a bug?

 -Vivek


 On Fri, Mar 15, 2013 at 5:17 PM, Kuldeep Mishra 
 kuld.cs.mis...@gmail.comwrote:

 Hi Sylvain,
   I created it using the Thrift client; here is the column family creation
 script:

 Cassandra.Client client;
 CfDef user_Def = new CfDef();
 user_Def.name = "DOCTOR";
 user_Def.keyspace = "KunderaExamples";
 user_Def.setComparator_type("UTF8Type");
 user_Def.setDefault_validation_class("UTF8Type");
 user_Def.setKey_validation_class("UTF8Type");
 ColumnDef key = new ColumnDef(ByteBuffer.wrap("KEY".getBytes()),
 "UTF8Type");
 key.index_type = IndexType.KEYS;
 ColumnDef age = new ColumnDef(ByteBuffer.wrap("AGE".getBytes()),
 "UTF8Type");
 age.index_type = IndexType.KEYS;
 user_Def.addToColumn_metadata(key);
 user_Def.addToColumn_metadata(age);

 client.set_keyspace("KunderaExamples");
 client.system_add_column_family(user_Def);


 Thanks
 KK


 On Fri, Mar 15, 2013 at 4:24 PM, Sylvain Lebresne 
 sylv...@datastax.comwrote:

 On Fri, Mar 15, 2013 at 11:43 AM, Kuldeep Mishra 
 kuld.cs.mis...@gmail.com wrote:

 Hi,
 Is it possible in Cassandra to make multiple columns with the same name?
 In this particular scenario I have two columns with the same name, "key":
 the first one is the row key and the second one is a column name.


 No, it shouldn't be possible, and that is your problem. How did you
 create that table?

 --
 Sylvain



 Thanks and Regards
 Kuldeep


 On Fri, Mar 15, 2013 at 4:05 PM, Kuldeep Mishra 
 kuld.cs.mis...@gmail.com wrote:


 Hi,
 The following CQL query is not returning any result:
 cqlsh:KunderaExamples> select * from DOCTOR where key='kuldeep';

I have enabled secondary indexes on both columns.

 Screen shot is attached

 Please help


 --
 Thanks and Regards
 Kuldeep Kumar Mishra
 +919540965199




 --
 Thanks and Regards
 Kuldeep Kumar Mishra
 +919540965199





 --
 Thanks and Regards
 Kuldeep Kumar Mishra
 +919540965199





Re: CQL query issue

2013-03-05 Thread Vivek Mishra
Thank you. I was able to solve this one.
If I try:

SELECT * FROM CompositeUser WHERE userId='mevivs' LIMIT 100 ALLOW
FILTERING

it works. Somehow I got confused by
http://www.datastax.com/docs/1.2/cql_cli/cql/SELECT, which states:

SELECT select_expression
  FROM keyspace_name.table_name
  WHERE clause AND clause ...
  ALLOW FILTERING
  LIMIT n
  ORDER BY compound_key_2 ASC | DESC

Is this an issue?

-Vivek
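
A minimal sketch of the order that works (LIMIT before ALLOW FILTERING)
when issued over Thrift; the client setup is assumed to match the stack
trace below:

String cql = "SELECT * FROM CompositeUser WHERE userId='mevivs' "
        + "LIMIT 100 ALLOW FILTERING";
client.execute_cql3_query(
        ByteBuffer.wrap(cql.getBytes(java.nio.charset.StandardCharsets.UTF_8)),
        Compression.NONE, ConsistencyLevel.ONE);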



On Tue, Mar 5, 2013 at 5:21 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 Hi,
 I am trying to execute a CQL3 query:

 SELECT * FROM CompositeUser WHERE userId='mevivs' ALLOW FILTERING
 LIMIT 100

 and am getting the error below:

 Caused by: InvalidRequestException(why:line 1:70 missing EOF at 'LIMIT')
 at
 org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:37849)
  at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
 at
 org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1562)
  at
 org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1547)


 Is there something incorrect in the syntax?



Re: CQL query issue

2013-03-05 Thread Vivek Mishra
Could somebody in the group please confirm whether this is an issue in the
documented SELECT syntax that needs to be rectified?

-Vivek

On Tue, Mar 5, 2013 at 5:31 PM, Vivek Mishra mishra.v...@gmail.com wrote:

 Thank you. I was able to solve this one.
 If I try:

 SELECT * FROM CompositeUser WHERE userId='mevivs' LIMIT 100 ALLOW
 FILTERING

 it works. Somehow I got confused by
 http://www.datastax.com/docs/1.2/cql_cli/cql/SELECT, which states:

 SELECT select_expression
   FROM keyspace_name.table_name
   WHERE clause AND clause ...
   ALLOW FILTERING
   LIMIT n
   ORDER BY compound_key_2 ASC | DESC

 Is this an issue?

 -Vivek



 On Tue, Mar 5, 2013 at 5:21 PM, Vivek Mishra mishra.v...@gmail.comwrote:

 Hi,
 I am trying to execute a CQL3 query:

 SELECT * FROM CompositeUser WHERE userId='mevivs' ALLOW FILTERING
 LIMIT 100

 and am getting the error below:

 Caused by: InvalidRequestException(why:line 1:70 missing EOF at 'LIMIT')
 at
 org.apache.cassandra.thrift.Cassandra$execute_cql3_query_result.read(Cassandra.java:37849)
  at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
 at
 org.apache.cassandra.thrift.Cassandra$Client.recv_execute_cql3_query(Cassandra.java:1562)
  at
 org.apache.cassandra.thrift.Cassandra$Client.execute_cql3_query(Cassandra.java:1547)


 Is there something incorrect in the syntax?





Re: Querying composite keys

2013-02-10 Thread Vivek Mishra
You can query over composite columns by:
1) The partition key.
2) The first part of the clustering key (using EQ operators).

Secondary indexes over the composite key components are not possible.

-Vivek
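
A hypothetical CQL3 illustration of those rules; the table and values are
invented for this example:

// With a table like:
//   CREATE TABLE bookings (hotel text, room int, booked_on timeuuid,
//                          guest text,
//                          PRIMARY KEY (hotel, room, booked_on));
// the partition key alone, or the partition key plus a prefix of the
// clustering key compared with =, works without extra indexes:
String ok1 = "SELECT * FROM bookings WHERE hotel = 'HotelName1'";
String ok2 = "SELECT * FROM bookings WHERE hotel = 'HotelName1' AND room = 1";
// Skipping the partition key is rejected:
String bad = "SELECT * FROM bookings WHERE room = 1";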
On Mon, Feb 11, 2013 at 12:06 PM, Rishabh Agrawal 
rishabh.agra...@impetus.co.in wrote:

  Hello



 I have keys and columns defined in the following fashion:





 HotelName1:RoomNum1

 HotelName2:RoomNum2

 HotelName3:RoomNum3

 Key1:TimeStamp:VersionNum











 Is there a way that I can query this schema by only ‘key’ or ‘HotelName’,
 i.e., querying using a part of the composite key and not the full key?





 Thanks and Regards

 Rishabh Agrawal






Re: Netflix/Astynax Client for Cassandra

2013-02-06 Thread Vivek Mishra
Kundera 2.3 is also upgraded for Cassandra 1.2 (except the CQL binary protocol).

-Vivek

On Thu, Feb 7, 2013 at 11:50 AM, Gabriel Ciuloaica gciuloa...@gmail.comwrote:

  Astyanax is not working with Cassandra 1.2.1. Only the java-driver is
 working well with both Cassandra 1.2 and 1.2.1.

 Cheers,
 Gabi

 On 2/7/13 8:16 AM, Michael Kjellman wrote:

 It's a really great library and definitely recommended by me and many who
 are reading this.

  And if you are just starting out on 1.2.1 with C* you might also want to
 evaluate https://github.com/datastax/java-driver and the new binary
 protocol.

  Best,
 michael

  From: Cassa L lcas...@gmail.com
 Reply-To: user@cassandra.apache.org user@cassandra.apache.org
 Date: Wednesday, February 6, 2013 10:13 PM
 To: user@cassandra.apache.org user@cassandra.apache.org
 Subject: Netflix/Astynax Client for Cassandra

   Hi,
  Has anyone used the Netflix/Astyanax Java client library for Cassandra? I have
 used Hector before and would like to evaluate Astyanax. Not sure how it is
 accepted in the Cassandra community. Any issues with it, or advantages? The API
 looks very clean and simple compared to Hector. Has anyone used it in
 production except Netflix themselves?

 Thanks
 LCassa
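
A minimal bootstrap sketch for evaluating it, based on the Astyanax
getting-started examples; the cluster name, pool name, seed, and keyspace
are assumptions:

import com.netflix.astyanax.AstyanaxContext;
import com.netflix.astyanax.Keyspace;
import com.netflix.astyanax.connectionpool.NodeDiscoveryType;
import com.netflix.astyanax.connectionpool.impl.ConnectionPoolConfigurationImpl;
import com.netflix.astyanax.connectionpool.impl.CountingConnectionPoolMonitor;
import com.netflix.astyanax.impl.AstyanaxConfigurationImpl;
import com.netflix.astyanax.thrift.ThriftFamilyFactory;

AstyanaxContext<Keyspace> context = new AstyanaxContext.Builder()
        .forCluster("TestCluster")
        .forKeyspace("myks")
        .withAstyanaxConfiguration(new AstyanaxConfigurationImpl()
                .setDiscoveryType(NodeDiscoveryType.RING_DESCRIBE))
        .withConnectionPoolConfiguration(
                new ConnectionPoolConfigurationImpl("myPool")
                        .setPort(9160)
                        .setMaxConnsPerHost(1)
                        .setSeeds("127.0.0.1:9160"))
        .withConnectionPoolMonitor(new CountingConnectionPoolMonitor())
        .buildKeyspace(ThriftFamilyFactory.getInstance());
context.start();
// Older Astyanax releases expose the keyspace via getEntity() instead.
Keyspace keyspace = context.getClient();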





Re: DataModel Question

2013-02-05 Thread Vivek Mishra
Avoid super columns. If you need sorted, wide rows, then go for composite
columns.

-Vivek
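
A hypothetical CQL3 sketch of that advice for the use case quoted below,
with one wide row per phone/day and the composite column parts becoming
clustering columns; all names are invented:

String ddl = "CREATE TABLE messages ("
        + " phone_day text,"        // e.g. '19876543456:05022013'
        + " msg_type text,"         // 'SMS' | 'MMS' | 'CHAT'
        + " message_id text,"
        + " created timeuuid,"
        + " body text,"
        + " PRIMARY KEY (phone_day, msg_type, message_id, created))";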

On Wed, Feb 6, 2013 at 7:09 AM, Kanwar Sangha kan...@mavenir.com wrote:

  Hi – We are designing a Cassandra-based storage for the following use
 cases:

 · Store SMS messages
 · Store MMS messages
 · Store chat history

 What would be the ideal way to design the data model for this kind of
 application? I am thinking along these lines:

 Row key: composite key [PhoneNum : Day]

 · Example: 19876543456:05022013

 Dynamic column families:

 · Composite column key for SMS [SMS:MessageId:TimeUUID]
 · Composite column key for MMS [MMS:MessageId:TimeUUID]
 · Composite column key for the user I am chatting with
 [UserId:198765432345] – this can have multiple values since each chat
 conversation can have many messages. Should this be a super column?

 198:05022013
 SMS::ttt
 SMS:xxx12:ttt
 MMS::ttt
 :19
 198:05022013

 1987888:05022013

 Thanks,

 Kanwar



Re: Multiple Data Center Clusters on Cassandra

2013-01-30 Thread Vivek Mishra
1. I want to replicate the whole data to another DC so that both DCs' nodes
have the complete data. With which topology is this possible?
  I think NetworkTopologyStrategy is best suited for such a configuration.
You may want to generate tokens for each data center accordingly.

2. If I need a backup, what is the command to take a snapshot of the cluster?
  You can always create snapshot/backup files (e.g. with nodetool snapshot)
and later use them for restoration.
{http://www.datastax.com/docs/1.0/operations/backup_restore}

3. I will use an internet connection with a VPN for traffic; in case of
disconnection, what will happen?
  Based on the configuration (e.g. hinted handoff, consistency level,
read repair), it should be fine.
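
For point 1, a sketch of a keyspace that keeps a full replica set in each
data center with NetworkTopologyStrategy (CQL3 syntax; the keyspace name,
DC names, and replica counts are assumptions that must match your snitch
configuration):

String cql = "CREATE KEYSPACE myks WITH replication = "
        + "{'class': 'NetworkTopologyStrategy', 'DC1': 2, 'DC2': 2}";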


On Wed, Jan 30, 2013 at 5:58 PM, adeel.ak...@panasiangroup.com wrote:

 Hi,

 I am running a 3-node Cassandra cluster with replication factor 2 in one DC.
 Now I need to run a multi-data-center cluster with Cassandra, and I have the
 following queries:

 1. I want to replicate the whole data to another DC so that both DCs'
 nodes have the complete data. With which topology is this possible?

 2. If I need a backup, what is the command to take a snapshot of the cluster?

 3. I will use an internet connection with a VPN for traffic; in case of
 disconnection, what will happen?

 Regards,

 Adeel





Re: Suggestion: Move some threads to the client-dev mailing list

2013-01-30 Thread Vivek Mishra
I totally agree.

-Vivek

On Wed, Jan 30, 2013 at 8:51 PM, Edward Capriolo edlinuxg...@gmail.comwrote:

 A good portion of the people and traffic on this list is questions about:

 1) Astyanax
 2) cassandra-jdbc
 3) the Cassandra native client
 4) pythondra / whatever

 With the exception of the native transport, which is only halfway part of
 Cassandra, none of these other client issues have much to do with core
 Cassandra at all. If someone authors a client library/driver/etc., they
 should be supporting it outside of the user@cassandra mailing list.

 My suggestion: at a minimum we should re-route these questions to client-dev
 or simply say, "If it is not part of core Cassandra, you are looking in the
 wrong place for support."

 Edward



Re: CQL binary protocol

2013-01-25 Thread Vivek Mishra
Thanks Sylvain. Should I refer to the org.apache.cassandra.transport package
for a code walkthrough?

-vivek

On Fri, Jan 25, 2013 at 2:51 PM, Sylvain Lebresne sylv...@datastax.comwrote:


 https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=doc/native_protocol.spec


 On Fri, Jan 25, 2013 at 10:15 AM, Vivek Mishra mishra.v...@gmail.comwrote:

 Hi,
I am looking for some sort of documentation around the CQL binary
protocol. Basically, I need some documentation about the Cassandra
native transport and its usage.


 -Vivek





Re: CQL binary protocol

2013-01-25 Thread Vivek Mishra
Any documentation for this?

-Vivek

On Fri, Jan 25, 2013 at 3:39 PM, Sylvain Lebresne sylv...@datastax.comwrote:

 On Fri, Jan 25, 2013 at 10:29 AM, Vivek Mishra mishra.v...@gmail.comwrote:

 Should I refer to the org.apache.cassandra.transport package for a code
 walkthrough?


 Yes, that's where the code is.

 --
 Sylvain




 -vivek


 On Fri, Jan 25, 2013 at 2:51 PM, Sylvain Lebresne 
 sylv...@datastax.comwrote:


 https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob_plain;f=doc/native_protocol.spec


 On Fri, Jan 25, 2013 at 10:15 AM, Vivek Mishra mishra.v...@gmail.comwrote:

 Hi,
 I am looking for some sort of documentation around the CQL binary
 protocol. Basically, I need some documentation about the Cassandra
 native transport and its usage.


 -Vivek






