select query returns wrong value if DESC option is used

2014-03-13 Thread Katsutoshi Nagaoka
Hi.

I am using Cassandra version 2.0.6. There is a case where a select query
returns the wrong result when the DESC option is used. My test procedure is as follows:

--
cqlsh:test> CREATE TABLE mytable (key int, range int, PRIMARY KEY (key, range));
cqlsh:test> INSERT INTO mytable (key, range) VALUES (0, 0);
cqlsh:test> SELECT * FROM mytable WHERE key = 0 AND range = 0;

 key | range
-----+-------
   0 |     0

(1 rows)

cqlsh:test> SELECT * FROM mytable WHERE key = 0 AND range = 0 ORDER BY range ASC;

 key | range
-----+-------
   0 |     0

(1 rows)

cqlsh:test> SELECT * FROM mytable WHERE key = 0 AND range = 0 ORDER BY range DESC;

(0 rows)
--

Why does the query return 0 rows when the DESC option is used? I expected the
same single row returned by the other queries. Has anyone seen a similar issue?
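The expectation here can be stated as a simple invariant: ORDER BY should only permute the matching rows, never change how many are returned. A minimal in-memory sketch in plain Java (purely illustrative, no Cassandra involved; all names are made up for this example):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.Comparator;
import java.util.List;

public class OrderBySanity {
    // Ordering only sorts the already-matched rows; the row set for
    // ASC and DESC must be identical.
    static List<Integer> select(List<Integer> matchingRows, boolean desc) {
        List<Integer> out = new ArrayList<>(matchingRows);
        out.sort(desc ? Comparator.reverseOrder() : Comparator.naturalOrder());
        return out;
    }

    public static void main(String[] args) {
        // WHERE key = 0 AND range = 0 matches exactly one row (range = 0).
        List<Integer> match = Collections.singletonList(0);
        System.out.println(select(match, false)); // [0]
        System.out.println(select(match, true));  // [0] -- same single row
    }
}
```

Under that invariant, the `(0 rows)` result for DESC cannot be correct for a query whose ASC form returns one row.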

Thanks,
Katsutoshi


Re: paging state will not work

2014-02-20 Thread Katsutoshi
Thank you for the reply. Added:
https://issues.apache.org/jira/browse/CASSANDRA-6748

Katsutoshi


2014-02-21 2:14 GMT+09:00 Sylvain Lebresne sylv...@datastax.com:

 That does sound like a bug. Would you mind opening a JIRA (
 https://issues.apache.org/jira/browse/CASSANDRA) ticket for it?


 On Thu, Feb 20, 2014 at 3:06 PM, Edward Capriolo edlinuxg...@gmail.com wrote:

 I would try a fetch size other than 1. Cassandra's slices are start-inclusive,
 so maybe that is a bug.


 On Tuesday, February 18, 2014, Katsutoshi nagapad.0...@gmail.com wrote:
  Hi.
 
  I am using Cassandra version 2.0.5. If null is explicitly set to a
 column, paging_state will not work. My test procedure is as follows:
 
  --
  Create a table and insert 10 records using cqlsh. The queries are as follows:
 
  cqlsh:test> CREATE TABLE mytable (id int, range int, value text, PRIMARY KEY (id, range));
  cqlsh:test> INSERT INTO mytable (id, range) VALUES (0, 0);
  cqlsh:test> INSERT INTO mytable (id, range) VALUES (0, 1);
  cqlsh:test> INSERT INTO mytable (id, range) VALUES (0, 2);
  cqlsh:test> INSERT INTO mytable (id, range) VALUES (0, 3);
  cqlsh:test> INSERT INTO mytable (id, range) VALUES (0, 4);
  cqlsh:test> INSERT INTO mytable (id, range, value) VALUES (0, 5, null);
  cqlsh:test> INSERT INTO mytable (id, range, value) VALUES (0, 6, null);
  cqlsh:test> INSERT INTO mytable (id, range, value) VALUES (0, 7, null);
  cqlsh:test> INSERT INTO mytable (id, range, value) VALUES (0, 8, null);
  cqlsh:test> INSERT INTO mytable (id, range, value) VALUES (0, 9, null);
 
  Select data using the DataStax driver. The pseudocode is as follows:
 
  Statement statement = QueryBuilder.select().from("mytable").setFetchSize(1);
  ResultSet rs = session.execute(statement);
  for (Row row : rs) {
      System.out.println(String.format("id=%s, range=%s, value=%s",
          row.getInt("id"), row.getInt("range"), row.getString("value")));
  }
 
  The result is as follows:
 
  id=0, range=0, value=null
  id=0, range=1, value=null
  id=0, range=2, value=null
  id=0, range=3, value=null
  id=0, range=4, value=null
  id=0, range=5, value=null
  id=0, range=7, value=null
  id=0, range=9, value=null
  --
 
  The result is 8 records although 10 were expected. Has anyone seen
 a similar issue?
 
  Thanks,
  Katsutoshi
 

 --
 Sorry this was sent from mobile. Will do less grammar and spell check
 than usual.
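The remark above about start-inclusive slices suggests one plausible failure mode, sketched here with a purely in-memory pager (illustrative only; this is not Cassandra's or the driver's actual code): if each page resumes inclusively at the last-seen key and the duplicated head row is then dropped, a fetch size of 1 degenerates to pages containing only the duplicate.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class InclusiveSliceSim {
    // Server-side page: rows >= from (a start-inclusive slice), capped at fetchSize.
    static List<Integer> slice(List<Integer> rows, int from, int fetchSize) {
        List<Integer> out = new ArrayList<>();
        for (int r : rows) {
            if (r >= from && out.size() < fetchSize) {
                out.add(r);
            }
        }
        return out;
    }

    // Client-side loop: resume at the last-seen key, dropping the duplicated head row.
    static List<Integer> fetchAll(List<Integer> rows, int fetchSize) {
        List<Integer> all = new ArrayList<>();
        List<Integer> page = slice(rows, Integer.MIN_VALUE, fetchSize);
        while (!page.isEmpty()) {
            all.addAll(page);
            int last = page.get(page.size() - 1);
            page = slice(rows, last, fetchSize);
            if (!page.isEmpty() && page.get(0) == last) {
                page.remove(0); // drop the row we already returned
            }
        }
        return all;
    }

    public static void main(String[] args) {
        List<Integer> rows = Arrays.asList(0, 1, 2, 3, 4, 5, 6, 7, 8, 9);
        // With fetchSize 1, every resumed page holds only the duplicate row,
        // so dedup empties it and iteration stops after a single row.
        System.out.println(fetchAll(rows, 1).size()); // 1
        System.out.println(fetchAll(rows, 3).size()); // 10
    }
}
```

This is only a model of why fetch size 1 is a stress case for inclusive-resume paging; the rows actually skipped in the report above (6 and 8) would need the real paging code to explain.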





paging state will not work

2014-02-18 Thread Katsutoshi
Hi.

I am using Cassandra version 2.0.5. If null is explicitly set to a column,
paging_state will not work. My test procedure is as follows:

--
Create a table and insert 10 records using cqlsh. The queries are as follows:

cqlsh:test> CREATE TABLE mytable (id int, range int, value text, PRIMARY KEY (id, range));
cqlsh:test> INSERT INTO mytable (id, range) VALUES (0, 0);
cqlsh:test> INSERT INTO mytable (id, range) VALUES (0, 1);
cqlsh:test> INSERT INTO mytable (id, range) VALUES (0, 2);
cqlsh:test> INSERT INTO mytable (id, range) VALUES (0, 3);
cqlsh:test> INSERT INTO mytable (id, range) VALUES (0, 4);
cqlsh:test> INSERT INTO mytable (id, range, value) VALUES (0, 5, null);
cqlsh:test> INSERT INTO mytable (id, range, value) VALUES (0, 6, null);
cqlsh:test> INSERT INTO mytable (id, range, value) VALUES (0, 7, null);
cqlsh:test> INSERT INTO mytable (id, range, value) VALUES (0, 8, null);
cqlsh:test> INSERT INTO mytable (id, range, value) VALUES (0, 9, null);

Select data using the DataStax driver. The pseudocode is as follows:

Statement statement = QueryBuilder.select().from("mytable").setFetchSize(1);
ResultSet rs = session.execute(statement);
for (Row row : rs) {
    System.out.println(String.format("id=%s, range=%s, value=%s",
        row.getInt("id"), row.getInt("range"), row.getString("value")));
}

The result is as follows:

id=0, range=0, value=null
id=0, range=1, value=null
id=0, range=2, value=null
id=0, range=3, value=null
id=0, range=4, value=null
id=0, range=5, value=null
id=0, range=7, value=null
id=0, range=9, value=null
--

The result is 8 records although 10 were expected. Has anyone seen a
similar issue?
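The behavior one expects from fetch-size paging can be sketched with an in-memory model (illustrative names only, not the driver's internals): each page resumes strictly after the last returned clustering key, so every row comes back exactly once regardless of fetch size or null column values.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PagingExpectation {
    // Server-side page: up to fetchSize rows with clustering key strictly
    // greater than lastSeen (null means start from the beginning).
    static List<Integer> page(List<Integer> rows, Integer lastSeen, int fetchSize) {
        List<Integer> out = new ArrayList<>();
        for (int r : rows) {
            if ((lastSeen == null || r > lastSeen) && out.size() < fetchSize) {
                out.add(r);
            }
        }
        return out;
    }

    // Client-side loop, as when the driver iterates a ResultSet:
    // keep fetching pages until one comes back empty.
    static List<Integer> fetchAll(List<Integer> rows, int fetchSize) {
        List<Integer> all = new ArrayList<>();
        Integer lastSeen = null;
        while (true) {
            List<Integer> p = page(rows, lastSeen, fetchSize);
            if (p.isEmpty()) break;
            all.addAll(p);
            lastSeen = p.get(p.size() - 1);
        }
        return all;
    }

    public static void main(String[] args) {
        List<Integer> rows = Arrays.asList(0, 1, 2, 3, 4, 5, 6, 7, 8, 9);
        // All 10 rows must come back, whatever the fetch size.
        System.out.println(fetchAll(rows, 1).size()); // 10
        System.out.println(fetchAll(rows, 4).size()); // 10
    }
}
```

Against this expectation, getting 8 of 10 rows back (with ranges 6 and 8 missing) indicates a paging bug rather than a client mistake.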

Thanks,
Katsutoshi


select count query not working on Cassandra 2.0.0

2013-09-20 Thread Katsutoshi
I would like to use a select count query.
It worked on Cassandra 1.2.9, but there is a situation where it does not
work on Cassandra 2.0.0: if some row is deleted, the select count query
seems to return the wrong value.
Did anything change in Cassandra 2.0.0, or have I made a mistake?

My test procedure is as follows:

### At Cassandra 1.2.9

1) create table, and insert two rows

```
cqlsh:test> CREATE TABLE count_hash_test (key text, value text, PRIMARY KEY (key));
cqlsh:test> INSERT INTO count_hash_test (key, value) VALUES ('key1', 'value');
cqlsh:test> INSERT INTO count_hash_test (key, value) VALUES ('key2', 'value');
```

2) do a select count query; it returns 2, which is expected

```
cqlsh:test> SELECT * FROM count_hash_test;

 key  | value
------+-------
 key1 | value
 key2 | value

cqlsh:test> SELECT COUNT(*) FROM count_hash_test;

 count
-------
     2
```

3) delete one row

```
cqlsh:test> DELETE FROM count_hash_test WHERE key='key1';
```

4) do a select count query; it returns 1, which is expected

```
cqlsh:test> SELECT * FROM count_hash_test;

 key  | value
------+-------
 key2 | value

cqlsh:test> SELECT COUNT(*) FROM count_hash_test;

 count
-------
     1
```

### At Cassandra 2.0.0

1) create table, and insert two rows

```
cqlsh:test> CREATE TABLE count_hash_test (key text, value text, PRIMARY KEY (key));
cqlsh:test> INSERT INTO count_hash_test (key, value) VALUES ('key1', 'value');
cqlsh:test> INSERT INTO count_hash_test (key, value) VALUES ('key2', 'value');
```

2) do a select count query; it returns 2, which is expected

```
cqlsh:test> SELECT * FROM count_hash_test;

 key  | value
------+-------
 key1 | value
 key2 | value

cqlsh:test> SELECT COUNT(*) FROM count_hash_test;

 count
-------
     2
```

3) delete one row

```
cqlsh:test> DELETE FROM count_hash_test WHERE key='key1';
```

4) do a select count query, but it returns 0, which is NOT expected

```
cqlsh:test> SELECT * FROM count_hash_test;

 key  | value
------+-------
 key2 | value

cqlsh:test> SELECT COUNT(*) FROM count_hash_test;

 count
-------
     0
```
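The expected counting semantics can be sketched in plain Java (a toy in-memory model, not Cassandra internals): deleting a row leaves a tombstone that a correct COUNT(*) must skip, while the remaining live rows still count.

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class LiveCount {
    // Model each partition key as either live (false) or deleted/tombstoned (true).
    static long count(Map<String, Boolean> rows) {
        // Count only rows that are not tombstoned.
        return rows.values().stream().filter(deleted -> !deleted).count();
    }

    public static void main(String[] args) {
        Map<String, Boolean> rows = new LinkedHashMap<>();
        rows.put("key1", true);   // deleted: DELETE FROM count_hash_test WHERE key='key1'
        rows.put("key2", false);  // still live
        System.out.println(count(rows)); // 1 -- the expected COUNT(*), not 0
    }
}
```

Since the plain SELECT above still shows the live row 'key2', a COUNT(*) of 0 on the same data contradicts this expectation.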

Could anyone help me with this? Thanks.

Katsutoshi


Re: Custom data type does not work at C* 2.0

2013-09-05 Thread Katsutoshi
Thanks to your reply, this problem has been solved.
My mistake was that I was using the AbstractType from version 1.2.9.


2013/9/6 Dave Brosius dbros...@mebigfatguy.com

  I think your class is missing a required

 public TypeSerializer<Void> getSerializer() {}

 method


 This is what you need to derive from


 https://git-wip-us.apache.org/repos/asf?p=cassandra.git;a=blob;f=src/java/org/apache/cassandra/db/marshal/AbstractType.java;h=74fe446319c199433b47d3ae60fc4d644e86b653;hb=03045ca22b11b0e5fc85c4fabd83ce6121b5709b




 On 09/04/2013 09:14 AM, Katsutoshi wrote:

 package my.marshal;

 import java.nio.ByteBuffer;

 import org.apache.cassandra.db.marshal.AbstractType;
 import org.apache.cassandra.db.marshal.MarshalException;
 import org.apache.cassandra.utils.ByteBufferUtil;

 public class DummyType extends AbstractType<Void> {

 public static final DummyType instance = new DummyType();

 private DummyType(){
 }

 public Void compose(ByteBuffer bytes){
 return null;
 }

 public ByteBuffer decompose(Void value){
 return ByteBufferUtil.EMPTY_BYTE_BUFFER;
 }

 public int compare(ByteBuffer o1, ByteBuffer o2){
 return 0;
 }

 public String getString(ByteBuffer bytes){
 return "";
 }

 public ByteBuffer fromString(String source) throws MarshalException{
 if(!source.isEmpty()) throw new
 MarshalException(String.format("'%s' is not empty", source));
 return ByteBufferUtil.EMPTY_BYTE_BUFFER;
 }

 public void validate(ByteBuffer bytes) throws MarshalException{
 }
 }





Custom data type does not work at C* 2.0

2013-09-04 Thread Katsutoshi
/libthrift-0.9.0.jar:/opt/cassandra/bin/../lib/log4j-1.2.16.jar:/opt/cassandra/bin/../lib/lz4-1.1.0.jar:/opt/cassandra/bin/../lib/metrics-core-2.0.3.jar:/opt/cassandra/bin/../lib/my-marshal-1.0.0.jar:/opt/cassandra/bin/../lib/netty-3.5.9.Final.jar:/opt/cassandra/bin/../lib/servlet-api-2.5-20081211.jar:/opt/cassandra/bin/../lib/slf4j-api-1.7.2.jar:/opt/cassandra/bin/../lib/slf4j-log4j12-1.7.2.jar:/opt/cassandra/bin/../lib/snakeyaml-1.11.jar:/opt/cassandra/bin/../lib/snappy-java-1.0.5.jar:/opt/cassandra/bin/../lib/snaptree-0.1.jar:/opt/cassandra/bin/../lib/thrift-server-0.3.0.jar:/opt/cassandra/bin/../lib/jamm-0.2.5.jar
(snip)
```

3) Create column family that use DummyType.

```
cqlsh:test> CREATE TABLE test_cf ( key 'my.marshal.DummyType' PRIMARY KEY);
Bad Request: Error setting type my.marshal.DummyType: Unable to find
abstract-type class 'my.marshal.DummyType'
```

Thanks,

Katsutoshi