Re: Best way to format a ResultSet / Row ?

2014-08-19 Thread Fabrice Larcher
Hello,

I would try something like this (I have not tested it, no guarantee) :

import com.datastax.driver.core.ColumnDefinitions;
import com.datastax.driver.core.ResultSet;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.utils.Bytes;

/* ... */

ResultSet result = null; // Put your instance HERE
final StringBuilder builder = new StringBuilder();
for (Row row : result) {
    builder.append("[ ");
    for (ColumnDefinitions.Definition def : row.getColumnDefinitions()) {
        // Dump the raw column value as hexadecimal bytes
        String value = Bytes.toHexString(row.getBytesUnsafe(def.getName()));
        builder.append(def.getName()).append('=').append(value).append(' ');
    }
    builder.append("] ");
}
System.out.println(builder.toString());

/* ... */

But this is probably not very useful on its own, since you only get dumps of
raw bytes. You can then test the type of the column (variable 'def') in order
to call the best-suited getter of 'row', so that the variable 'value' becomes
more readable.
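
Here is a minimal sketch of that idea (again untested ; it assumes the 2.x
driver's DataType.Name enum and falls back to the hexadecimal dump for any
type it does not handle) :

import com.datastax.driver.core.ColumnDefinitions;
import com.datastax.driver.core.Row;
import com.datastax.driver.core.utils.Bytes;

/* ... */

static String readableValue(Row row, ColumnDefinitions.Definition def) {
    String name = def.getName();
    if (row.isNull(name)) {
        return "null";
    }
    switch (def.getType().getName()) {
        case INT:      return String.valueOf(row.getInt(name));
        case BIGINT:   return String.valueOf(row.getLong(name));
        case ASCII:
        case TEXT:
        case VARCHAR:  return row.getString(name);
        case UUID:
        case TIMEUUID: return String.valueOf(row.getUUID(name));
        default:
            // Fallback for types not handled above: raw bytes as hexadecimal
            return Bytes.toHexString(row.getBytesUnsafe(name));
    }
}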


Fabrice LARCHER


2014-08-19 3:29 GMT+02:00 Kevin Burton bur...@spinn3r.com:

 The DataStax Java driver has a Row object which has getInt, getLong methods…

 However, the getString only works on string columns.

 That's probably reasonable… but if I have a raw Row, how the heck do I
 easily print it?

 I need a handy way to dump a ResultSet…

 --

 Founder/CEO Spinn3r.com
 Location: *San Francisco, CA*
 blog: http://burtonator.wordpress.com
 … or check out my Google+ profile
 https://plus.google.com/102718274791889610666/posts
 http://spinn3r.com




Re: Secondary indexes not working properly since 2.1.0-rc2 ?

2014-08-14 Thread Fabrice Larcher
Hello, I created https://issues.apache.org/jira/browse/CASSANDRA-7766 about
that.

Fabrice LARCHER


2014-08-13 14:58 GMT+02:00 DuyHai Doan doanduy...@gmail.com:

 Hello Fabrice.

  A quick hint, try to create your secondary index WITHOUT the IF NOT
 EXISTS clause to see if you still have the bug.

 Another idea is to activate query tracing on the client side to see what's
 going on underneath.
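
 For instance, with the 2.x Java driver it could look like this (an untested
 sketch ; 'session' is assumed to be an already opened Session) :

 import com.datastax.driver.core.*;

 /* ... */

 Statement stmt = new SimpleStatement("SELECT * FROM ks.cf WHERE ind = 'toto'");
 stmt.enableTracing();
 ResultSet rs = session.execute(stmt);
 // The trace is fetched from the coordinator's system_traces tables
 QueryTrace trace = rs.getExecutionInfo().getQueryTrace();
 System.out.println("Trace id: " + trace.getTraceId());
 for (QueryTrace.Event event : trace.getEvents()) {
     System.out.println(event.getSourceElapsedMicros() + " us - "
             + event.getSource() + " - " + event.getDescription());
 }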


 On Wed, Aug 13, 2014 at 2:48 PM, Fabrice Larcher 
 fabrice.larc...@level5.fr wrote:

 Hello,

 I have used C* 2.1.0-rc1, 2.1.0-rc2, 2.1.0-rc3 and I currently use
 2.1.0-rc5. Since 2.1.0-rc2, it appears that the secondary indexes are not
 always working. Just after the INSERT of a row, the index seems to be
 there. But after a while (I do not know when or why), SELECT statements
 based on any secondary index do not return the corresponding row(s)
 anymore. I noticed that a restart of C* may have an impact (the data
 inserted before the restart may be seen through the index, even if it was
 not returned before the restart).

 Here is a use-case example (in order to clarify my request) :

 CREATE TABLE IF NOT EXISTS ks.cf ( k int PRIMARY KEY, ind ascii, value text);
 CREATE INDEX IF NOT EXISTS ks_cf_index ON ks.cf(ind);
 INSERT INTO ks.cf (k, ind, value) VALUES (1, 'toto', 'Hello');
 SELECT * FROM ks.cf WHERE ind = 'toto'; // Returns no result after a while

 The last SELECT statement may or may not return a row depending on the
 moment of the request. I experienced that with 2.1.0-rc5 through CQLSH
 with clusters of one and two nodes. Since it depends on the moment of the
 request, I am not able to provide a way to reproduce it systematically (it
 appears to be linked with some scheduled job inside C*).

 Is anyone working on this issue ?
 Am I possibly missing some configuration that would prevent secondary indexes
 from returning empty results ?
 Should I rather create a dedicated table for each secondary index and remove
 the secondary indexes ?

 Many thanks for your support.

 ​Fabrice​





Secondary indexes not working properly since 2.1.0-rc2 ?

2014-08-13 Thread Fabrice Larcher
Hello,

I have used C* 2.1.0-rc1, 2.1.0-rc2, 2.1.0-rc3 and I currently use
2.1.0-rc5. Since 2.1.0-rc2, it appears that the secondary indexes are not
always working. Just after the INSERT of a row, the index seems to be
there. But after a while (I do not know when or why), SELECT statements
based on any secondary index do not return the corresponding row(s)
anymore. I noticed that a restart of C* may have an impact (the data
inserted before the restart may be seen through the index, even if it was
not returned before the restart).

Here is a use-case example (in order to clarify my request) :

CREATE TABLE IF NOT EXISTS ks.cf ( k int PRIMARY KEY, ind ascii, value text);
CREATE INDEX IF NOT EXISTS ks_cf_index ON ks.cf(ind);
INSERT INTO ks.cf (k, ind, value) VALUES (1, 'toto', 'Hello');
SELECT * FROM ks.cf WHERE ind = 'toto'; // Returns no result after a while

The last SELECT statement may or may not return a row depending on the
moment of the request. I experienced that with 2.1.0-rc5 through CQLSH with
clusters of one and two nodes. Since it depends on the moment of the request,
I am not able to provide a way to reproduce it systematically (it appears to
be linked with some scheduled job inside C*).

Is anyone working on this issue ?
Am I possibly missing some configuration that would prevent secondary indexes
from returning empty results ?
Should I rather create a dedicated table for each secondary index and remove
the secondary indexes ?

Many thanks for your support.

​Fabrice​


Re: C* 2.1-rc2 gets unstable after a 'DROP KEYSPACE' command ?

2014-08-07 Thread Fabrice Larcher
Hello,

After a 'DROP TABLE' command that returned errors={}, last_host=127.0.0.1
(like most DROP commands do) from CQLSH with C* 2.1.0-rc2, I stopped C*.
Now I can not start one of the nodes. It says :
ERROR 09:18:34 Exception encountered during startup
java.lang.NullPointerException: null
at org.apache.cassandra.db.Directories.<init>(Directories.java:191)
~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at
org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:553)
~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:245)
[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:455)
[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:544)
[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
java.lang.NullPointerException
at org.apache.cassandra.db.Directories.<init>(Directories.java:191)
at
org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:553)
at
org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:245)
at
org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:455)
at
org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:544)
Exception encountered during startup: null

I do not know if this helps.


Fabrice LARCHER


2014-07-18 7:23 GMT+02:00 Fabrice Larcher fabrice.larc...@level5.fr:

 Hello,

 I still experience a similar issue after a 'DROP KEYSPACE' command with C*
 2.1-rc3. Connection to the node may fail after a 'DROP'.

 But I did not see this issue with 2.1-rc1 (it seems to be a regression
 introduced with 2.1-rc2).

 Fabrice LARCHER


 2014-07-17 9:19 GMT+02:00 Benedict Elliott Smith 
 belliottsm...@datastax.com:

 Also https://issues.apache.org/jira/browse/CASSANDRA-7437 and
 https://issues.apache.org/jira/browse/CASSANDRA-7465 for rc3, although
 the CounterCacheKey assertion looks like an independent (though
 comparatively benign) bug I will file a ticket for.

 Can you try this against rc3 to see if the problem persists? You may see
 the last exception, but it shouldn't affect the stability of the cluster.
 If either of the other exceptions persists, please file a ticket.


 On Thu, Jul 17, 2014 at 1:41 AM, Tyler Hobbs ty...@datastax.com wrote:

 This looks like https://issues.apache.org/jira/browse/CASSANDRA-6959,
 but that was fixed for 2.1.0-rc1.

 Is there any chance you can put together a script to reproduce the issue?


 On Thu, Jul 10, 2014 at 8:51 AM, Pavel Kogan pavel.ko...@cortica.com
 wrote:

 It seems that the memtable tries to flush itself to an SSTable of a keyspace
 that no longer exists. I don't know why it happens, but running 'nodetool
 flush' before the drop should probably prevent this issue.

 Pavel


 On Thu, Jul 10, 2014 at 4:09 AM, Fabrice Larcher 
 fabrice.larc...@level5.fr wrote:

 ​Hello,

 I am using the 'development' version 2.1-rc2.

 With one node (=localhost), I get timeouts trying to connect to C*
 after running a 'DROP KEYSPACE' command. I have the following error messages
 in system.log :

 INFO  [SharedPool-Worker-3] 2014-07-09 16:29:36,578
 MigrationManager.java:319 - Drop Keyspace 'test_main'
 (...)
 ERROR [MemtableFlushWriter:6] 2014-07-09 16:29:37,178
 CassandraDaemon.java:166 - Exception in thread
 Thread[MemtableFlushWriter:6,5,main]
 java.lang.RuntimeException: Last written key
 DecoratedKey(91e7f660-076f-11e4-a36d-28d2444c0b1b,
 52446dde90244ca49789b41671e4ca7c) >= current key
 DecoratedKey(91e7f660-076f-11e4-a36d-28d2444c0b1b,
 52446dde90244ca49789b41671e4ca7c) writing into
 ./../data/data/test_main/user-911d5360076f11e4812d3d4ba97474ac/test_main-user.user_account-tmp-ka-1-Data.db
 at
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
 ~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
 at
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:215)
 ~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
 at
 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:351)
 ~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
 at
 org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:314)
 ~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
 at
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 ~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
 at
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 ~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
 at
 com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
 ~[guava-16.0.jar:na]
 at
 org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054)
 ~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2

Re: C* 2.1-rc2 gets unstable after a 'DROP KEYSPACE' command ?

2014-07-17 Thread Fabrice Larcher
Hello,

I still experience a similar issue after a 'DROP KEYSPACE' command with C*
2.1-rc3. Connection to the node may fail after a 'DROP'.

But I did not see this issue with 2.1-rc1 (it seems to be a regression
introduced with 2.1-rc2).

Fabrice LARCHER


2014-07-17 9:19 GMT+02:00 Benedict Elliott Smith belliottsm...@datastax.com
:

 Also https://issues.apache.org/jira/browse/CASSANDRA-7437 and
 https://issues.apache.org/jira/browse/CASSANDRA-7465 for rc3, although
 the CounterCacheKey assertion looks like an independent (though
 comparatively benign) bug I will file a ticket for.

 Can you try this against rc3 to see if the problem persists? You may see
 the last exception, but it shouldn't affect the stability of the cluster.
 If either of the other exceptions persists, please file a ticket.


 On Thu, Jul 17, 2014 at 1:41 AM, Tyler Hobbs ty...@datastax.com wrote:

 This looks like https://issues.apache.org/jira/browse/CASSANDRA-6959,
 but that was fixed for 2.1.0-rc1.

 Is there any chance you can put together a script to reproduce the issue?


 On Thu, Jul 10, 2014 at 8:51 AM, Pavel Kogan pavel.ko...@cortica.com
 wrote:

 It seems that the memtable tries to flush itself to an SSTable of a keyspace
 that no longer exists. I don't know why it happens, but running 'nodetool
 flush' before the drop should probably prevent this issue.

 Pavel


 On Thu, Jul 10, 2014 at 4:09 AM, Fabrice Larcher 
 fabrice.larc...@level5.fr wrote:

 ​Hello,

 I am using the 'development' version 2.1-rc2.

 With one node (=localhost), I get timeouts trying to connect to C*
 after running a 'DROP KEYSPACE' command. I have the following error messages
 in system.log :

 INFO  [SharedPool-Worker-3] 2014-07-09 16:29:36,578
 MigrationManager.java:319 - Drop Keyspace 'test_main'
 (...)
 ERROR [MemtableFlushWriter:6] 2014-07-09 16:29:37,178
 CassandraDaemon.java:166 - Exception in thread
 Thread[MemtableFlushWriter:6,5,main]
 java.lang.RuntimeException: Last written key
 DecoratedKey(91e7f660-076f-11e4-a36d-28d2444c0b1b,
 52446dde90244ca49789b41671e4ca7c) >= current key
 DecoratedKey(91e7f660-076f-11e4-a36d-28d2444c0b1b,
 52446dde90244ca49789b41671e4ca7c) writing into
 ./../data/data/test_main/user-911d5360076f11e4812d3d4ba97474ac/test_main-user.user_account-tmp-ka-1-Data.db
 at
 org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
 ~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
 at
 org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:215)
 ~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
 at
 org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:351)
 ~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
 at
 org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:314)
 ~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
 at
 org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
 ~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
 at
 org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
 ~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
 at
 com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
 ~[guava-16.0.jar:na]
 at
 org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054)
 ~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
 ~[na:1.7.0_55]
 at
 java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
 ~[na:1.7.0_55]
 at java.lang.Thread.run(Thread.java:744) ~[na:1.7.0_55]

 Then, I can not connect to the cluster anymore from my app (Java Driver
 2.1-SNAPSHOT) and I get the following in my application logs :

 com.datastax.driver.core.exceptions.NoHostAvailableException: All
 host(s) tried for query failed (tried: /127.0.0.1:9042
 (com.datastax.driver.core.exceptions.DriverException: Timeout during read))
 at
 com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
 at
 com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:258)
 at
 com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:174)
 at
 com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
 at
 com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:36)
 (...)
 Caused by:
 com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s)
 tried for query failed (tried: /127.0.0.1:9042
 (com.datastax.driver.core.exceptions.DriverException: Timeout during read))
 at
 com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:103)
 at
 com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:175)
 at
 java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145

C* 2.1-rc2 gets unstable after a 'DROP KEYSPACE' command ?

2014-07-10 Thread Fabrice Larcher
​Hello,

I am using the 'development' version 2.1-rc2.

With one node (=localhost), I get timeouts trying to connect to C* after
running a 'DROP KEYSPACE' command. I have the following error messages in
system.log :

INFO  [SharedPool-Worker-3] 2014-07-09 16:29:36,578
MigrationManager.java:319 - Drop Keyspace 'test_main'
(...)
ERROR [MemtableFlushWriter:6] 2014-07-09 16:29:37,178
CassandraDaemon.java:166 - Exception in thread
Thread[MemtableFlushWriter:6,5,main]
java.lang.RuntimeException: Last written key
DecoratedKey(91e7f660-076f-11e4-a36d-28d2444c0b1b,
52446dde90244ca49789b41671e4ca7c) >= current key
DecoratedKey(91e7f660-076f-11e4-a36d-28d2444c0b1b,
52446dde90244ca49789b41671e4ca7c) writing into
./../data/data/test_main/user-911d5360076f11e4812d3d4ba97474ac/test_main-user.user_account-tmp-ka-1-Data.db
at
org.apache.cassandra.io.sstable.SSTableWriter.beforeAppend(SSTableWriter.java:172)
~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at
org.apache.cassandra.io.sstable.SSTableWriter.append(SSTableWriter.java:215)
~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at
org.apache.cassandra.db.Memtable$FlushRunnable.writeSortedContents(Memtable.java:351)
~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at
org.apache.cassandra.db.Memtable$FlushRunnable.runWith(Memtable.java:314)
~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at
org.apache.cassandra.io.util.DiskAwareRunnable.runMayThrow(DiskAwareRunnable.java:48)
~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at
org.apache.cassandra.utils.WrappedRunnable.run(WrappedRunnable.java:28)
~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at
com.google.common.util.concurrent.MoreExecutors$SameThreadExecutorService.execute(MoreExecutors.java:297)
~[guava-16.0.jar:na]
at
org.apache.cassandra.db.ColumnFamilyStore$Flush.run(ColumnFamilyStore.java:1054)
~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
~[na:1.7.0_55]
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
~[na:1.7.0_55]
at java.lang.Thread.run(Thread.java:744) ~[na:1.7.0_55]

Then, I can not connect to the cluster anymore from my app (Java Driver
2.1-SNAPSHOT) and I get the following in my application logs :

com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s)
tried for query failed (tried: /127.0.0.1:9042
(com.datastax.driver.core.exceptions.DriverException: Timeout during read))
at
com.datastax.driver.core.exceptions.NoHostAvailableException.copy(NoHostAvailableException.java:65)
at
com.datastax.driver.core.DefaultResultSetFuture.extractCauseFromExecutionException(DefaultResultSetFuture.java:258)
at
com.datastax.driver.core.DefaultResultSetFuture.getUninterruptibly(DefaultResultSetFuture.java:174)
at
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:52)
at
com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:36)
(...)
Caused by: com.datastax.driver.core.exceptions.NoHostAvailableException:
All host(s) tried for query failed (tried: /127.0.0.1:9042
(com.datastax.driver.core.exceptions.DriverException: Timeout during read))
at
com.datastax.driver.core.RequestHandler.sendRequest(RequestHandler.java:103)
at
com.datastax.driver.core.RequestHandler$1.run(RequestHandler.java:175)
at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)

I can still connect through CQLSH but if I run (again) a DROP KEYSPACE
command from CQLSH, I get the following error :
errors={}, last_host=127.0.0.1

Now, on a 2-node cluster I also have a similar issue, but the error's
stack trace is different :

From application logs :

17971 [Cassandra Java Driver worker-2] WARN
com.datastax.driver.core.Cluster  - No schema agreement from live replicas
after 1 ms. The schema may not be up to date on some nodes.

From system.log :

INFO  [SharedPool-Worker-2] 2014-07-10 09:04:53,434
MigrationManager.java:319 - Drop Keyspace 'test_main'
(...)
ERROR [MigrationStage:1] 2014-07-10 09:04:56,553
CommitLogSegmentManager.java:304 - Failed waiting for a forced recycle of
in-use commit log segments
java.lang.AssertionError: null
at
org.apache.cassandra.db.commitlog.CommitLogSegmentManager.forceRecycleAll(CommitLogSegmentManager.java:299)
~[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at
org.apache.cassandra.db.commitlog.CommitLog.forceRecycleAllSegments(CommitLog.java:160)
[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at
org.apache.cassandra.db.DefsTables.dropColumnFamily(DefsTables.java:516)
[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at
org.apache.cassandra.db.DefsTables.mergeColumnFamilies(DefsTables.java:300)
[apache-cassandra-2.1.0-rc2.jar:2.1.0-rc2]
at