Hi Tom,
thanks, I take your answer to mean that nobody else has found an elegant
solution either :-)
I guess I could use a secondary index for some cases, but there are several
reasons I can't use one in most cases. The permissions in particular are
problematic.
A user may have dozens of permissions for
Now I'm experiencing this:
[iwuser@erwin-lab2 bin]$ ./cqlsh
Python CQL driver not installed, or not on PYTHONPATH.
You might try easy_install cql.
Python: /usr/local/bin/python2.6
Module load path: ['./../lib/thrift-python-internal-only-0.9.1.zip',
Since you're on RHEL 5, you have compiled Python (no package available,
right?).
Have you configured Python to be built with zlib support
(--with-zlib=/usr/lib)?
If not, compile it with zlib and then run:
python -c "import zlib"
No error should appear.
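The check above can be pushed one step further from within Python itself. This is a minimal sketch, not part of the original advice: beyond confirming the module imports, it round-trips some data to confirm zlib actually works in this build.

```python
# Verify this Python build has working zlib support.
# If Python was compiled without zlib, the import itself raises ImportError.
import zlib

data = b"cqlsh needs zlib to load its bundled .zip libraries"
compressed = zlib.compress(data)
assert zlib.decompress(compressed) == data
print("zlib OK")
```

If the import fails, the interpreter was built without zlib and needs to be recompiled as described above.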
Romain
erwin.karb...@gmail.com wrote on
Hi Romain,
I've managed to fix it.
I installed zlib and then compiled Python with --with-zlib=/usr/lib.
I didn't need to run the python -c "import zlib" check.
Thanks a lot for the fast turnaround,
Erwin Karbasi
ATT, Senior Software Architect
On Wed, Nov 6, 2013 at 4:21 PM, Romain HARDOUIN
romain.hardo...@urssaf.fr wrote:
Is it possible that the keyspace was dropped then re-created (
https://issues.apache.org/jira/browse/CASSANDRA-4857)? I've seen similar
stack traces in that case.
On 11/05/2013 10:47 PM, Elias Ross wrote:
I'm seeing the following:
Caused by: java.lang.RuntimeException:
Both caches involve several objects per entry (What do we want? Packed
objects. When do we want them? Now!). The size is an estimate of the
off-heap values only, not the total size nor the number of entries.
An acceptable size will depend on your data and access patterns. In one
case we
On Wed, Nov 6, 2013 at 9:10 AM, Keith Freeman 8fo...@gmail.com wrote:
Is it possible that the keyspace was dropped then re-created (
https://issues.apache.org/jira/browse/CASSANDRA-4857)? I've seen similar
stack traces in that case.
Thanks for the pointer.
There's a program (RHQ) that's
We are using a CQL table like this (the original PRIMARY KEY referenced
employee_name, which isn't a declared column; employee_id appears to be
what was meant):
CREATE TABLE testing (
    description text,
    last_modified_date timeuuid,
    employee_id text,
    value text,
    PRIMARY KEY (employee_id, last_modified_date)
);
We have made description a text column in the above table. I am wondering
whether there are any limitations on text columns?
Hello Techy Teck,
Couldn't find any evidence on the datastax website but found this
http://wiki.apache.org/cassandra/CassandraLimitations
which I believe is correct.
Thanks
Jabbar Azam
On 6 November 2013 20:19, Techy Teck comptechge...@gmail.com wrote:
We are using CQL table like this -
Forgot to say: the text value can be up to 2 GB in size, but in practice it
will be less.
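To illustrate that "in practice it will be less" point: a common pattern is a client-side sanity check before insert. This is a hedged sketch only; the 1 MB practical cap below is an illustrative assumption, not a Cassandra default, and check_text_value is a hypothetical helper.

```python
# Hypothetical guard: reject oversized text values before insert.
# 2 GB is the protocol maximum for a text/blob value; the 1 MB
# practical cap used here is an illustrative assumption.
MAX_VALUE_BYTES = 1 * 1024 * 1024

def check_text_value(value: str) -> int:
    size = len(value.encode("utf-8"))  # text is UTF-8 on the wire
    if size > MAX_VALUE_BYTES:
        raise ValueError(f"text value too large: {size} bytes")
    return size

print(check_text_value("a short description"))  # prints 19
```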
Thanks
Jabbar Azam
On 6 November 2013 21:12, Jabbar Azam aja...@gmail.com wrote:
Hello Techy Teck,
Couldn't find any evidence on the datastax website but found this
I was wondering if anyone had a sense of performance/best practices
around the 'IN' predicate.
I have a list of potentially up to ~30k keys that I want to look up in a
table (typical queries will have ~500, but I worry about the long tail). Most
of them will not exist in the table, but, say,
Unless you explicitly set a page size (I'm pretty sure the query is
converted to a paging query automatically under the hood), you will get
capped at the default of 10k, which might get a little weird semantically.
That said, you should experiment with explicit page sizes and see where that
gets you
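One way to hedge against both the page-size cap and oversized IN lists is to chunk the key list client-side. A minimal sketch, assuming a hypothetical execute_in_query callback standing in for whatever the driver uses to run "SELECT ... WHERE key IN (...)"; none of these names come from the thread:

```python
# Split a large key list into bounded IN-query batches.
def chunked(keys, size):
    """Yield consecutive slices of `keys` of at most `size` items."""
    for i in range(0, len(keys), size):
        yield keys[i:i + size]

def fetch_in_batches(keys, execute_in_query, batch_size=500):
    """Run one IN query per batch and concatenate the results."""
    results = []
    for batch in chunked(keys, batch_size):
        results.extend(execute_in_query(batch))
    return results
```

With 30k keys and batch_size=500 this issues 60 bounded queries instead of one enormous IN list, which also keeps each response safely under any page-size cap.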
Thanks Nate,
I assume 10k is the return limit. I don't think I'll ever get close to
10k matches for the IN query. That said, you're right: to be safe I'll
increase the limit to match the number of items in the IN.
I didn't know CQL supported stored procedures, but I'll take a look. I
We want to upgrade our Cassandra cluster to newer hardware, and were
wondering if anyone has suggestions on Cassandra or Linux config changes that
might prove beneficial.
As of now, our performance tests (application-specific as well as
cassandra-stress) are not showing any
Class Name | Shallow Heap | Retained Heap
If one big query doesn't cause problems
Every row you read becomes (roughly) RF tasks in the cluster. If
you ask for 100 rows in one query, it will generate 300 tasks that are processed
by the read thread pool, which has a default of 32 threads. If you ask for a lot
of rows and the
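The arithmetic above can be sketched directly. RF=3 and the 32-thread default are the figures from the message; the helper itself is just an illustration of the fan-out, not anything Cassandra exposes:

```python
# Estimate how many read tasks a multi-row query fans out into,
# and how many "waves" of the read thread pool that consumes.
import math

def read_task_load(rows, replication_factor=3, read_pool_threads=32):
    tasks = rows * replication_factor            # one task per replica read
    waves = math.ceil(tasks / read_pool_threads) # pool passes to drain them
    return tasks, waves

print(read_task_load(100))  # (300, 10): 300 tasks, ~10 pool waves
```

At 100 rows the pool already needs ~10 full passes, which is why very large multi-row reads can swamp the read stage.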
Running Cassandra 1.1.5 currently, but evaluating an upgrade to 1.2.11 soon.
You will make better use of the extra memory by moving to 1.2, as it moves
bloom filters and compression data off heap.
Also grab the TLAB setting from cassandra-env.sh in v1.2.
As of now, our performance tests (our