Whatever I wanted to do does not seem to be possible (probably a limitation
or a bug)... I see a way to get the KeyspaceMetadata and from that get the
UserType instance (code lines 1 & 2 below).
1.)
org.apache.cassandra.schema.KeyspaceMetadata ksm =
I had to do something similar (in my case it was an IN query)... I ended
up writing a hack in Java to create a custom Expression and inject it into
the RowFilter of a dummy secondary index (not advisable and very short-term,
but it keeps my application code clean). I am keeping my eyes open for the
I was wondering if it is possible to create a UDT and return it within a
user defined function.
I looked at this documentation
http://docs.datastax.com/en/cql/3.3/cql/cql_using/useCreateUDF.html but the
examples are only for basic types.
This is the pseudo-code I came up with... the part I think
I just happened to run into a similar situation myself, and I can see it's
caused by a bad schema design (and query design) on my part. What I wanted to
do was narrow down by a range on one clustering column and then by
another range on the next clustering column. Failing to adequately think
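For context on the restriction being hit here: once one clustering column is restricted by a range, CQL refuses any restriction on the following clustering column, so the second range usually ends up applied client-side. A minimal sketch of that workaround (the `Row` shape, names, and bounds are illustrative, not from the thread):

```java
import java.util.List;
import java.util.stream.Collectors;

// Sketch: Cassandra narrows the first clustering column (c1) server-side;
// the second range, on c2, is applied here in the client.
public class SecondRangeFilter {
    record Row(int c1, int c2) {}

    // Keep only rows whose second clustering column falls in [lo, hi].
    static List<Row> filterSecondRange(List<Row> fetched, int lo, int hi) {
        return fetched.stream()
                .filter(r -> r.c2() >= lo && r.c2() <= hi)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Row> fetched = List.of(new Row(1, 5), new Row(1, 50), new Row(2, 10));
        // Only the rows with c2 in [0, 20] survive the client-side pass.
        System.out.println(filterSecondRange(fetched, 0, 20));
    }
}
```

The obvious cost is over-fetching: everything matching the first range crosses the wire before the second range is applied.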
What is CS?
On Thu, Apr 7, 2016 at 10:03 AM Kevin Burton wrote:
> I have a paging model whereby we stream data from CS by fetching 'pages'
> thereby reading (sequentially) entire datasets.
>
> We're using the bucket approach where we write data for 5 minutes, then we
> can
I have a paging model whereby we stream data from CS by fetching 'pages'
thereby reading (sequentially) entire datasets.
We're using the bucket approach where we write data for 5 minutes, then we
can just fetch the bucket for that range.
Our app now has TONS of data and we have a piece of
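The bucketing scheme described above can be sketched as follows; the constant and method names are my own, not from Kevin's code:

```java
// Sketch of the 5-minute bucket approach: rows written during the same
// 5-minute window share a bucket id, so a whole window can be fetched by
// reading that single bucket (e.g. as a partition key).
public class TimeBucket {
    static final long BUCKET_MILLIS = 5 * 60 * 1000L; // 5-minute window

    // Map a write timestamp (epoch millis) to its bucket id.
    static long bucketFor(long epochMillis) {
        return epochMillis / BUCKET_MILLIS;
    }

    public static void main(String[] args) {
        long t = 1460000123456L;
        // These two timestamps fall inside the same 5-minute window...
        System.out.println(bucketFor(t) == bucketFor(t + 60_000L));      // true
        // ...while timestamps a full window apart never share a bucket.
        System.out.println(bucketFor(t) == bucketFor(t + BUCKET_MILLIS)); // false
    }
}
```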
This sounds most like https://issues.apache.org/jira/browse/CASSANDRA-10371.
Are you on a version that could be affected by this issue?
Best,
Joel
On Thu, Apr 7, 2016 at 11:51 AM, Anubhav Kale
wrote:
> Hello,
>
> We removed a DC using instructions from
>
Hello,
We removed a DC using instructions from
https://docs.datastax.com/en/cassandra/2.1/cassandra/operations/ops_decomission_dc_t.html
After all nodes were gone,
1. system.peers no longer has entries for the nodes that were removed
(confirmed via a cqlsh query with consistency ALL).
Hi,
I have a Cassandra 2.2.5 cluster with a datacenter DC03 with 5 nodes in a ring
and I have DC04 with one node.
The default setup, with all nodes talking on their external interfaces, works
well: no problems, and all nodes in each DC can see and talk to each other.
I'm trying to follow the
Not that we aren't enthusiastic about you moving to Cassandra, but it needs
to be for the right reasons, and for Cassandra the right reasons are
scaling and HA.
In case it's not obvious, I would make a really lousy used-car or
real-estate/time-share salesman!
-- Jack Krupansky
On Thu, Apr 7,
On Wed, Apr 6, 2016 at 9:15 AM, Bhupendra Baraiya
wrote:
>
> The main reason we want to migrate to Cassandra is that we have a denormalized
> data structure in an MS SQL Server database and we want to move to an open
> source database...
If it all boils down to this,
That certainly looks like a bug, would you mind opening a ticket at
https://issues.apache.org/jira/browse/CASSANDRA please?
Thanks,
Sam
On Thu, Apr 7, 2016 at 2:19 PM, Ivan Georgiev wrote:
> Hi, are secondary index queries with Thrift supported in Cassandra 3.x?
> Asking as I
Hi, are secondary index queries with Thrift supported in Cassandra 3.x?
Asking as I am not able to get them working.
I am doing a get_range_slices call with row_filter set in the KeyRange
property, but I am getting an exception in the server with the following
trace:
INFO | jvm 1|
Hello guys!
Can you suggest a consulting company or a specialist in Apache Cassandra in
Russia?
We need expert support/consulting for our production clusters.
Thank you!
--
Roman Skvazh
I have a table mapping continuous ranges to discrete values.
CREATE TABLE range_mapping (k int, lower int, upper int, mapped_value int,
PRIMARY KEY (k, lower, upper));
INSERT INTO range_mapping (k, lower, upper, mapped_value) VALUES (0, 0, 99, 0);
INSERT INTO range_mapping (k, lower, upper,
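The awkward part of this table is the lookup: with both `lower` and `upper` as clustering columns, CQL rejects `lower <= ? AND upper >= ?` in one query (the usual "preceding column is restricted by a non-EQ relation" error), so the containment check tends to end up client-side. A hedged sketch of that lookup over rows already fetched for a partition (the `Range` type and names are illustrative):

```java
import java.util.List;
import java.util.OptionalInt;

// Client-side lookup against rows from the range_mapping table above:
// find the mapped_value of the (non-overlapping) range containing `point`.
public class RangeMapping {
    record Range(int lower, int upper, int mappedValue) {}

    static OptionalInt lookup(List<Range> ranges, int point) {
        for (Range r : ranges) {
            if (r.lower() <= point && point <= r.upper()) {
                return OptionalInt.of(r.mappedValue());
            }
        }
        return OptionalInt.empty(); // no range contains the point
    }

    public static void main(String[] args) {
        List<Range> ranges = List.of(new Range(0, 99, 0), new Range(100, 199, 1));
        System.out.println(lookup(ranges, 42));  // the first range matches
        System.out.println(lookup(ranges, 500)); // no range matches
    }
}
```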
Well, then you could try to replace this node as soon as you have more nodes
available. I would use this procedure, as I believe it is the most efficient
one:
http://docs.datastax.com/en/cassandra/2.0/cassandra/operations/ops_replace_node_t.html.
It is not always the same node, it is always
Hi Jeff
Thanks for your answer.
- Regarding the drain: I will proceed as you indicated, running a flush
and then shutting down the node.
- Regarding the Cassandra upgrades: I want to upgrade the version of
the cluster because we are having problems with timeouts (all the nodes