The suggested data model isn’t optimal for the “end of month” query you want to
run, since you are not querying by partition key.
The query would look like “select EmpID, FN, LN, basic from salaries where
month = 1”, which requires filtering across partitions and has unpredictable
performance.
For this type of
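The message above is truncated in the archive, but its point is that the “end
of month” query should be served by a partition key. A hedged sketch of what a
month-keyed table could look like (the table name, column types, and PRIMARY
KEY here are assumptions for illustration, not the poster’s actual model):

```sql
-- Sketch: partition by month so the end-of-month report reads one partition.
CREATE TABLE salaries_by_month (
    month int,
    EmpID varchar,
    FN varchar,
    LN varchar,
    basic int,
    PRIMARY KEY (month, EmpID)
);

-- The query from the message then becomes a single-partition read:
SELECT EmpID, FN, LN, basic FROM salaries_by_month WHERE month = 1;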
3. How do we rebuild System keyspace?
wipe this node and start it all over.
hth
jason
On Tue, Jul 7, 2015 at 12:16 AM, Shashi Yachavaram shashi...@gmail.com
wrote:
When we reboot the problematic node, we see the following errors in
system.log.
1. Does this mean hints column family is
Thanks for the inputs.
Now my question is: how should the app populate the duplicate data? I.e., if
I have an employee record (along with his FN, LN, ..) for the month of Apr
and later I am populating the same record for the month of May (with salary
changed), should my application first read/fetch
I guess you're right: using my proposal, getting the last employee record is
straightforward and quick, but also, as Peter pointed out, getting all slips
for a particular month requires you to know all the employee IDs and,
ideally, run a query for each employee. This would work depending on how
many
Thanks for the response. I’m trying to remove a node that’s already down for
some reason, so it’s not allowing me to decommission it. Is there some other
way to do this?
On Tue, Jul 7, 2015 at 12:45 PM, Kiran mk coolkiran2...@gmail.com wrote:
Yes, if your intention is to decommission a node.
On Mon, Jul 6, 2015 at 5:38 PM, Jonathan Haddad j...@jonhaddad.com wrote:
Wouldn't it suggest a delete heavy workload, rather than update?
I consider DELETE a case of UPDATE, but sure, you are correct. :D
=Rob
I know you can use `nodetool removenode` from the command line but is there a
way to remove a node from a cluster using OpsCenter?
Yes, if your intention is to decommission a node. You can do that by
clicking on the node and selecting Decommission.
Best Regards,
Kiran.M.K.
On Jul 8, 2015 1:00 AM, Sid Tantia sid.tan...@baseboxsoftware.com wrote:
I know you can use `nodetool removenode` from the command line but is
there a way to
If the node is down, use:
nodetool removenode <Host ID>
If the cluster does not use vnodes, adjust the tokens before running the
nodetool removenode command.
If the node is up, then the command would be “nodetool decommission” to
remove the
I tried both `nodetool removenode <Host ID>` and `nodetool decommission` and
they both give the error:
nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException: 'Connection
refused'.
Here is what I have tried to fix this:
1) Uncommented JVM_OPTS=”$JVM_OPTS
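The list of fixes above is cut off in the archive. For context, the JMX
settings that nodetool depends on live in conf/cassandra-env.sh and look
roughly like the sketch below (the option names follow the stock
cassandra-env.sh; the exact values are assumptions, and a wrong hostname or
port here is a common cause of "Connection refused" on 7199):

```shell
# Sketch of the JMX section of conf/cassandra-env.sh; nodetool connects
# over JMX on this port, so these settings must match how Cassandra runs.
JMX_PORT="7199"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.port=$JMX_PORT"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.ssl=false"
JVM_OPTS="$JVM_OPTS -Dcom.sun.management.jmxremote.authenticate=false"
JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=127.0.0.1"
```

After changing these, restart Cassandra and confirm `nodetool status` connects
before retrying removenode/decommission.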
On Tue, Jul 7, 2015 at 4:39 PM, Sid Tantia sid.tan...@baseboxsoftware.com
wrote:
I tried both `nodetool removenode <Host ID>` and `nodetool decommission`
and they both give the error:
nodetool: Failed to connect to '127.0.0.1:7199' - ConnectException:
'Connection refused'.
Here is what I
Hey all,
We are using DTCS and we have a TTL of 30 days for all inserts; we do no
deletes or updates.
When SSTables are dropped by DTCS, what kind of logging do we see in the C*
logs?
Any help would be useful. The reason I ask is that my db size is not hovering
around a steady size, it is increasing, there
Hi,
I've been following this thread and my thoughts are in line with Carlos'
latest response... Model your data to suit your queries. That is one of
the data model / design considerations in Cassandra that differs from the
RDBMS world. Embrace denormalization and data duplication. Disk space is
Thanks Rob, Jeff. I have updated the Jira issue with my information.
On 6 July 2015 at 23:46, Jeff Ferland j...@tubularlabs.com wrote:
I’ve seen the same thing:
https://issues.apache.org/jira/browse/CASSANDRA-9577
I’ve had cases where a restart clears the old tables, and I’ve had cases
On 07/07/2015 07:27 PM, Robert Coli wrote:
On Tue, Jul 7, 2015 at 4:39 PM, Sid Tantia
sid.tan...@baseboxsoftware.com mailto:sid.tan...@baseboxsoftware.com
wrote:
I tried both `nodetool remove node Host ID` and `nodetool
decommission` and they both give the error:
nodetool: Failed
Suppose I have the following schema,
CREATE TABLE foo (
    id text,
    time timeuuid,
    prop1 text,
    PRIMARY KEY (id, time)
) WITH CLUSTERING ORDER BY (time ASC);
And I have two clients who execute quorum writes, e.g.,
// client 1
INSERT INTO FOO (id, time, prop1) VALUES ('test',
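The INSERT is cut off in the archive; a hypothetical completion of the two
clients’ concurrent quorum writes (the values are invented for illustration):

```sql
-- client 1 (hypothetical values; the original message is truncated)
INSERT INTO foo (id, time, prop1) VALUES ('test', now(), 'value-from-client-1');
-- client 2
INSERT INTO foo (id, time, prop1) VALUES ('test', now(), 'value-from-client-2');
```

Because now() yields a timeuuid, concurrent writers produce distinct
clustering values rather than overwriting each other's rows.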
Hi,
Is there any way to export the results of a query (e.g. select * from tbl1
where id = aa and loc = bb) into a file in CSV format?
I tried to use the COPY command with cqlsh, but the command does not work
when you have a where condition.
Does anyone have any idea how to do this?
best,
/Shahab
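One workaround, since cqlsh's COPY ... TO exports whole tables and does not
accept a WHERE clause: run the filtered query from a client and write the CSV
yourself. A minimal sketch in Python (the cluster address, keyspace name, and
the commented-out driver calls are assumptions; the CSV helper itself is
self-contained):

```python
import csv

def rows_to_csv(rows, header, path):
    """Write an iterable of row tuples to a CSV file with a header line."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)

# Hypothetical driver usage (requires the cassandra-driver package):
# from cassandra.cluster import Cluster
# session = Cluster(["127.0.0.1"]).connect("mykeyspace")
# result = session.execute(
#     "SELECT * FROM tbl1 WHERE id = %s AND loc = %s", ("aa", "bb"))
# rows_to_csv([tuple(r) for r in result], result.column_names, "out.csv")
```

cqlsh's CAPTURE command is another option: it writes query output to a file,
though formatted as cqlsh's table output rather than strict CSV.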
Hi Jerome,
Good point! Really a nice usage of static columns. BTW, wouldn't the EmpID
be static as well?
Cheers
Carlos Alonso | Software Engineer | @calonso https://twitter.com/calonso
On 7 July 2015 at 14:42, Jérôme Mainaud jer...@mainaud.com wrote:
Hello,
You can slightly adapt Carlos
25 MB seems very specific. Is there a reason why?
On Tuesday, July 7, 2015, Peer, Oded oded.p...@rsa.com wrote:
The data model suggested isn’t optimal for the “end of month” query you
want to run since you are not querying by partition key.
The query would look like “select EmpID, FN, LN,
Hello,
You can slightly adapt Carlos' answer to reduce replication of data that
doesn't change from month to month.
Static columns are great for this.
The table becomes:
CREATE TABLE salaries (
    EmpID varchar,
    FN varchar static,
    LN varchar static,
    Phone varchar static,
    Address varchar
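The CREATE TABLE above is cut off in the archive; a hypothetical completed
version of the static-column design (the month and basic columns and the
PRIMARY KEY are assumptions):

```sql
-- Hypothetical completion: static columns are stored once per partition
-- (per EmpID), while the monthly salary row varies per clustering key.
CREATE TABLE salaries (
    EmpID varchar,
    month int,
    FN varchar static,
    LN varchar static,
    Phone varchar static,
    Address varchar static,
    basic int,
    PRIMARY KEY (EmpID, month)
);
```

Note that the partition key itself (EmpID here) cannot be declared static,
since static columns are by definition values shared across a partition.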
On behalf of the development community, I am pleased to announce the
release of YCSB version 0.2.0.
Highlights:
* Apache Cassandra 2.0 CQL support
* Apache HBase 1.0 support
* Apache Accumulo 1.6 support
* MongoDB - support for all production versions released since 2011
* Tarantool 1.6 support
Hello,
I'm loading data from HDFS to Cassandra using Spotify's hdfs2cass. The
setup is a 4-node cluster running Cassandra 2.1.6, RF=2, STCS; raw
data size is about 1 TB before loading and 3.8 TB after. The process
works fine, but I do have a few questions.
1. Some Hadoop jobs fail due to streaming