http://frommyworkshop.blogspot.ru/2012/07/single-node-hadoop-cassandra-pig-setup.html
I use Cassandra 1.2.2 and Hadoop 1.0.4
2013/3/11 Renato Marroquín Mogrovejo renatoj.marroq...@gmail.com
Hi there,
Check this out [1]. It's kinda old but I think it will help you get started.
Renato M.
Dear users,
We have got very strange behaviour from our Hadoop cluster after upgrading
Cassandra from 1.1.5 to Cassandra 1.2.1. We have a 5-node Cassandra cluster,
where three of the nodes are Hadoop slaves. Now when we submit a job through a
Pig script, only one map task runs on one of the Hadoop
Hello Chin,
you can extract the delta using a Pig script and save it in another CF in Cassandra.
Using Pentaho Kettle you can then load the data from that CF into your RDBMS.
Pentaho Kettle is an open source project. You can automate the whole process
through Azkaban or Oozie.
Kafka is also an alternative.
You can use Apache Pig to load the data and filter it by row key; filtering in
Pig is very fast.
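To make that flow concrete, here is a minimal Pig Latin sketch of the extract-and-filter step. The keyspace, CF names, and the key pattern are placeholder assumptions, not from the original setup; only the CassandraStorage handler itself is the standard one bundled with Cassandra of that era.

```
-- Load rows from Cassandra via the bundled Pig storage handler
rows = LOAD 'cassandra://MyKeyspace/clicks'
       USING org.apache.cassandra.hadoop.pig.CassandraStorage()
       AS (key, columns: bag {T: tuple(name, value)});

-- Keep only the rows whose key marks them as part of the delta
delta = FILTER rows BY key MATCHES 'new_.*';

-- Save the delta into another CF for Kettle to pick up
STORE delta INTO 'cassandra://MyKeyspace/clicks_delta'
      USING org.apache.cassandra.hadoop.pig.CassandraStorage();
```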
Regards
Shamim
11.12.2012, 20:46, Ayush V. ayushv...@gmail.com:
I'm working on Cassandra Hadoop integration (MapReduce). We have used the
RandomPartitioner to insert data to gain faster writes. Now we have
Hello Owen,
It seems you did not configure the tokens for all nodes correctly. See the
section "Calculating Tokens for multiple data centers" here:
http://www.datastax.com/docs/0.8/install/cluster_init
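For reference, the evenly spaced initial tokens on a RandomPartitioner ring can be computed as below. This is only a sketch of the arithmetic: the function name is mine, and the small per-DC offset follows the convention described on the linked DataStax page.

```python
# Evenly spaced RandomPartitioner initial tokens (token space = 2**127).
# For multiple data centers, each DC gets its own evenly spaced ring,
# offset by a small constant so no two nodes share an identical token.
def initial_tokens(nodes_in_dc, dc_offset=0):
    ring = 2 ** 127
    return [i * ring // nodes_in_dc + dc_offset for i in range(nodes_in_dc)]

# e.g. a 5-node ring in dc1 and a 5-node ring in dc2 offset by 100
dc1 = initial_tokens(5)
dc2 = initial_tokens(5, dc_offset=100)
print(dc1[0], dc2[0])  # 0 100
```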
Best regards
Shamim
---
On Mon, Dec 3, 2012 at 4:42 PM, Owen Davies cassan...@obduk.com wrote:
specified Options: [dc1:3, dc2:3], surely after a while all the data
will be on every server?
Thanks,
Owen
On 3 December 2012 14:06, Шамим sre...@yandex.ru wrote:
Hello Owen,
It seems you did not configure the tokens for all nodes correctly. See the
section "Calculating Tokens for multiple
://issues.apache.org/jira/browse/CASSANDRA ?
Thanks
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 27/11/2012, at 2:40 AM, Шамим sre...@yandex.ru wrote:
Hello users,
faced very strange behaviour when changing
Hello users,
faced very strange behaviour when changing the compression_parameters of an existing CF. After changing the compaction strategy, the compression strategy reverts back to SnappyCompressor.
Using version 1.1.5.
[cqlsh 2.2.0 | Cassandra 1.1.5 | CQL spec 2.0.0 | Thrift protocol 19.32.0]
I
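For anyone hitting the same thing: on 1.1.x with cqlsh 2.x the compression setting is changed with something along these lines. This is a sketch from memory of the CQL 2 syntax (the CF name is a placeholder); verify it against the docs for your exact version.

```
ALTER TABLE my_cf
  WITH compression_parameters:sstable_compression = 'DeflateCompressor';
```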
Hello All,
we are using Pig (pig-0.10.0) to store some data in a CF with a compound key.
The Cassandra version is 1.1.15. Here is the script for creating the CF:
CREATE TABLE clicks_c (
user_id varchar,
time timestamp,
url varchar,
PRIMARY KEY (user_id, time)
) WITH COMPACT STORAGE;
Here is