Hello,
I'm experiencing this problem:
with cassandra-cli:

get messagesContent['558a512f30a46f55e75e63f2f816f7435283269f92070618ba9213c0bfac730f'];
Returned 33 results.
within the pycassa code:
server_list=['SERVER:9160',],
prefill=False, pool_size=15, max_overflow=10, max_retries=-1, timeout=5,
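For context, a minimal sketch of how those arguments are passed to pycassa's ConnectionPool; the keyspace name and the usage line in the comment are assumptions, not from the thread:

```python
# The pool settings quoted above, annotated with their pycassa meanings;
# the host below is the placeholder from the thread, not a real server.
pool_kwargs = dict(
    server_list=['SERVER:9160'],  # Thrift host:port endpoints to connect to
    prefill=False,                # don't open all pool connections up front
    pool_size=15,                 # target number of open connections
    max_overflow=10,              # extra connections allowed beyond pool_size
    max_retries=-1,               # retry failed operations indefinitely
    timeout=5,                    # socket timeout, in seconds
)
# With pycassa installed this becomes (keyspace name is hypothetical):
#   pool = pycassa.ConnectionPool('MyKeyspace', **pool_kwargs)
```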
Jeffrey Wang wrote:
Did you set PIG_RPC_PORT in your hadoop-env.sh? I was seeing this error
for a while before I added that.
-Jeffrey
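Jeffrey's fix, sketched as a hadoop-env.sh fragment; 9160 is Cassandra's default Thrift port and is an assumption here:

```shell
# Add to conf/hadoop-env.sh so Pig tasks can reach Cassandra's Thrift port.
# 9160 is the Cassandra default; adjust if your cluster differs.
export PIG_RPC_PORT=9160
```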
From: pob [mailto:peterob...@gmail.com]
Sent: Tuesday, April 19, 2011 6:42 PM
To: user@cassandra.apache.org
Subject: Re: pig + hadoop
Hey Aaron
My mistake,
ignore the last post.
2011/4/20 pob peterob...@gmail.com
Hi,
everything works fine with Cassandra 0.7.5, but with 0.7.3 other errors
showed up; strangely, the task still finished successfully.
2011-04-20 11:45:40,674 INFO org.apache.hadoop.mapred.TaskInProgress
Hello,
I configured the cluster following
http://wiki.apache.org/cassandra/HadoopSupport. When I run
pig example-script.pig -x local, everything is fine and I get correct results.
The problem occurs with -x mapreduce;
I'm getting these errors:
2011-04-20 01:24:21,791 [main] ERROR
function do? Set a breakpoint: what are the two strings you are feeding into
the hash functions?
Aaron
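Aaron's suggestion (compare the two strings before they are hashed) can be sketched with stdlib hashlib. SHA-256 and the sample values are assumptions; the 64-hex-character row key in the cassandra-cli get above is at least consistent with a SHA-256 digest:

```python
import hashlib

# Log both strings before hashing and compare the digests; a mismatch
# means the fetched data differs from what was written.
written = b"example message body"   # hypothetical value as written
fetched = b"example message body"   # hypothetical value as read back

key_written = hashlib.sha256(written).hexdigest()
key_fetched = hashlib.sha256(fetched).hexdigest()
assert key_written == key_fetched, "data read back differs from data written"
```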
On 15 Apr 2011, at 03:50, pob wrote:
Hello,
I'm experiencing a really strange problem. I wrote data into a Cassandra
cluster, and I'm trying to check whether data that is inserted and then
fetched back is equal
* PIG_PARTITIONER or cassandra.partitioner.class : cluster partitioner
Hope that helps.
Aaron
On 20 Apr 2011, at 11:28, pob wrote:
Hello,
I configured the cluster following
http://wiki.apache.org/cassandra/HadoopSupport. When I run
pig example-script.pig -x local, everything is fine and I get correct results
* PIG_INITIAL_ADDRESS or cassandra.thrift.address : initial address to
connect to
* PIG_PARTITIONER or cassandra.partitioner.class : cluster partitioner
Hope that helps.
Aaron
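Aaron's list can be set as environment variables before running pig; the values below (localhost, RandomPartitioner) are placeholders, and the cassandra.* names are the Hadoop-property equivalents mentioned above:

```shell
# Placeholders -- substitute your cluster's actual address and partitioner.
export PIG_INITIAL_ADDRESS=localhost   # or set cassandra.thrift.address
export PIG_RPC_PORT=9160               # or set cassandra.thrift.port
export PIG_PARTITIONER=org.apache.cassandra.dht.RandomPartitioner  # or cassandra.partitioner.class
```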
On 20 Apr 2011, at 11:28, pob wrote:
Hello,
I configured the cluster following
http://wiki.apache.org/cassandra/HadoopSupport. When I run
pig example-script.pig
-x
'KillJobAction' for job: job_201104200331_0002
2011/4/20 pob peterob...@gmail.com
Re 2: it works with -x local, so the issue can't be between Pig and the
DB (Cassandra).
I'm using pig-0.8 from the official site + hadoop-0.20.2 from the official site.
Thanks
2011/4/20 aaron morton aa...@thelastpickle.com
Am
2011/4/20 pob peterob...@gmail.com
That's from the jobtracker:
2011-04-20 03:36:39,519 INFO org.apache.hadoop.mapred.JobInProgress:
Choosing rack-local task task_201104200331_0002_m_00
2011-04-20 03:36:42,521 INFO org.apache.hadoop.mapred.TaskInProgress: Error
from
Hello,
what kind of bug is it?
If I run nodetool -h host1 ring, the output is:

Address   Status  State   Load     Owns     Token
                                            141784319550391026443072753096570088105
1.174     Up      Normal  4.14 GB  16.67%   0
1.173     Down    Normal  4.07 GB  16.67%
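The 16.67% Owns column is consistent with six evenly spaced RandomPartitioner tokens; a quick sanity check, assuming the commonly used token formula i * (2**127 // N):

```python
# Evenly spaced initial tokens for a 6-node RandomPartitioner ring
# (token space is 0 .. 2**127 for RandomPartitioner).
N = 6
tokens = [i * (2**127 // N) for i in range(N)]
# The highest token matches the one shown at the top of the ring output.
assert tokens[5] == 141784319550391026443072753096570088105
```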
Hi,
I'm inserting data from a client node with stress.py into a cluster of 6 nodes.
They are all on a 1 Gbps network; the measured maximum real network throughput
is 930 Mbps.

python stress.py -c 1 -S 17 -d{6nodes} -l3 -e QUORUM --operation=insert -i 1 -n 50 -t100
The problem is that I need some estimate of how big a stream I can write
into the cluster, what happens if I double the number of nodes in the cluster,
and so on.
Thanks for explanation or any hints.
Best,
Peter
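As a rough upper bound only, here is one way to frame the estimate Peter asks for. This is an assumption-laden sketch: it reads -c 1 -S 17 as one 17-byte column per key, and it ignores Thrift framing, replication traffic, and commitlog/compaction overhead, all of which lower the real number:

```python
# Back-of-envelope bound on insert rate from the measured client link.
link_mbps = 930            # measured usable network throughput (from above)
payload_bytes = 1 * 17     # -c 1 column x -S 17 bytes per key (raw payload only)

link_bytes_per_sec = link_mbps * 1_000_000 / 8
max_inserts_per_sec = link_bytes_per_sec / payload_bytes
# Doubling the nodes raises cluster-side capacity, but this client-link
# bound is unchanged unless more client machines are added.
```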
2011/3/20 pob peterob...@gmail.com
Hello,
I set up cluster with 3 nodes/ 4Gram,4cores,raid0. I did experiment with
stress.py to see how fast my inserts are. The results are confusing.
In each case stress.py was inserting 170KB of data:
1)
stress.py was inserting directly to one node (-dNode1), RF=3, CL=ONE
30 inserts in 1296
info from that HTTP server, then write your own
Zabbix templates
2011/3/8 pob peterob...@gmail.com
Hello,
Im using cassandra with mx4j, I was googling half day but cant find
anything usable to connect it with zibbix. I just found Zapcat, but I dont
wanna make any change into code
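One way to implement the "read the mx4j HTTP server" idea above without touching Cassandra's code: mx4j's HttpAdaptor serves XML that can be parsed and fed to a Zabbix UserParameter. The URL shape, port 8081, and the sample response below are assumptions for illustration, not captured from a real node:

```python
import xml.etree.ElementTree as ET

# mx4j's HttpAdaptor (commonly on port 8081) returns XML for URLs like:
#   http://host:8081/mbean?objectname=<mbean-name>&template=identity
# Illustrative sample response, parsed offline here:
sample = """<MBean objectname="org.apache.cassandra.db:type=CompactionManager">
  <Attribute name="PendingTasks" value="0"/>
</MBean>"""

root = ET.fromstring(sample)
metrics = {a.get("name"): a.get("value") for a in root.iter("Attribute")}
# A Zabbix agent UserParameter script could print metrics["PendingTasks"].
```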