I followed the step-by-step instructions for installing Cassandra on Red Hat
Linux Server 6.3 from the DataStax site, without much success. Apparently
it installs fine, but starting the Cassandra service does nothing (no ports
are bound, so OpsCenter/cli don't work). When I check the service's status,
it shows
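For what it's worth, a quick way to confirm whether the service actually bound its ports is to probe them directly. A minimal sketch, assuming a stock cassandra.yaml (9160 is the default Thrift port, 7199 JMX, 7000 gossip):

```python
import socket

def port_open(host, port, timeout=2.0):
    """Return True if something is listening on host:port."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0

# Default ports for a stock install: 9160 (Thrift/cli), 7199 (JMX), 7000 (gossip)
for port in (9160, 7199, 7000):
    print(port, "open" if port_open("127.0.0.1", port) else "closed")
```

If all three report closed, the daemon died during startup and the logs (not the init script) will say why.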
I hope you have already gone through this link:
https://github.com/zznate/hector-examples. If not, I suggest you go
through it; you can also refer to
http://hector-client.github.com/hector/build/html/documentation.html.
Best Regards,
On Mon, Feb 18, 2013 at 12:15 AM, Jain Rahul
Just out of curiosity:
When using compression, does this affect things one way or the other?
Is 300G the (compressed) SSTable size, or the total size of the data?
.vegard,
- Original Message -
From: user@cassandra.apache.org
Sent:Mon, 18 Feb 2013 08:41:25 +1300
Hi,
Is anyone using Cassandra to store firewall logs?
If so, any pointers to share?
Regards Hans-Peter
Hans-Peter Sloot
Oracle Technical Expert
Oracle 10g/11g Certified Master
Global Fact ATS NL
T + 31 6 303 83 499
Thanks Aaron.
Does rpc_timeout not control the client timeout? Is there a configurable
parameter to control the replication timeout between nodes? Or is the same
parameter used, since the other node also acts like a client?
From: aaron morton
I think it is actually more of a problem that there were no error messages
or other indication of what went wrong in the setup where the nodes
couldn't contact each other. Should I file an issue report on this? Clearly
Cassandra must have tried to contact some IP on port 7000 and failed. Why
didn't it log? That
We have not quite gotten to it yet; it has been driven by paying customers
so far, and there is one customer who wants it, but they keep pushing it
out in favor of other things they want.
Thanks,
Dean
On 2/15/13 4:16 PM, Drew Kutcharian d...@venarc.com wrote:
Hey Dean, do you guys have
These issues are more cloud-specific than they are Cassandra-specific.
Cloud executives tell me in white papers that cloud is awesome and you
can fire all your sysadmins and network people and save money.
This is what happens when you believe cloud executives and their white
papers: you spend 10+
I don't think it is cloud at all, and I am no newcomer to sysadmin (though
I am relatively new to the AWS cloud). The mistake is clearly mine, but also
clearly easy to make -- so I assume a lot of other people must make it too.
But the logs don't provide any guidance. Or is this another mistake I'm making,
Why have you assigned a generated token to both nodes? And how did you
calculate it?
Shouldn't you choose one of them to have '0' as its starting token?
At least that is what is said in the tutorials I've read.
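For reference, the usual recipe those tutorials describe for the RandomPartitioner (token space 0..2^127) is an even split of the ring, with the first node at token 0. A minimal sketch:

```python
def initial_tokens(node_count, ring=2 ** 127):
    """Evenly spaced RandomPartitioner tokens; the first node gets token 0."""
    return [i * ring // node_count for i in range(node_count)]

# Two nodes: the first at 0, the second halfway around the ring.
print(initial_tokens(2))
# -> [0, 85070591730234615865843651857942052864]
```

With vnodes (num_tokens in 1.2, as in this thread) Cassandra generates tokens itself and no manual calculation is needed.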
On Mon, Feb 18, 2013 at 2:55 PM, Boris Solovyov boris.solov...@gmail.com wrote:
These are running the latest Cassandra 1.2 with 256 vnodes each.
On Mon, Feb 18, 2013 at 2:07 PM, Víctor Hugo Oliveira Molinar
vhmoli...@gmail.com wrote:
Why have you assigned a generated token to both nodes? And how did you
calculate it?
Shouldn't you choose one of them to have its token as the
Yes. For instance, I have 6 nodes and 50% ownership because I have RF=3:
6/3 = 2 virtual entities are written to, which means each node owns 50%.
Dean
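Dean's arithmetic generalizes: with N nodes and replication factor RF, every row lives on RF of the N nodes, so each node owns RF/N of the data. A hedged sketch of that reasoning, not tied to any particular cluster:

```python
def ownership_fraction(nodes, rf):
    """Fraction of the total data set each node holds: RF copies spread over N nodes."""
    return min(1.0, rf / nodes)

# 6 nodes, RF=3: every row is on 3 of the 6 nodes, so each node owns 50%.
print(ownership_fraction(6, 3))  # -> 0.5
```

This is also why "nodetool status" shows 100% ownership per node when RF equals the node count.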
From: Alain RODRIGUEZ arodr...@gmail.com
Reply-To:
That makes sense, thanks.
On Mon, Feb 18, 2013 at 2:26 PM, Hiller, Dean dean.hil...@nrel.gov wrote:
Yes. For instance, I have 6 nodes and 50% ownership because I have
RF=3: 6/3 = 2 virtual entities are written to, which means each
node owns 50%.
Sorry, missed the Counters part.
You are probably interested in this one
https://issues.apache.org/jira/browse/CASSANDRA-5228
Add your need to the ticket to help it along. IMHO, if you have write-once,
read-many time-series data, the SSTables are effectively doing horizontal
partitioning for you.
So, running it periodically on just one node is enough for cluster
maintenance?
In the special case where you have RF == Number of nodes.
The recommended approach is to use -pr and run it on each node periodically.
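The reason -pr must run on every node: each node's primary range is the token interval between its predecessor and itself, and only the union of those ranges over all nodes covers the whole ring exactly once. A small sketch with a hypothetical evenly spaced ring:

```python
RING = 2 ** 127  # RandomPartitioner token space

# Hypothetical 4-node ring with evenly spaced tokens.
tokens = [i * RING // 4 for i in range(4)]

def primary_ranges(tokens):
    """Each node's primary range: (predecessor's token, own token]."""
    return [(tokens[i - 1], tokens[i]) for i in range(len(tokens))]

# Running "nodetool repair -pr" on all nodes repairs every interval once;
# the wrap-around range (last token back to the first) is handled by % RING.
covered = sum((end - start) % RING for start, end in primary_ranges(tokens))
print(covered == RING)  # -> True: the whole ring, each token exactly once
```

Plain repair (without -pr) repairs all replica ranges a node holds, so running it everywhere would repair each range RF times.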
Also: running it with -pr does output:
That does not look right. There
Nothing jumps out.
Check /var/log/cassandra/output.log; that's where stdout and stderr are
directed.
Check file permissions.
Cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
On 18/02/2013, at 9:08 PM, amulya
And you can never go wrong relying on the documentation for the Python pycassa
library; it has some handy tutorials for getting started.
http://pycassa.github.com/pycassa/
cheers
-
Aaron Morton
Freelance Cassandra Developer
New Zealand
@aaronmorton
http://www.thelastpickle.com
In my experience, repair of 300GB of compressed data takes longer than 300GB
of uncompressed, but I cannot point to an exact number. Calculating the
differences is mostly CPU-bound and works on the uncompressed data.
Streaming uses compression (after uncompressing the on-disk data).
So if you
It's throwing MalformedURLException
Error: Exception thrown by the agent : java.net.MalformedURLException:
Local host name unknown: java.net.UnknownHostException: ip-10-0-0-228:
ip-10-0-0-228
Where should I set the correct IP of the machine?
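That UnknownHostException means the JVM cannot resolve the machine's own hostname when the JMX agent starts. Two common fixes, as a hedged sketch (the 10.0.0.228 address matches the "ip-10-0-0-228" naming here; substitute your own IP and paths):

```shell
# Option 1: map the hostname to its private address in /etc/hosts:
echo "10.0.0.228  ip-10-0-0-228" | sudo tee -a /etc/hosts

# Option 2: pin the RMI/JMX hostname in conf/cassandra-env.sh instead:
# JVM_OPTS="$JVM_OPTS -Djava.rmi.server.hostname=10.0.0.228"
```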
2013/2/19 aaron morton aa...@thelastpickle.com
However the old rows will not be purged from disk unless all fragments of
the row are involved in a compaction process. So it may take some time to
purge from disk, depending on the workload.
http://wiki.apache.org/cassandra/Counters
The doc says: Counter removal is intrinsically limited. For
I thought about this more, and even with a 10Gbit network, it would take 40
days to bring up a replacement node if MongoDB truly had 42T per node as
I had heard. I wrote the email below to the person I heard this from, going
back to basics, which really puts it in perspective….(and a
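The back-of-the-envelope behind a 40-day figure, with a loudly labeled assumption: a 10Gbit link tops out far higher, but sustained bootstrap streaming is usually bound by throttling and compaction, so an effective rate around 12 MB/s is assumed here for illustration:

```python
data_bytes = 42e12       # 42 TB on the node being replaced
effective_rate = 12e6    # ASSUMED sustained streaming rate, bytes/sec (not line rate)

seconds = data_bytes / effective_rate
days = seconds / 86400
print(round(days, 1))  # -> 40.5
```

At anything close to actual 10Gbit line rate the transfer itself would take hours, which is exactly why the gap between wire speed and achievable streaming throughput dominates the math.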
Hi - We have a requirement to store around 90 days of data per user. The last
7 days of data will be accessed frequently. Is there a way we can keep the
recent data (7 days) on SSD and the rest of the data on
HDD? Do we take a snapshot every 7 days and use a separate 'archive' cluster
to serve
There is this:
http://www.datastax.com/dev/blog/whats-new-in-cassandra-1-1-flexible-data-file-placement
But you'll need to design your data model around the fact that this is only as
granular as a single column family.
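Since Cassandra 1.1 keeps each column family in its own directory, one way to exploit that per-CF granularity is a symlink. A hedged sketch only: the keyspace/CF names and mount points below are made up, and the node should be stopped first:

```shell
sudo service cassandra stop
# Relocate one column family's directory onto the SSD mount, then link it back:
sudo mv /var/lib/cassandra/data/MyKeyspace/recent_events /mnt/ssd/recent_events
sudo ln -s /mnt/ssd/recent_events /var/lib/cassandra/data/MyKeyspace/recent_events
sudo service cassandra start
```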
Best,
michael
From: Kanwar Sangha kan...@mavenir.com
Thanks. I will look into the details.
One issue I see: if I have only one column family, which needs only the last
7 days of data on SSD and the rest on HDD, how will that work?
From: Michael Kjellman [mailto:mkjell...@barracuda.com]
Sent: 18 February 2013 20:08
On 02/18/2013 03:07 PM, amulya rattan wrote: