Which debian/ubuntu packages are you using? I am using the ones that are
maintained by Eric Evans and the init.d script stops the server correctly.
Lee Parker
On Mon, Jul 26, 2010 at 9:22 AM, miche...@hermanus.cc wrote:
This is how I have been doing it:
pkill cassandra
then I do a netstat
The iostat numbers are rather low as is cpu utilization. We have a couple
of nightly jobs which do a lot of reads in a short amount of time. That is
when the pending reads was climbing. I'm going to bump up the number and
see how things run.
Lee Parker
On Tue, Jul 13, 2010 at 6:18 AM, Schubert
and
it couldn't keep up with new writes. In the end, I completely dropped that
server and spun up another one to take its place since the one good server
had all the data anyway. So, it might have also been an issue with that
box.
One more question, are all of the instances in the same region?
Lee
Also, I am using batch_mutate for all of my writes.
Lee Parker
On Mon, May 17, 2010 at 7:11 PM, Lee Parker l...@socialagency.com wrote:
What are your storage-conf settings for the Memtable thresholds? One thing
that could cause lots of CPU usage is dumping the memtables too frequently.
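For reference, the memtable flush knobs in a 0.6-era storage-conf.xml look roughly like this (the values are illustrative, not recommendations, and the exact names changed between releases, so check your own storage-conf.xml):

```xml
<!-- Flush a memtable to disk once it holds this much data... -->
<MemtableThroughputInMB>64</MemtableThroughputInMB>
<!-- ...or this many operations (in millions)... -->
<MemtableOperationsInMillions>0.3</MemtableOperationsInMillions>
<!-- ...or once it has sat unflushed this long, whichever comes first. -->
<MemtableFlushAfterMinutes>60</MemtableFlushAfterMinutes>
```

Very low thresholds mean frequent flushes and compactions, which shows up as CPU.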
You need to set the Reversed attribute on the sliceRange. I don't know
which library you are using, but it may be something like
sliceRange.setReversed(true).
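As a sketch of what the reversed flag does (a toy model in plain Python, not the actual Thrift API; the integers stand in for TimeUUID column names):

```python
def get_slice(column_names, count=100, reversed=False):
    """Toy model of a SliceRange over one row: Cassandra stores columns
    sorted by name (by time, for TimeUUIDType), and reversed=True walks
    from the end of that order, i.e. newest first."""
    ordered = sorted(column_names)
    if reversed:
        ordered = ordered[::-1]
    return ordered[:count]

times = [3, 1, 2]  # stand-ins for TimeUUID column names
print(get_slice(times, count=2))                 # → [1, 2]
print(get_slice(times, count=2, reversed=True))  # → [3, 2]
```

So with reversed set, the same slice call returns the most recent columns first instead of the oldest.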
Lee Parker
On Sat, May 8, 2010 at 7:36 PM, AJ Chen ajc...@web2express.org wrote:
I have a column family for sorting time-uuid
Thanks. I'll apply the patch. I'm not really familiar with the JVM options,
but I assume that on a production machine I should remove -Xdebug and the
-Xrunjdwp options.
Lee Parker
On Mon, May 3, 2010 at 2:29 PM, Eric Evans eev...@rackspace.com wrote:
On Mon, 2010-05-03 at 13:30 -0500, Lee
there is
only one node up in the prod cluster.
Lee Parker
Is there a preference as to which JRE is used for cassandra?
Lee Parker
2364
-/+ buffers/cache: 5266 2413
Swap: 1023 388 635
Lee Parker
On Wed, Apr 28, 2010 at 9:18 AM, Jonathan Ellis jbel...@gmail.com wrote:
If you're running so close to the edge of running out of memory that
creating an ln process pushes you over the edge, you should
Can anyone help with this? It is preventing me from getting backups of our
cluster.
Lee Parker
On Mon, Apr 26, 2010 at 10:02 PM, Lee Parker l...@socialagency.com wrote:
I was attempting to get a snapshot on our cassandra nodes. I get the
following error every time I run nodetool ... snapshot
Adding a swapfile fixed the error, but it doesn't look as though the process
is even using the swap file at all.
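One way to confirm whether swap is actually being touched is to compare SwapTotal and SwapFree in /proc/meminfo (a small parsing sketch; on a live box you would pass in `open('/proc/meminfo').read()`):

```python
def swap_usage(meminfo_text):
    """Parse SwapTotal/SwapFree (in kB) out of /proc/meminfo content
    and return (total_kb, used_kb)."""
    fields = {}
    for line in meminfo_text.splitlines():
        key, _, rest = line.partition(":")
        if key in ("SwapTotal", "SwapFree"):
            fields[key] = int(rest.split()[0])
    return fields["SwapTotal"], fields["SwapTotal"] - fields["SwapFree"]

sample = "SwapTotal: 1048572 kB\nSwapFree: 1048572 kB\n"
print(swap_usage(sample))  # → (1048572, 0)
```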
Lee Parker
On Tue, Apr 27, 2010 at 9:49 AM, Eric Hauser ewhau...@gmail.com wrote:
Have you read this?
http://forums.sun.com/thread.jspa?messageID=9734530
It is not an ideal solution because the timestamp you pass in may not have
the same granularity as the one used to create your current indexes. So,
you might be using a standard Unix epoch timestamp in seconds, while a true
TimeUUID uses 100-nanosecond slices of time.
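The granularity mismatch is easy to see in numbers: a version-1 (time) UUID carries a 60-bit count of 100-nanosecond intervals since the Gregorian epoch (1582-10-15), not Unix seconds. A sketch of the conversion:

```python
GREGORIAN_OFFSET = 0x01B21DD213814000  # 100-ns ticks from 1582-10-15 to 1970-01-01

def unix_seconds_to_uuid1_ticks(ts):
    """Convert a unix timestamp in whole seconds to the 60-bit version-1
    UUID timestamp: 100-nanosecond intervals since the Gregorian epoch."""
    return ts * 10_000_000 + GREGORIAN_OFFSET

# One second of wall time spans ten million distinct TimeUUID ticks, so a
# seconds-granularity timestamp can only ever name one of them exactly:
print(unix_seconds_to_uuid1_ticks(1) - unix_seconds_to_uuid1_ticks(0))  # → 10000000
```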
Lee Parker
On Tue, Apr 27, 2010
to do in gathering up the snapshots into a tarball and pushing it to
S3.
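The gather-and-tar step might look like this (a sketch: the directory layout comment reflects 0.6 defaults, and the S3 upload itself is left to whatever client you prefer):

```python
import tarfile
import time
from pathlib import Path

def bundle_snapshots(data_dir, out_dir):
    """Tar up every snapshots/ directory found under the Cassandra data
    directory (0.6 lays these out as <DataFileDirectory>/<keyspace>/snapshots/).
    Returns the archive path; push it to S3 with your client of choice."""
    data_dir = Path(data_dir)
    archive = Path(out_dir) / ("cassandra-snapshots-%d.tar.gz" % int(time.time()))
    with tarfile.open(archive, "w:gz") as tar:
        for snap in data_dir.glob("*/snapshots"):
            tar.add(snap, arcname=str(snap.relative_to(data_dir)))
    return archive
```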
Lee Parker
On Tue, Apr 27, 2010 at 9:01 PM, Lee Parker l...@socialagency.com wrote:
The system is an Ubuntu server running 8.04 LTS. Now, I'm getting the
problem again this evening even with the addition of the swap space
? Is it
problematic to bring the cluster back up with new ips on each node, but the
same tokens as before?
Lee Parker
cassandra couldn't recover from the error because I didn't have enough
memory on the machines. I have now upgraded to larger machines and that has
cleared up the real memory issues.
Lee Parker
On Sun, Apr 18, 2010 at 6:51 PM, Brandon Williams dri...@gmail.com wrote:
On Fri, Apr 16, 2010 at 12:50 PM
Row caching is not turned on.
Lee Parker
On Fri, Apr 16, 2010 at 12:58 PM, Paul Brown paulrbr...@gmail.com wrote:
On Apr 16, 2010, at 10:50 AM, Lee Parker wrote:
[...]
I am trying to migrate data from mysql into the cluster using the
following methodology:
1. get 500 rows (12 columns
performance such that this shouldn't be an issue?
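The batch-of-500 approach described above can be sketched like this (pure Python; the commented-out lines stand in for the real MySQL fetch and Thrift call, and `build_mutation_map` is a hypothetical helper):

```python
def chunks(rows, size=500):
    """Yield successive batches of at most `size` rows."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

# Stand-in for "get 500 rows, build one mutation map, batch_mutate it":
migrated = 0
for batch in chunks(range(1200), size=500):
    # mutation_map = build_mutation_map(batch)   # hypothetical helper
    # client.batch_mutate(keyspace, mutation_map, consistency_level)
    migrated += len(batch)
print(migrated)  # → 1200
```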
Lee Parker
On Fri, Apr 16, 2010 at 1:13 PM, Jonathan Ellis jbel...@gmail.com wrote:
On Fri, Apr 16, 2010 at 12:50 PM, Lee Parker l...@socialagency.com wrote:
Each time I start it up, it will
work fine for about 1 hour and then it will crash
When I am verifying the columns in the mutation map before sending it to
cassandra, none of the timestamps are 0. I have had a difficult time
recreating the error in a controlled environment so I can see the mutation
map that was actually sent.
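The verification pass might look like this (a sketch: the map here uses plain dicts keyed the way batch_mutate expects, where the real client would carry generated Thrift Column objects):

```python
def zero_timestamp_columns(mutation_map):
    """Walk a batch_mutate-style map ({row_key: {column_family: [column, ...]}})
    and return the columns whose timestamp is missing or zero."""
    bad = []
    for key, cfs in mutation_map.items():
        for cf, columns in cfs.items():
            for col in columns:
                if not col.get("timestamp"):
                    bad.append((key, cf, col.get("name")))
    return bad

mutations = {"row1": {"Users": [
    {"name": "email", "value": "a@b.c", "timestamp": 1271388000000000},
    {"name": "bio", "value": "", "timestamp": 0},
]}}
print(zero_timestamp_columns(mutations))  # → [('row1', 'Users', 'bio')]
```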
Lee Parker
l...@spredfast.com
I'm actually using PHP. I do have several php processes running, but each
one should have its own Thrift connection.
Lee Parker
l...@spredfast.com
On Thu, Apr 15, 2010 at 10:53 AM, Jonathan Ellis jbel...@gmail.com wrote:
Looks like you are using C++ and not setting
The entire thing was completely my own fault. I was making an invalid
request and, somewhere in the code, I was catching the exception and not
handling it at all. So it only appeared to be silent when in reality it was
throwing a nice descriptive exception.
Lee Parker
l...@spredfast.com
to feel like this is a bug in Thrift or the Cassandra Thrift interface. Can
anyone offer any other insight? I'm using the current stable release of
Thrift 0.2.0, and Cassandra 0.6.0.
It seems to happen more under heavy load. I don't know if that is meaningful
or not.
Lee Parker
On Thu, Apr 15
Has anyone used json2sstable to migrate a large amount of data into
cassandra? What was your methodology? I assume that this will be much
faster than stepping through my data and doing writes via PHP/Thrift.
Lee Parker
After some testing, the buffered transport seems more
stable. I am occasionally getting a missing timestamp error during
batch_mutate calls. It happens both on framed and buffered transports, but
when it happens on a framed transport, the server crashes. Is this typical?
Lee
Never mind. I figured out what the problem was: I was not putting the
column inside a ColumnOrSuperColumn container.
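The fix amounts to wrapping each column before it goes into the mutation map (sketched with dicts standing in for the generated Thrift classes; in 0.6 the ColumnOrSuperColumn itself additionally rides inside a Mutation):

```python
def wrap_column(column):
    """batch_mutate won't accept a bare Column; it has to be wrapped in a
    ColumnOrSuperColumn with exactly one of its two fields set."""
    return {"column_or_supercolumn": {"column": column, "super_column": None}}

col = {"name": "status", "value": "ok", "timestamp": 1}
print(wrap_column(col)["column_or_supercolumn"]["column"]["name"])  # → status
```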
Lee Parker
l...@spredfast.com
On Tue, Apr 13, 2010 at 4:19 PM, Lee Parker l...@socialagency.com wrote:
I upgraded my dev environment to 0.6.0 today
be? If these are too low, would i get a more
descriptive error?
Lee Parker
thrift connection from multiple threads, for instance.
On Mon, Apr 12, 2010 at 10:02 AM, Lee Parker l...@socialagency.com wrote:
I am a newbie with Cassandra. We are currently migrating a large amount
of data out of MySQL into Cassandra. I have two ColumnFamilies. One
contains one row per
environment.
Lee Parker
On Mon, Apr 12, 2010 at 10:45 AM, Jonathan Ellis jbel...@gmail.com wrote:
Pandra is probably targeting 0.6.
If you're just starting, there's no reason for you not to use 0.6 over 0.5
now.
On Mon, Apr 12, 2010 at 10:42 AM, Lee Parker l...@socialagency.com wrote:
I'm using
uses.
After regenerating my PHP code from thrift-r917130 (for
cassandra-0.6.0-rc1), the errors stopped.
-keith
On Mon, Apr 12, 2010 at 9:40 AM, vineet daniel vineetdan...@gmail.com wrote:
Can you post the code?
On Mon, Apr 12, 2010 at 9:22 PM, Lee Parker l...@socialagency.com wrote: