Is there a situation in which that behavior would be useful?
Guessing, it makes life easier for client implementations and is consistent in
the sense that, when doing a slice by name, the server is the entity that
decides which columns are in the result set.
I took a look at the performance of
Can you provide:
* the CF existing schema and output from nodetool cfstats
* the command you are running
* the error you get
Also it's handy to know what version the schema was originally created in.
Cheers
-
Aaron Morton
Freelance Developer
@aaronmorton
Each column mutation (insert / update or delete) includes an int64 timestamp.
Typically this is sent by the client; in the case of CQL it is usually set
by the coordinating server.
When we have multiple values for a column we compare timestamps: the higher
timestamp wins, and when timestamps are equal a delete wins over a write.
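That reconciliation rule can be sketched in a few lines of Ruby. This is an illustrative model only, not Cassandra's actual code; the Column struct and the reconcile helper are made up for the example:

```ruby
# Hypothetical model of column reconciliation: higher timestamp wins;
# on a timestamp tie, a tombstone (delete) wins over a live value.
Column = Struct.new(:value, :timestamp, :tombstone)

def reconcile(a, b)
  return a if a.timestamp > b.timestamp
  return b if b.timestamp > a.timestamp
  # Equal timestamps: the delete takes precedence.
  return a if a.tombstone
  b
end

live = Column.new("v1", 100, false)
dead = Column.new(nil, 100, true)
winner = reconcile(live, dead)
puts winner.tombstone  # the delete wins the tie
```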
You're seeing dropped mutations reported from nodetool tpstats?
Take a look at the logs. Look for messages from the MessagingService with the
pattern "{} {} messages dropped in last {}ms". They will be followed by info
about the TP stats.
First would be the workload. Are you sending very big
Aaron Morton (aa...@thelastpickle.com) advised:
If possible I would avoid using PHP. The PHP story with Cassandra has
not been great in the past. There is little love for it, so it takes a
while for changes to get into the client drivers.
AFAIK it lacks server-side state, which makes
Hi all,
I had problems creating a table with composite keys with CQL 3 and
accessing it via thrift.
AFAIK the comparators weren't set up in a compatible way, probably due to
the beta status of CQL 3.
So I'm now creating and using CFs with Composite Columns exclusively via
thrift/Astyanax. Pelops works
Hi All,
I have a question about the ColumnFamilies.ReadCount counter.
I use:
- One node.
- Cassandra 1.0.10.
- No row cache.
- One table Products containing a few rows
If I use the CLI command list Products, the ColumnFamilies.ReadCount
counter does not increase (also when checked via nodetool cfstats).
if
I had to give up on using CQL and thrift, and go to composites created
and accessed in thrift. I'm using Hector for java access and python
for scripting
access.
-g
On Sun, Aug 19, 2012 at 5:19 AM, Georg Köster georg.koes...@gmail.com wrote:
Hi all,
I had problems creating a table with
Hi all
I have a Windows 7 machine (64 bit) with DataStax community server
installed. Running a benchmark app on the server gives me 7000 inserts per
second. Running the same app on a networked client gives me only 5 inserts
per second. The two computers are connected directly via a crossover
Hi Nick,
I'm talking about the total writes/reads in the dashboard (left graph). It
exactly tripled during our update. I guess this changed because we also
replicate with RF=3. Is that true?
With kind regards,
Robin Verlangen
*Software engineer*
Are you using multiple client threads?
You might want to try the stress tool in the distribution.
On 08/19/2012 02:09 PM, Peter Morris wrote:
Hi all
I have a Windows 7 machine (64 bit) with DataStax community server
installed. Running a benchmark app on the server gives me 7000
inserts
No I am using a single thread. My aim at this point is to see how quickly I
can connect, post, then complete. I just find it bizarre that it goes from
7000 per second down to 5!
On Sun, Aug 19, 2012 at 8:22 PM, Dave Brosius dbros...@mebigfatguy.comwrote:
Are you using multiple client
You're almost certainly using a client that doesn't set TCP_NODELAY on
the Thrift TCP socket. The Nagle algorithm is enabled, leading to 200
ms latency for each request, and thus 5 requests/second.
http://en.wikipedia.org/wiki/Nagle's_algorithm
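As a sketch of the fix, here is how a Ruby client could disable Nagle on its socket. The throwaway local server exists only so the example is self-contained; a real client would set the same option on its Thrift connection (default RPC port 9160):

```ruby
require 'socket'

server = TCPServer.new('127.0.0.1', 0)   # throwaway local listener
port   = server.addr[1]

sock = TCPSocket.new('127.0.0.1', port)
# Disable Nagle's algorithm so small writes are sent immediately.
sock.setsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY, 1)

flag = sock.getsockopt(Socket::IPPROTO_TCP, Socket::TCP_NODELAY).int
puts flag  # 1 => Nagle disabled

sock.close
server.close
```

Many Thrift client libraries expose an equivalent option; check whether yours sets it by default before blaming the network.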
--
/ Peter Schuller (@scode,
On Sun, Aug 19, 2012 at 3:55 AM, aaron morton aa...@thelastpickle.comwrote:
It is not a judgement on the quality of PHPCassa or PDO-cassandra, neither
of which I have used.
My comments were mostly informed by past issues with Thrift and PHP.
Eh, you don't need to disclaim your opinion
On Sun, Aug 19, 2012 at 6:27 AM, Rene Kochen rene.koc...@schange.comwrote:
Why does it not increase when servicing a range operation?
It doesn't because, basically, it wasn't designed to. Range queries aren't
very commonly used with Cassandra, so I doubt that there has been any
demand for
I set up Cassandra with the default configuration on a clean AWS instance, and
I insert 1 column into a row; each column has 1MB of data. I use
this Ruby (version 1.9.3) script:
1.times do
  key = rand(36**8).to_s(36)
  value = rand(36**1024).to_s(36) * 1024