Hi Dominique,
I have the same problem! I would like to place an object on a specific node
because I'm working on a spatial application. How should I choose the K1
part to force a given object to go to a node?
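A minimal sketch of the usual approach, assuming ByteOrderedPartitioner (where the token is simply the raw key bytes); the 0x40 prefix below is hypothetical and must actually lie inside the target node's token range:

    import java.nio.ByteBuffer;
    import java.nio.charset.StandardCharsets;

    // Under ByteOrderedPartitioner a row lands on the node whose token range
    // contains the raw key bytes, so a fixed byte prefix pins the placement.
    byte[] nodePrefix = new byte[] { 0x40 };  // hypothetical: node owns tokens in [0x40.., 0x80..)
    byte[] objectId = "object-1234".getBytes(StandardCharsets.UTF_8);
    ByteBuffer routedKey = ByteBuffer.allocate(nodePrefix.length + objectId.length);
    routedKey.put(nodePrefix).put(objectId);
    routedKey.flip();  // use routedKey as the row key when writing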
2013/1/3 DE VITO Dominique dominique.dev...@thalesgroup.com
Hi Everton,
I'm currently using Astyanax 1.56.21 to retrieve an entire row. My code:
ColumnList<String> result = keyspace.prepareQuery(cf_name)
    .getKey(key)
    .execute().getResult();
But sometimes Astyanax returns an empty row for a specific key. For
example, on the first attempt Astyanax returns an empty row.
I ran the tests with only one machine, so CL_ONE is not the problem. Am
I right?
2013/1/15 Hiller, Dean dean.hil...@nrel.gov
What is your consistency level set to? If you set it to CL_ONE, you could
get different results. Or is your database constant and unchanging?
Dean
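For reference, a minimal sketch of pinning the consistency level on the earlier Astyanax query (Astyanax 1.56.x API; keyspace, cf_name, and key are as in the snippet above):

    import com.netflix.astyanax.model.ColumnList;
    import com.netflix.astyanax.model.ConsistencyLevel;

    // Read the whole row at quorum rather than the default CL_ONE.
    ColumnList<String> result = keyspace.prepareQuery(cf_name)
        .setConsistencyLevel(ConsistencyLevel.CL_QUORUM)
        .getKey(key)
        .execute()
        .getResult();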
From: Sávio
We have multiple clients reading the same row key. It makes no sense to fail
on one machine. When we use Thrift, Cassandra always returns the correct
result.
2013/1/16 Sávio Teles savio.te...@lupa.inf.ufg.br
I ran the tests with only one machine, so CL_ONE is not the problem.
Am I right?
We wish to store a column in a row whose size is larger than
thrift_framed_transport_size_in_mb. But Thrift has a maximum frame size,
configured by thrift_framed_transport_size_in_mb in cassandra.yaml.
So, how can we store columns larger than
thrift_framed_transport_size_in_mb? Increasing this
Thanks Keith Wright.
2013/1/21 Keith Wright kwri...@nanigans.com
This may be helpful:
https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store
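As a rough sketch of that recipe (following the wiki page above; keyspace, CF_CHUNK, in, and out are assumptions standing in for your own keyspace, chunk column family, and streams):

    import com.netflix.astyanax.recipes.storage.CassandraChunkedStorageProvider;
    import com.netflix.astyanax.recipes.storage.ChunkedStorage;
    import com.netflix.astyanax.recipes.storage.ChunkedStorageProvider;
    import com.netflix.astyanax.recipes.storage.ObjectMetadata;

    ChunkedStorageProvider provider =
        new CassandraChunkedStorageProvider(keyspace, CF_CHUNK);

    // Write: the stream is split into fixed-size chunks, each stored separately,
    // so no single mutation exceeds the Thrift frame size.
    ObjectMetadata meta = ChunkedStorage.newWriter(provider, "myObject", in)
        .withChunkSize(0x10000)  // 64 KB chunks
        .call();

    // Read: chunks are fetched a few at a time and reassembled into the stream.
    ChunkedStorage.newReader(provider, "myObject", out)
        .withBatchSize(4)
        .call();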
From: Vegard Berget p...@fantasista.no
Reply-To: user@cassandra.apache.org, Vegard Berget p...@fantasista.no
Astyanax splits large objects into multiple keys. Is that a good idea? Would it
be better to split into multiple columns?
Thanks
2013/1/21 Sávio Teles savio.te...@lupa.inf.ufg.br
Thanks Keith Wright.
2013/1/21 Keith Wright kwri...@nanigans.com
This may be helpful:
https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store
) be distributed on different nodes. This helps to avoid hot
spots.
Hope this helps,
-Jason Brown
Netflix
--
*From:* Sávio Teles [savio.te...@lupa.inf.ufg.br]
*Sent:* Monday, January 21, 2013 9:51 AM
*To:* user@cassandra.apache.org
*Subject:* Re: How to store
We are using ChunkedStorage, described in
https://github.com/Netflix/astyanax/wiki/Chunked-Object-Store, to store
large objects (about 40 MB). We have set the chunk size to 1 MB. But
when this code is called, UnavailableException is thrown. Does
anyone have any idea?
Thanks in advance.
I have the same problem!
2013/3/11 Alain RODRIGUEZ arodr...@gmail.com
I can add that I have JNA correctly loaded, from the logs: JNA mlockall
successful
2013/3/11 Alain RODRIGUEZ arodr...@gmail.com
Any clue on this?
A well-configured row cache could save us a lot of disk reads, and IO
is
I'm running a Cassandra 1.1.10 cluster with a ByteOrderedPartitioner. I'm
generating a key to force an object to be stored on a specific machine.
When I used org.apache.cassandra.thrift.CassandraServer to store the
object, the object was stored on the correct machine. When I used Thrift, the
key is
We are using Cassandra 1.2 embedded in a production environment.
We are seeing some issues with these lines:
// Look up the Thrift client state for the current connection's socket.
SocketAddress socket = remoteSocket.get();
assert socket != null;
ThriftClientState cState = activeSocketSessions.get(socket);
The remoteSocket is maintained per connection thread.
We're using ByteOrderedPartitioner to programmatically choose the machine
on which an object will be inserted.
How can I use ByteOrderedPartitioner with vnodes on Cassandra 1.2?
--
Best regards,
Sávio S. Teles de Oliveira
voice: +55 62 9136 6996
http://br.linkedin.com/in/savioteles
Master's student
2013 21:04, Sávio Teles savio.te...@lupa.inf.ufg.br wrote:
We're using ByteOrderedPartitioner to programmatically choose the machine
on which an object will be inserted.
How can I use ByteOrderedPartitioner with vnodes on Cassandra 1.2?
Don't. Managing tokens with ByteOrderedPartitioner is very
Ok! Thanks!
2013/7/3 Richard Low rich...@wentnet.com
On 3 July 2013 22:18, Sávio Teles savio.te...@lupa.inf.ufg.br wrote:
We were able to use ByteOrderedPartitioner on Cassandra 1.1 and
insert an object on a specific machine.
However, with Cassandra 1.2 and vnodes we can't implement
A bug was fixed in 2.0.0-beta2 by the C* developers. Try it!
2013/7/22 Andrew Cobley a.e.cob...@dundee.ac.uk
I've been noticing some strange cassandra-stress results with 2.0.0 beta
1. I've set up a single node on a Mac (4 GB RAM, 2.8 GHz Core 2 Duo) and
installed 2.0.0 beta1.
When I run
It is very useful for improving the app's performance.
For example, if you have a machine with capacity X, you can set
num_tokens=256. If you add a machine with capacity 2X to the cluster, you
can set num_tokens=512.
So, the new machine will automatically receive twice the load.
Moreover, you
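Sketched as cassandra.yaml settings (the values follow the example above; the exact 2:1 split is an assumption, since token ownership with vnodes is probabilistic):

    # cassandra.yaml on the existing capacity-X machines
    num_tokens: 256

    # cassandra.yaml on the new capacity-2X machine
    num_tokens: 512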
I need to perform range queries efficiently. I have a table like:
users
---
user_id | age | gender | salary | ...
The attribute user_id is the PRIMARY KEY.
Example of querying:
select * from users where user_id = 'x' and age > y and age < z and
salary > a and salary < b and gender = 'M';
Oops, an inverted index!
2013/8/26 Sávio Teles savio.te...@lupa.inf.ufg.br
Do I have to use an inverted index to optimize range query operations?
2013/8/23 Sávio Teles savio.te...@lupa.inf.ufg.br
I need to perform range queries efficiently. I have a table like:
users
---
user_id | age
exactly?
Alain
On 23 August 2013 at 14:53, Sávio Teles savio.te...@lupa.inf.ufg.br
wrote:
I need to perform range queries efficiently. I have a table like:
users
---
user_id | age | gender | salary | ...
The attribute user_id is the PRIMARY KEY.
Example of querying:
select * from users
Use a database that is designed for efficient range queries? ;D
Is there no way to do this with Cassandra? Like using Hive, Solr...
2013/8/27 Robert Coli rc...@eventbrite.com
On Fri, Aug 23, 2013 at 5:53 AM, Sávio Teles
savio.te...@lupa.inf.ufg.br wrote:
I need to perform range query
on secondary indexes will be passed to each node in
the ring.
-Vivek
On Tue, Aug 27, 2013 at 11:11 PM, Sávio Teles savio.te...@lupa.inf.ufg.br
wrote:
Use a database that is designed for efficient range queries? ;D
Is there no way to do this with Cassandra? Like using Hive, Solr...
2013/8/27 Robert
I can populate it again. We are still modelling the data! Thanks.
2013/8/28 Vivek Mishra mishra.v...@gmail.com
Just saw that you already have data populated, so I guess modifying for a
composite key may not work for you.
-Vivek
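For illustration, one possible composite-key remodel in CQL (a sketch only; users_by_gender and its columns are hypothetical, and partitioning on a low-cardinality column like gender can create hot spots):

    CREATE TABLE users_by_gender (
        gender text,
        age int,
        user_id text,
        salary float,
        PRIMARY KEY (gender, age, user_id)
    );

    -- Range slices on the clustering column are answered within one partition:
    SELECT * FROM users_by_gender WHERE gender = 'M' AND age > 25 AND age < 35;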
On Tue, Aug 27, 2013 at 11:55 PM, Sávio Teles savio.te
I have a column family with this conf:
CREATE TABLE geoms (
geom_key text PRIMARY KEY,
part_geom list<blob>,
the_geom text
) WITH
bloom_filter_fp_chance=0.01 AND
caching='KEYS_ONLY' AND
comment='' AND
dclocal_read_repair_chance=0.00 AND
gc_grace_seconds=864000 AND
I'm running one Cassandra node (version 1.2.6) and I enabled the row
cache with 1 GB.
But looking at the Cassandra metrics in JConsole, Row Cache Requests are
very low after a high number of queries (about 12 requests).
RowCache metrics:
Capacity: 1 GB
Entries: 3
HitRate: 0.75
Yes, it is! I've fixed the problem. I missed setting the caching property to
'ALL' when creating the column family.
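For the record, the fix is a one-liner in CQL (assuming the geoms table shown earlier; in Cassandra 1.2, 'ALL' caches both keys and rows):

    ALTER TABLE geoms WITH caching = 'ALL';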
2013/8/31 Jonathan Haddad jonathan.had...@gmail.com
9/12 = .75
It's a rate, not a percentage.
On Sat, Aug 31, 2013 at 2:21 PM, Sávio Teles
savio.te...@lupa.inf.ufg.br wrote:
I'm running Cassandra 1.2.4, and when I enable the row cache, the system
throws TimeoutException and garbage collection doesn't stop.
When I disable it, the query returns in 700 ms.
Configuration:
- row_cache_size_in_mb: 256
- row_cache_save_period: 0
- # row_cache_keys_to_save:
http://www.mail-archive.com/user@cassandra.apache.org/msg31693.html
tl;dr - it depends completely on use case. Small static rows work best.
On Mon, Sep 2, 2013 at 2:05 PM, Sávio Teles
savio.te...@lupa.inf.ufg.br wrote:
I'm running Cassandra 1.2.4, and when I enable the row cache
The list is null.
2013/9/3 Baskar Duraikannu baskar.duraika...@outlook.com
I don't know of any. I would check the size of the LIST. If it is taking long,
it could just be that the disk read is taking long.
--
Date: Sat, 31 Aug 2013 16:35:22 -0300
Subject: List<blob>
Solr's index sitting on a single machine, even if that single machine can
vertically scale, is a single point of failure.
And what about SolrCloud?
2013/9/30 Ken Hancock ken.hanc...@schange.com
Yes.
On Mon, Sep 30, 2013 at 1:57 PM, Andrey Ilinykh ailin...@gmail.com wrote:
Also, be aware
We have the same problem.
2013/11/5 Jiri Horky ho...@avast.com
Hi there,
we are seeing extensive memory allocation leading to quite long and
frequent GC pauses when using the row cache. This is on a Cassandra 2.0.0
cluster with the JNA 4.0 library, with the following settings:
key_cache_size_in_mb: 300
I'm running a cluster with Cassandra and my app embedded.
Regarding performance, is it better to run embedded Cassandra?
What are the implications of running an embedded Cassandra?
Tks
--
Best regards,
Sávio S. Teles de Oliveira
voice: +55 62 9136 6996
http://br.linkedin.com/in/savioteles
Is it advisable to run embedded Cassandra in production?
2014-04-16 12:08 GMT-03:00 Sávio Teles savio.te...@lupa.inf.ufg.br:
I'm running a cluster with Cassandra and my app embedded.
Regarding performance, is it better to run embedded Cassandra?
What are the implications of running
new feature X!)
would make embedding worth it only for edge scenarios. I would recommend
against it.
---
Chris Lohfink
On Apr 16, 2014, at 10:13 AM, Sávio Teles savio.te...@lupa.inf.ufg.br
wrote:
Is it advisable to run embedded Cassandra in production?
2014-04-16 12:08 GMT-03:00