It looks like you're using the wrong tool and architecture.
If the use case really needs continuous-query-style event processing, use an ESP
product for that. You can still store data in Cassandra for persistence.
The design you want has two paths: event stream and persistence. At the
Hello.
We're currently using Hazelcast (http://hazelcast.org/) as a distributed
in-memory data grid. That's been working sort-of well for us, but going
solely in-memory has run its course in our use case, and we're
considering porting our application to a NoSQL persistent store. After the
See: https://issues.apache.org/jira/browse/CASSANDRA-5355
Collection values are currently limited to 64K because the serialized
form uses shorts to encode the element length (and for set elements
and map keys, because they are part of the internal column name, which is
itself limited to
From what I understand from the docs, the 64k limit applies to both the
number of items in a collection and the size of its elements?
Why is there a constraint on value size in collections, when other types
such as blob or text can be larger?
Thanks,
Sylvain
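The short-encoding explanation above can be illustrated with a small sketch (the helper name and framing below are illustrative, not Cassandra's actual serializer): a 16-bit unsigned length prefix caps a serialized element at 2**16 - 1 = 65535 bytes.

```python
import struct

# Collection elements are length-prefixed with an unsigned 16-bit short,
# so a single element's serialized size cannot exceed 2**16 - 1 = 65535 bytes.
MAX_ELEMENT_SIZE = 2**16 - 1

def encode_element(value: bytes) -> bytes:
    """Length-prefix a collection element with an unsigned short (">H")."""
    if len(value) > MAX_ELEMENT_SIZE:
        raise ValueError(
            f"element of {len(value)} bytes exceeds the 64K short limit"
        )
    return struct.pack(">H", len(value)) + value

print(len(encode_element(b"x" * 65535)))  # prints 65537 (2-byte prefix + payload)
```

A larger length simply cannot be represented in the two-byte prefix, which is why the limit is a hard cap of the wire format rather than a tunable setting.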
On 01/01/2015 20:04, DuyHai Doan wrote:
Looks like someone else is experiencing almost exactly what we are seeing:
https://issues.apache.org/jira/browse/CASSANDRA-8552
On Mon, Dec 29, 2014 at 5:14 PM, Robert Coli rc...@eventbrite.com wrote:
Might be https://issues.apache.org/jira/browse/CASSANDRA-8061 or one of
the linked/duplicate
On 03.01.2015 at 07:07, Srinivasa T N wrote:
Hi Wilm,
The reason is that, for auditing purposes, I want to store the
original files as well.
Well, then I would use an HDFS cluster for storage, as that seems to be
exactly what you need. If you colocate the HDFS DataNodes and YARN's
ResourceManager,
Hello all,
I have a Cassandra node on a machine. When I access cqlsh from the same
machine it works properly.
But when I try to connect to its cqlsh using 192.x.x.x from another
machine, I get an error saying
Connection error: ('Unable to connect to any servers', {'192.x.x.x':
Hello Hugo
I was facing the same kind of requirement from some users. Long story
short, below are the possible strategies, with the advantages and drawbacks of
each:
1) Put Spark in front of the back-end, every incoming
modification/update/insert goes into Spark first, then Spark will forward
it to
Hello,
Or you can have a look at Akka (http://www.akka.io) for event processing and
use Cassandra for persistence (Peter's suggestion).
On Sat Jan 03 2015 at 11:59:45 AM Peter Lin wool...@gmail.com wrote:
It looks like you're using the wrong tool and architecture.
If the use case really needs
Thank you all for your answers.
It seems I'll have to go with some event-driven processing before/during the
Cassandra write path.
My concern is that I'd like to first guarantee the disk write of the
Cassandra persistence and then do the event processing (which is mostly CRUD
Use a message bus with a transactional get: get the message, send it to
Cassandra, and upon write success submit it to the ESP and commit the get on
the bus. Messaging systems like RabbitMQ support these semantics.
Using Cassandra as a queuing mechanism is an anti-pattern.
--
Colin Clark
+1-320-221-9531
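The get/write/ack ordering Colin describes can be sketched with an in-memory stand-in for the broker (RabbitMQ's basic.get/basic.ack provide the same unacknowledged-message redelivery semantics; the class below is illustrative, not a real client):

```python
import queue

class AckQueue:
    """In-memory stand-in for a broker queue with get/ack semantics:
    a message stays eligible for redelivery until explicitly acked."""

    def __init__(self):
        self._q = queue.Queue()
        self._unacked = {}
        self._next_tag = 0

    def put(self, msg):
        self._q.put(msg)

    def get(self):
        msg = self._q.get()
        self._next_tag += 1
        self._unacked[self._next_tag] = msg
        return self._next_tag, msg

    def ack(self, tag):
        del self._unacked[tag]

    def nack(self, tag):
        # Redeliver: push the unacked message back onto the queue.
        self._q.put(self._unacked.pop(tag))

def consume(bus, write_to_cassandra, submit_to_esp):
    tag, msg = bus.get()
    try:
        write_to_cassandra(msg)   # 1. persist first
        submit_to_esp(msg)        # 2. then run the event processing
        bus.ack(tag)              # 3. only now drop the message from the bus
    except Exception:
        bus.nack(tag)             # failure: message remains for redelivery
```

The key property is that the ack happens last: if the Cassandra write or the ESP submission fails, the message is never lost, only redelivered.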
On Jan 3,
Listen to Colin's advice; avoid the temptation of anti-patterns.
On Sat, Jan 3, 2015 at 6:10 PM, Colin colpcl...@gmail.com wrote:
Use a message bus with a transactional get, get the message, send to
cassandra, upon write success, submit to esp, commit get on bus. Messaging
systems like
Indeed this makes sense for map keys and set values, but AFAIU from the
docs this also applies to map and list _values_: "The maximum size of
an item in a collection is 64K"
http://www.datastax.com/documentation/cql/3.0/cql/cql_using/use_collections_c.html
Or are collection values also
Thanks :)
Duly noted - this is all uncharted territory for us, hence the value of
seasoned advice.
Best
--
Hugo José Pinto
On 03/01/2015, at 23:43, Peter Lin wool...@gmail.com wrote:
listen to colin's advice, avoid the temptation of anti-patterns.
On Sat, Jan 3, 2015 at 6:10
If you like the SQL dialect, try out products that use StreamSQL to do
continuous queries. Esper comes to mind. Google to see what other products
support StreamSQL.
On Sat, Jan 3, 2015 at 6:48 PM, Hugo José Pinto hugo.pi...@inovaworks.com
wrote:
Thanks :)
Duly noted - this is all uncharted
Check firewall settings for incoming requests.
Regards,
Rao
On 3 Jan 2015 23:34, Chamila Wijayarathna cdwijayarat...@gmail.com
wrote:
Hello all,
I have a Cassandra node on a machine. When I access cqlsh from the same
machine it works properly.
But when I tried to connect to its cqlsh using
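To tell a firewall block apart from a node that simply isn't listening on the external interface, a quick TCP probe of the native-protocol port helps (9042 is the default for cqlsh in current releases; older setups used the Thrift port, 9160). This helper is a generic sketch, not part of any Cassandra tooling:

```python
import socket

def can_connect(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

If the probe fails from the remote machine but succeeds locally on the node, the cause is either a firewall rule or a listen/rpc address bound to localhost.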
This is most likely because your listen_address is set to localhost. Try
changing it to listen on the external interface.
On Sat Jan 03 2015 at 10:03:57 AM Chamila Wijayarathna
cdwijayarat...@gmail.com wrote:
Hello all,
I have a Cassandra node on a machine. When I access cqlsh from the same
Thanks Jonathan,
It worked after setting both listen_address and rpc_address to 0.0.0.0
On Sun, Jan 4, 2015 at 7:58 AM, Jonathan Haddad j...@jonhaddad.com wrote:
This is most likely because your listen_address is set to localhost. Try
changing it to listen on the external interface.
On
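For reference, the two cassandra.yaml settings discussed in this thread (values are illustrative; exact constraints vary by version, e.g. newer releases require broadcast_rpc_address to be set when rpc_address is 0.0.0.0):

```yaml
# cassandra.yaml (fragment; values illustrative)
listen_address: 0.0.0.0   # interface used for inter-node communication
rpc_address: 0.0.0.0      # interface for client connections (cqlsh, drivers);
                          # 0.0.0.0 binds all interfaces
```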