On Thu, Jul 2, 2015 at 1:39 AM Serega Sheypak serega.shey...@gmail.com
wrote:
What is the reason to do that? I understand BatchStatement as a kind of
atomic-insert hack.
How can it help me solve the concurrency problem? 1 thread with sync
inserts gives me 1K ops/sec; 10 threads give me 20 ops/sec.
Hi, I have a weird driver behaviour. Can you help me please to find the
problem?
Problem: I try to insert data using 10 threads.
I see that 10 threads start, they begin to insert some data, and then they
hang. It takes an enormous amount of time to insert (seconds for 1K inserts).
A single thread runs at 1K per second:
99% = 0.02 milliseconds
99.9% = 0.12 milliseconds
What should I do to reach better performance when I use several threads?
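Batching is not the fix here; the usual way to get throughput from the driver is many in-flight asynchronous inserts per connection, bounded so the cluster is not flooded. A driver-free sketch of that throttling pattern (all names are hypothetical; `insertOne` stands in for `session.executeAsync(preparedStatement.bind(...))`):

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class ThrottledInserts {
    static final int MAX_IN_FLIGHT = 128;              // cap on concurrent requests

    // Stand-in for session.executeAsync(preparedStatement.bind(...)):
    // any async call that completes a future when the server replies.
    static CompletableFuture<Void> insertOne(AtomicInteger done, Executor io) {
        return CompletableFuture.runAsync(done::incrementAndGet, io);
    }

    public static int run(int total) {
        ExecutorService io = Executors.newFixedThreadPool(8);
        Semaphore permits = new Semaphore(MAX_IN_FLIGHT);
        AtomicInteger done = new AtomicInteger();
        for (int i = 0; i < total; i++) {
            permits.acquireUninterruptibly();          // block once 128 are in flight
            insertOne(done, io).whenComplete((v, t) -> permits.release());
        }
        io.shutdown();
        try {
            io.awaitTermination(30, TimeUnit.SECONDS);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        return done.get();
    }

    public static void main(String[] args) {
        System.out.println(run(1000));                 // 1000
    }
}
```

With a real session, the semaphore release goes in the future's completion callback exactly as above; the point is that throughput comes from in-flight requests, not from more blocking threads.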
2015-07-02 10:34 GMT+02:00 Vova Shelgunov vvs...@gmail.com:
Did you try to use BatchStatement?
On Jul 2, 2015 11:00 AM, Serega Sheypak serega.shey
Sorry, misprint
//composeQuery() = INSERT INTO packets (id, fingerprint, mark) VALUES (?,
?, ?);
PreparedStatement preparedStatement = session.prepare(composeQuery());
//exception happens here!
2015-06-24 11:20 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
Hi, I'm trying to use a bound query and I get a weird error.
Here is the query:
Bound query: INSERT INTO packets (id, fingerprint, mark) VALUES (?, ?, ?);
Here is a code:
PreparedStatement preparedStatement = session.prepare(composeQuery());
//composeQuery returns INSERT INTO packets (id,
omg!!!
It was some weird unprintable character. That is why the C* driver failed to
parse it.
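One defensive option, a sketch (not part of the driver), is to scrub control and format characters, such as a zero-width space pasted in with the query text, before calling `session.prepare`:

```java
public class CqlSanitizer {
    // Drop invisible control/format characters (e.g. zero-width space U+200B,
    // or a BOM) that make the CQL parser choke; ordinary whitespace such as
    // tabs and newlines is normalized to a plain space.
    static String stripUnprintable(String query) {
        StringBuilder sb = new StringBuilder(query.length());
        for (int i = 0; i < query.length(); i++) {
            char c = query.charAt(i);
            int type = Character.getType(c);
            if (type == Character.CONTROL || type == Character.FORMAT) {
                if (c == '\n' || c == '\r' || c == '\t') sb.append(' ');
                // otherwise drop the character entirely
            } else {
                sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(stripUnprintable("INSERT\u200B INTO packets (id) VALUES (?);"));
    }
}
```

Then `session.prepare(stripUnprintable(composeQuery()))` would have failed loudly in your editor instead of at the server.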
2015-06-24 11:35 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
Hi, spark-sql estimated the input for a Cassandra table with 3 rows as 8 TB;
sometimes it's estimated as -167 B.
I run it on a laptop; I don't have 8 TB of space for the data.
We use DSE 4.7 with the bundled Spark and spark-sql-thriftserver.
Here are the stats for a dummy select foo from bar, where bar has three rows
Hi, are Cassandra and Spark from Cloudera compatible?
Where can I find these compatibility notes?
in DSE
with something else.
However, you could probably read or write from/to DSE / Cassandra from a
Cloudera Spark cluster using the open-source DataStax connector. Are you
looking for a particular feature that is not available in Spark 1.1?
On Apr 22, 2015 1:50 PM, Serega Sheypak serega.shey
-services/datastax-enterprise
Thanks,
Jay
On Wed, Apr 22, 2015 at 6:41 AM, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, are Cassandra and Spark from Cloudera compatible?
Where can I find these compatibility notes?
/datastax_enterprise/4.6/datastax_enterprise/spark/sparkTOC.html
On Apr 22, 2015 2:05 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
What is embedded spark? Where can I read about it?
Right now we just install Spark 1.2 built for Hadoop 2.4 and use it to
query data from Cassandra.
2015-04
Hi, what happens if the unloader meets a blob field?
2015-04-20 23:43 GMT+02:00 Sebastian Estevez sebastian.este...@datastax.com
:
Try Brian's cassandra-unloader
https://github.com/brianmhess/cassandra-loader#cassandra-unloader
All the best,
I understand the reason, but if I use OrderPreservingPartitioner and have a
compound partition key, can I run a select using only the FIRST component of
the compound partition key?
2015-04-08 20:43 GMT+02:00 Robert Coli rc...@eventbrite.com:
On Wed, Apr 8, 2015 at 1:27 AM, Serega Sheypak serega.shey
Hi, imagine I have a table events
with fields:
ymd int
user_id uuid
ts timestamp
attr_1
attr_2
with primary key ((ymd, user_id, ts))
and I set OrderPreservingPartitioner as the partitioner for the table.
ymd is an int representation of the day: 20150410, 20150411, etc.
Can I select from table using
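For reference, the ymd encoding above (20150410 = year*10000 + month*100 + day) can be derived from an event timestamp like this; a minimal sketch, assuming UTC day buckets:

```java
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneOffset;

public class Ymd {
    // Encode a day as an int like 20150410: year*10000 + month*100 + day.
    static int toYmd(LocalDate d) {
        return d.getYear() * 10000 + d.getMonthValue() * 100 + d.getDayOfMonth();
    }

    // Derive the day bucket from an event timestamp (UTC assumed here).
    static int bucketOf(Instant ts) {
        return toYmd(ts.atZone(ZoneOffset.UTC).toLocalDate());
    }

    public static void main(String[] args) {
        System.out.println(toYmd(LocalDate.of(2015, 4, 10))); // 20150410
    }
}
```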
one more good summary:
http://superuser.com/questions/845143/any-limitation-for-having-many-files-in-a-directory-in-mac-os-x
2015-04-07 13:49 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
That is the reason for trying to work with ZFS. Unfortunately, it was
dropped.
It's single-threaded for writing :)
2015-04-07 13:13 GMT+02:00 Jean Tremblay jean.tremb...@zen-innovations.com
:
Hi,
Why does everyone say that Cassandra should not be used in production on
Mac OS X?
Why would this not work?
Is there anyone out there using OS X in production? What is
in HFS are of fixed size, in HFS Plus the size can
vary depending on the actual size of the data they store.
2015-04-07 13:41 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
It's single-threaded for writing :)
2015-04-07 13:13 GMT+02:00 Jean Tremblay
jean.tremb...@zen-innovations.com:
Hi
That is the reason for trying to work with ZFS. Unfortunately, it was
dropped.
And that is the reason for the PCIe interface for the SSD in my MacBook Pro.
2015-04-07 13:46 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
HFS:
The Catalog File, which stores all the file and directory records
weeks.
http://kairosdb.github.io/kairosdocs/CassandraSchema.html
On Mon, Apr 6, 2015 at 3:27 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
Thanks, is it a kind of OpenTSDB?
2015-04-05 18:28 GMT+02:00 Kevin Burton bur...@spinn3r.com:
Hi, I'm getting a weird problem when the agent tries to connect to OpsCenter.
OpsCenter is installed on a VM with DSE and an agent.
It's not for production; I have 3 VMs with DSE and OpsCenter for dev/test
purposes.
The stacktrace from the agent log is:
vagrant@dsenode03:~$ sudo cat /var/log/datastax-agent/agent.log
Hi, I switched from HBase to Cassandra and am trying to find a solution for
time-series analysis on top of Cassandra.
I have an entity named Event.
Event has attributes:
user_id - the user who triggered the event
event_ts - when the event happened
event_type - type of event
some_other_attr - some other attrs we
for the row key. Then you could query
within the partition. The partition key determines which node can satisfy
the query. Designing your partition key judiciously is the key (haha!) to
performant Cassandra applications.
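Applying that advice to the Event entity above, one possible layout (a sketch, not a recommendation from this thread) buckets by user and day so each query touches a single partition:

```sql
-- Hypothetical layout: partition on (user_id, ymd), cluster by event time,
-- so one user's events for one day live in one partition.
CREATE TABLE events (
    user_id uuid,
    ymd int,              -- day bucket, e.g. 20150404
    event_ts timestamp,
    event_type text,
    some_other_attr text,
    PRIMARY KEY ((user_id, ymd), event_ts)
);

-- Answered by a single partition (one replica set):
SELECT event_type, some_other_attr
FROM events
WHERE user_id = ? AND ymd = 20150404 AND event_ts >= '2015-04-04 00:00:00';
```

With the default Murmur3Partitioner this spreads load across nodes while still allowing time-range scans inside a day.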
-- Jack Krupansky
On Sat, Apr 4, 2015 at 9:33 AM, Serega Sheypak
Yes, all of the rows within a partition are stored on one physical node, as
well as on the replica nodes.
-- Jack Krupansky
On Sat, Apr 4, 2015 at 1:38 PM, Serega Sheypak serega.shey...@gmail.com
wrote:
non-equal relation on a partition key is not supported
Ok, can I generate select query
as the number of operators :). You should go with the
method you can be confident with. I can assure you the one I propose is quite
safe.
C*heers,
Alain
2015-03-31 15:32 GMT+02:00 Serega Sheypak serega.shey...@gmail.com:
I have to ask you if you considered doing an ALTER KEYSPACE to change the RF
, Serega Sheypak serega.shey...@gmail.com
wrote:
Hi, I have 2 Cassandra clusters.
cluster1 is DataStax Community 2.1
cluster2 is DataStax DSE
I can run sstableloader from cluster1 (Community) and stream data to
cluster2 (DSE),
but I get an exception while streaming from cluster2 (DSE) to cluster1
(Community).
The exception is:
Could not retrieve
Hi Bharat,
you are talking about Cassandra 1.2.5. Does it fit Cassandra 2.1?
Were there any significant changes to the SSTable format and layout?
Thank you, the article is interesting.
Hi Jacob jacob.rho...@me.com,
HBase does it for example. http://hbase.apache.org/book.html#_hfile_format_2
It would be
Sorry
cluster1 community version is: ii cassandra 2.1.3
distributed storage system for structured data
cluster2 DSE version is: ii dse-libcassandra4.6.2-1
The DataStax Enterprise package includes a production-certifie
2015-04-01 14:53 GMT+02:00 Serega Sheypak
Got it.
2015-04-01 20:39 GMT+02:00 Michael Shuler mich...@pbandjelly.org:
On 04/01/2015 08:10 AM, Serega Sheypak wrote:
Hi, I have a simple question and can't find the related info in the docs.
I have cluster1 with 3 nodes and cluster2 with 5 nodes. I want to transfer
the whole keyspace named 'mykeyspace' from cluster1 to cluster2 using
sstableloader. I understand that it's not the best solution; I need it for
testing.
wouldn't lead you to a failure of any kind. Also, I
don't know whether data is replicated by sstableloader as well, or whether you
need to repair c2 after loading the data.
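For the record, a typical per-table invocation might look like this (hosts and paths are placeholders; `-d` takes contact points in the target cluster):

```shell
# Stream one table's SSTables from cluster1's data directory into cluster2.
sstableloader -d c2node1,c2node2 /var/lib/cassandra/data/mykeyspace/mytable

# Repeat for each table directory in the keyspace; afterwards a repair
# on the target cluster is a reasonable precaution:
nodetool repair mykeyspace
```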
C*heers,
Alain
2015-03-31 13:21 GMT+02:00 Serega Sheypak serega.shey...@gmail.com: