If it's run by the Linux Foundation, we may be able to present. Even better
(but very difficult), get one of our open-source users to present.
On Sat, 2022-10-01 at 13:04 -0700, Patrick McFadin wrote:
> I know a lot of people are excited to see the return of Cassandra
> Summit and I'm one of them!
On 12/12/2019 06.25, lampahome wrote:
On Thursday, December 12, 2019 at 12:42 AM, Jon Haddad <j...@jonhaddad.com> wrote:
I'm not sure how you're measuring this - could you share your
benchmarking code?
import time

start = time.time()
for i in range(40960):
    # hypothetical statement; the archived snippet is truncated after "prep ="
    prep = session.prepare("INSERT INTO t (k, v) VALUES (?, ?)")
    session.execute(prep, (i, i))
In other words, your write amplification would increase from 30-50 for
normal LCS to 200-300 when fan-out changes to 100.
On 18/09/2018 14.54, Marcus Eriksson wrote:
problem would be that for every file you flush, you would recompact
all of L1 - files are flushed to L0, then compacted together
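To make the 30-50 vs. 200-300 estimate concrete, here is a back-of-the-envelope sketch (my own illustration, not from the thread; it assumes each level's compaction can rewrite up to fan-out overlapping SSTables, so write amplification is roughly fan-out times the number of levels):

import math

def lcs_write_amplification(data_gb, sstable_mb=160, fanout=10):
    # L1 holds roughly `fanout` SSTables; each deeper level is fanout times larger.
    l1_gb = sstable_mb * fanout / 1024.0
    levels = 1 + max(0, math.ceil(math.log(data_gb / l1_gb, fanout)))
    # Every byte may be rewritten ~fanout times per level it passes through.
    return fanout * levels

print(lcs_write_amplification(1000, fanout=10))   # ~40, in the 30-50 band
print(lcs_write_amplification(1000, fanout=100))  # ~200, in the 200-300 band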
Does the last_update_date constraint filter out a lot of rows? In that
case the server may be reading a large number of rows, only to throw
them away since they get filtered out.
If you apply the filter on the client side, you shouldn't see timeouts
(but overall the process will be slower).
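For example, a minimal client-side filter with the DataStax Python driver could look like this (the keyspace, table, and column names are hypothetical):

from datetime import datetime
from cassandra.cluster import Cluster

session = Cluster(["127.0.0.1"]).connect("my_keyspace")
cutoff = datetime(2018, 1, 1)  # the driver returns timestamps as naive UTC datetimes

# Full scan without the server-side predicate: each page returns promptly,
# so no read timeouts; the filtering cost just moves to the client.
rows = session.execute("SELECT id, last_update_date FROM my_table")
recent = [r for r in rows if r.last_update_date >= cutoff]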
On the flip side, a large number of vnodes is also beneficial. For
example, if you add a node to a 20-node cluster with many vnodes, each
existing node will contribute 5% of the data towards the new node, and
all nodes will participate in streaming (meaning the impact on any
single node will
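As a back-of-the-envelope check (my own arithmetic, not from the thread): the new node ends up owning 1/(N+1) of the ring, and with many vnodes that share is drawn roughly evenly from every existing node.

nodes = 20
new_node_share = 1.0 / (nodes + 1)           # ~4.8% of all data moves to the new node
per_existing_node = new_node_share / nodes   # slice of the total each old node streams
print(per_existing_node / (1.0 / nodes))     # ~0.048: each node sheds ~5% of its own data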
There is now a readme with some examples and a build file.
On 07/11/2017 11:53 AM, Avi Kivity wrote:
Yeah, posting a github link carries an implied undertaking to write a
README file and make it easily buildable. I'll see what I can do.
On 07/11/2017 06:25 AM, Nate McCall <n...@thelastpickle.com> wrote:
On Tue, Jul 11, 2017 at 3:20 AM, Avi Kivity <a...@scylladb.com> wrote:
[1] https://github.com/avikivity/shardsim
the solution.
On 07/11/2017 10:27 AM, Loic Lambiel wrote:
Thanks for the hint and tool!
By the way, what does the --shards parameter mean?
Thanks
Loic
On 07/10/2017 05:20 PM, Avi Kivity wrote:
32 tokens is too few for 33 nodes. I have a sharding simulator [1] and
it shows
$ ./shardsim --vnodes 32 --nodes 33 --shards 1
33 nodes, 32 vnodes, 1 shards
maximum node overcommit: 1.42642
maximum shard overcommit: 1.426417
So 40% overcommit over the average. Since some nodes can be
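For intuition, here is a minimal sketch of the computation such a simulator performs (my reconstruction from the output above, not the actual shardsim code): place random vnode tokens on the ring, compute each node's share, and compare the maximum with the average.

import random

def max_node_overcommit(nodes=33, vnodes=32):
    # Each node owns the ring segment ending at each of its tokens.
    tokens = sorted((random.random(), n) for n in range(nodes)
                    for _ in range(vnodes))
    owned = [0.0] * nodes
    prev = tokens[-1][0] - 1.0  # wrap around so the shares sum to 1.0
    for pos, node in tokens:
        owned[node] += pos - prev
        prev = pos
    return max(owned) * nodes  # max share divided by the average share (1/nodes)

print(max_node_overcommit())  # typically ~1.4, matching the output above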
On Thu, May 25, 2017 at 8:06 AM Avi Kivity <a...@scylladb.com> wrote:
Not sure whether you're asking me or the original poster, but the
more times data gets overwritten in a memtable, the less it has to
be compacted later on (and even with
think keeping your data in the memtable is what you need to do?
On Thu, May 25, 2017 at 7:16 AM Avi Kivity <a...@scylladb.com> wrote:
Then it doesn't have to (it still may, for other reasons).
On 05/25/2017 05:11 PM, preetika tyagi wrote:
What if the commit log is disabled?
On May 25, 2017 4:31 AM, "Avi Kivity" <a...@scylladb.com> wrote:
Cassandra has to flush the memtable occasionally, or the commit log
grows without bounds.
On 05/25/2017 03:42 AM, preetika tyagi wrote:
Hi,
I'm running Cassandra with a very small dataset so that the data can
exist on memtable only. Below are my configurations:
In jvm.options:
-Xms4G
[ 1416.202103] blk_update_request: I/O error, dev nvme0n1, sector 1397303080
On 04/06/2017 05:26 PM, Avi Kivity wrote:
Is there anything in dmesg?
On 04/06/2017 07:25 PM, Cogumelos Maravilha wrote:
Now dies and restart (systemd) without logging why
system.log
INFO [Native-Transport-Requests-2] 2017-04-06 16:06:55,362 AuthCache.java:172 - (Re)initializing RolesCache (validity period/update interval/max
You can use static columns and just one table:
CREATE TABLE documents (
doc_id uuid,
element_id uuid,
description text static,
doc_title text static,
element_title text,
PRIMARY KEY (doc_id, element_id)
);
The static columns are present once per unique doc_id: they are stored with the partition, not with each row.
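To illustrate that behaviour, a sketch using the DataStax Python driver (the session setup is omitted and the values are made up):

from uuid import uuid4

# `session` is an open driver session connected to the right keyspace
doc, e1, e2 = uuid4(), uuid4(), uuid4()
session.execute(
    "INSERT INTO documents (doc_id, element_id, doc_title, element_title) "
    "VALUES (%s, %s, %s, %s)", (doc, e1, "My Doc", "Intro"))
session.execute(
    "INSERT INTO documents (doc_id, element_id, element_title) "
    "VALUES (%s, %s, %s)", (doc, e2, "Body"))

# Both rows report the same doc_title, even though it was written only once:
for row in session.execute(
        "SELECT element_title, doc_title FROM documents WHERE doc_id = %s", (doc,)):
    print(row.element_title, row.doc_title)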
Is the driver doing the right thing by directing all reads for a given
token to the same node? If that node fails, then all of those reads
will be directed at other nodes, all of whom will be cache-cold for the
failed node's primary token range. Seems like the driver should
distribute reads among all of a token's replicas.
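For what it's worth, the DataStax Python driver exposes this choice directly; a sketch (the shuffle_replicas flag exists only in recent driver versions, so treat this as version-dependent):

from cassandra.cluster import Cluster
from cassandra.policies import TokenAwarePolicy, DCAwareRoundRobinPolicy

# shuffle_replicas=True spreads reads over all replicas of a token rather
# than always hitting the primary, so no single node stays cache-cold.
cluster = Cluster(
    ["127.0.0.1"],
    load_balancing_policy=TokenAwarePolicy(DCAwareRoundRobinPolicy(),
                                           shuffle_replicas=True))
session = cluster.connect()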
Hi,
We did indeed consider support for a mixed cluster, but in the end
decided against it, for many reasons:
- the internode protocol is underdocumented and keeps changing, so it
would be hard to support it, and hard to test it
- it would limit the kind of optimizations we can do by
overhead and all the problems that come with it. This to me just sounds
like a difference in the design paradigm but doesn't seem to add much
to the performance.
Seastar sounds very similar to Quasar. And I am not seeing great
benefits from it.
On Sun, Mar 12, 2017 at 1:48 AM, Avi Kivity <a...@scylladb.com> wrote:
at 1:05 AM, Avi Kivity <a...@scylladb.com> wrote:
btw, for an example of how user-level tasks can be scheduled in a
way that cannot be done with kernel threads, see this pair of blog
posts:
http://www.scylladb.com/2016/04/14/io-
of control when you rely on the
kernel for scheduling and page cache management. As a result you have
to overprovision your node and then you mostly underutilize it.
On 03/12/2017 10:23 AM, Avi Kivity wrote:
On 03/12/2017 12:19 AM, Kant Kodali wrote:
My response is inline.
On Sat, Mar 11, 2017 at 1:43 PM, Avi Kivity <a...@scylladb.com> wrote:
There are several issues at play here.
First, a database runs a large number of concurrent operations
this add to the performance? Or, say, why is user-level scheduling
necessary, given the thread-per-core design and the callback mechanism?
On Sat, Mar 11, 2017 at 12:51 PM, Avi Kivity <a...@scylladb.com> wrote:
Scylla uses the Seastar framework, which
THP of size 1KB? According to this post
<https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/s-memory-transhuge.html>
it looks like the valid values are 2MB and 1GB.
Thanks,
kant
On Sat, Mar 11, 2017 at 11:41 AM, Avi Kivity <a...@scylladb.com> wrote:
Agreed. I'd recommend treating benchmarks as a rough guide to see where
there is potential, and following through with your own tests.
On 03/11/2017 09:37 PM, Edward Capriolo wrote:
Benchmarks are great for FUDly blog posts. Real-world workloads
matter more. Every NoSQL vendor wins their own benchmarks.
Bhuvan
On Sat, Mar 11, 2017 at 2:59 PM, Avi Kivity <a...@scylladb.com> wrote:
There is no magic 10X bullet. It's a mix of multiple factors,
which can come up to less than 10X in some circumstances and more
than 10X in others, as has been r
metrics (Disk, CPU, Network), the OPS increase starts to decay along
with the configs used. Having plain ops per second and p99 latency is
a black box.
Regards,
Bhuvan
On Fri, Mar 10, 2017 at 12:47 PM, Avi Kivity <a...@scylladb.com> wrote:
ScyllaDB engineer here.
C++ is really an enabling technology here. It is directly responsible
for a small fraction of the gain by executing faster than Java. But it
is indirectly responsible for the gain by allowing us direct control
over memory and threading. Just as an example, Scylla
or not. It was also my first thought, but in the end the main thing is
that it works again, and it does with more min_free_kbytes.
2017-02-06 11:53 GMT+01:00 Avi Kivity <a...@scylladb.com>:
On 01/26/2017 07:36 AM, Benjamin Roth wrote:
Hi there,
We installed 2 new nodes these days. They run on ubuntu (Ubuntu
16.04.1 LTS) with kernel 4.4.0-59-generic. On these nodes (and only on
these) CS gets killed by the kernel due to OOM. It seems very strange
to me because CS only takes