Hi,
Can we know whether a value will eventually be applied after a
WriteTimeoutException?
I'm investigating the behavior of counter writes in Cassandra.
For a CAS write, we know the value will eventually be applied when the
WriteType of the response is SIMPLE, because that means the write failed in
the commit phase.
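As a minimal sketch of that decision logic (plain Python; the WriteType enum here stands in for the driver's type, and treating the counter case as unknown is my assumption, not something confirmed by the docs):

```python
from enum import Enum
from typing import Optional

class WriteType(Enum):
    """Stand-in for the driver's WriteType on a WriteTimeoutException."""
    SIMPLE = "SIMPLE"    # CAS: timeout in the commit phase
    CAS = "CAS"          # CAS: timeout in the Paxos prepare/propose phase
    COUNTER = "COUNTER"  # counter write: not idempotent

def applied_eventually(write_type: WriteType) -> Optional[bool]:
    """Return True when the value is guaranteed to be applied eventually,
    or None when the outcome is simply unknown."""
    if write_type is WriteType.SIMPLE:
        # Commit-phase timeout: the Paxos round was already accepted,
        # so a later serial read/write will complete the commit.
        return True
    # Paxos-phase or counter timeout: the write may or may not land.
    return None
```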
How about
ode and not commitlog? Batch is
> typically not what people expect (group commitlog in 4.0 is probably closer
> to what you think batch does).
>
> --
> Jeff Jirsa
>
>
> On Nov 27, 2018, at 10:55 PM, Yuji Ito wrote:
>
> Hi,
>
> Thank you for the reply.
> I've
ing Netty in 4.0. It will be better to
> test it using that as potential changes will mostly land on top of that.
>
> On Mon, Nov 26, 2018 at 7:39 AM Yuji Ito wrote:
>
>> Hi,
>>
>> I'm investigating LWT performance with C* 3.11.3.
>> It seems that the performance is
Our tests are based on riptano's great work.
https://github.com/riptano/jepsen/tree/cassandra/cassandra
I refined it for the latest Jepsen and removed some tests.
Next, I'll fix clock-drift tests.
I would like to get your feedback.
Thanks,
Yuji Ito
s.apache.org/jira/browse/CASSANDRA-9766
>
> To pick up those fixes, you'd want to benchmark 3.11.1 instead of 3.0.15
>
>
> On Tue, Oct 17, 2017 at 8:04 PM, Yuji Ito <y...@imagine-orb.com> wrote:
>
>> Hi all,
>>
>> I'm comparing performance between Cass
Hi all,
I'm comparing performance between Cassandra 2.2.10 and 3.0.15.
SELECT on 3.0.15 is faster than on 2.2.10.
However, conditional INSERT and UPDATE on 3.0.15 are slower than on 2.2.10.
Is this expected? If so, I'd like to know why.
I'm going to measure the performance of non-conditional operations next.
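For the measurement I'm using a tiny harness along these lines (a sketch in plain Python; the session and statement names in the comment are assumptions, not code from my actual test):

```python
import time
from statistics import median

def median_latency_ms(op, n=1000):
    """Run op() n times and return the median latency in milliseconds."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        op()
        samples.append((time.perf_counter() - t0) * 1000.0)
    return median(samples)

# Hypothetical usage with a driver session (names are assumptions):
#   median_latency_ms(lambda: session.execute(conditional_insert), n=10000)
#   median_latency_ms(lambda: session.execute(plain_insert), n=10000)
```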
you getting Cassandra 2.2 built from yum?
> On Wed, May 10, 2017 at 9:54 PM Yuji Ito <y...@imagine-orb.com> wrote:
>
>> Hi Joaquin,
>>
>> > Were both tests run from the same machine at close the same time?
>> Yes. I ran both tests within 30 minutes.
>> I
lting
> http://www.thelastpickle.com
>
> On Wed, May 10, 2017 at 5:01 AM, Yuji Ito <y...@imagine-orb.com> wrote:
>
>> Hi all,
>>
>> I'm trying a simple performance test.
>> The test requests select operations (CL.SERIAL or CL.QUORUM) by
>> increasing
Hi all,
I'm trying a simple performance test.
The test requests select operations (CL.SERIAL or CL.QUORUM) by increasing
the number of threads.
There is a performance difference between the C* installed via yum and
the C* that I built myself.
What causes the difference?
I use C* 2.2.8.
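The load test is roughly shaped like this (a sketch in plain Python; session and select_serial in the comment are assumptions standing in for a driver session and a statement bound at CL.SERIAL):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def throughput(op, n_threads, requests_per_thread=100):
    """Issue op() from n_threads concurrent workers; return overall ops/second."""
    def worker():
        for _ in range(requests_per_thread):
            op()
    t0 = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        for f in [pool.submit(worker) for _ in range(n_threads)]:
            f.result()  # propagate any exception raised in a worker
    return n_threads * requests_per_thread / (time.perf_counter() - t0)

# Hypothetical usage, sweeping the thread count:
#   for n in (1, 2, 4, 8, 16):
#       print(n, throughput(lambda: session.execute(select_serial), n))
```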
One
ion
objects per instance.
But that is inefficient and uncommon.
So we aren't sure the application will work when a lot of cluster/session
objects are created.
Is that correct?
Thank you,
Yuji
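A sketch of the shared-session pattern we are considering instead (plain Python; connect() here stands in for the driver's Cluster(...).connect(), so the names are assumptions):

```python
import threading

class SharedSession:
    """Build one cluster/session pair per process and share it across
    threads, instead of one pair per client object."""
    _lock = threading.Lock()
    _session = None

    @classmethod
    def get(cls, connect):
        if cls._session is None:          # fast path, no lock
            with cls._lock:
                if cls._session is None:  # re-check under the lock
                    cls._session = connect()
        return cls._session
```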
On Wed, Feb 8, 2017 at 12:01 PM, Ben Bromhead <b...@instaclustr.com> wrote:
> On Tue, 7 Feb 2017 at 17
gt;>
>> Hi,
>>
>> The API seems somewhat off, because credentials would usually be set
>> on a session, but they are actually set on a cluster.
>>
>> So, if there are 1000 clients, does this API require creating
>> 1000 cluster instances?
>>
Hi all,
I want to know how to authenticate Cassandra users across multiple instances
with the Java driver.
For instance, each thread creates an instance to access Cassandra with
authentication.
In the implementation example, only the first constructor builds a cluster
and a session.
Other constructors
Hi Shalom,
I also got a WriteTimeoutException in my destructive test, which is like
your test.
When did you drop the node?
A coordinator node sends a write request to all replicas.
When one of the nodes goes down while the request is being executed, a
WriteTimeoutException sometimes happens.
cf.
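One way I handle such timeouts in a test harness is a retry wrapper like this sketch (plain Python; WriteTimeout stands in for the driver's exception class):

```python
import time

class WriteTimeout(Exception):
    """Stand-in for the driver's WriteTimeoutException."""

def retry_idempotent_write(write, retries=3, backoff_s=0.05):
    """Retry a write after a timeout, with exponential backoff.

    A timed-out write may already have been applied on some replicas,
    so this is only safe for idempotent statements (plain INSERT/UPDATE),
    never for counter updates."""
    for attempt in range(retries + 1):
        try:
            return write()
        except WriteTimeout:
            if attempt == retries:
                raise
            time.sleep(backoff_s * (2 ** attempt))
```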
Hi all,
I got an OutOfMemoryError during the startup process, as below.
I have 3 questions about the error.
1. Why did the Cassandra I built myself cause OutOfMemory errors?
OutOfMemory errors happened during the startup process on some (not all)
nodes running Cassandra 2.2.8, which I got from GitHub and built myself.
> to
>> > clear out the hints
>> > (http://cassandra.apache.org/doc/latest/tools/nodetool/
>> truncatehints.html?highlight=truncate)
>> > . However, it seems to clear all hints on particular endpoint, not just
>> for
>> > a specific table.
nsistency serial; select
count(*) from testdb.testtbl;"
echo "restart C* process on node2"
pdsh -l $node2_user -R ssh -w $node2_ip "sudo /etc/init.d/cassandra start"
Thanks,
yuji
On Fri, Nov 18, 2016 at 7:52 PM, Yuji Ito <y...@imagine-orb.com> wrote:
> I i
ed a high
> priority to fix given there is a clear operational work-around.
>
> Cheers
> Ben
>
> On Thu, 24 Nov 2016 at 15:14 Yuji Ito <y...@imagine-orb.com> wrote:
>
>> Hi Ben,
>>
>> I continue to investigate the data loss issue.
>> I'm investigating
ERROR logs) in the failing case.
>
> Cheers
> Ben
>
> On Fri, 11 Nov 2016 at 13:07 Yuji Ito <y...@imagine-orb.com> wrote:
>
>> Thanks Ben,
>>
>> I tried 2.2.8 and could reproduce the problem.
>> So, I'm investigating some bug fixes of repair and commitlo
5,028)
thanks,
yuji
On Wed, Nov 16, 2016 at 5:25 PM, Yuji Ito <y...@imagine-orb.com> wrote:
> Hi,
>
> I could find stale data after truncating a table.
> It seems that truncating starts while recovery is being executed just
> after a node restarts.
> After the t
Hi,
I found stale data after truncating a table.
It seems that truncation starts while recovery is still running just after
a node restarts.
Does recovery continue even after the truncation finishes?
Is this expected?
I use C* 2.2.8 and can reproduce it as below.
[create table]
en for some (bad) reason.
>
> Good news that 3.0.9 fixes the problem so up to you if you want to
> investigate further and see if you can narrow it down to file a JIRA
> (although the first step of that would be trying 2.2.9 to make sure it’s
> not already fixed there).
>
> Cheer
I tried C* 3.0.9 instead of 2.2.
The data-loss problem hasn't happened so far (without `nodetool flush`).
Thanks
On Fri, Nov 4, 2016 at 3:50 PM, Yuji Ito <y...@imagine-orb.com> wrote:
> Thanks Ben,
>
> When I added `nodetool flush` on all nodes after step 2, the problem
> di
)
>
> Cheers
> Ben
>
> On Mon, 24 Oct 2016 at 10:29 Yuji Ito <y...@imagine-orb.com> wrote:
>
>> Hi Ben,
>>
>> The test without killing nodes has been working well, with no data loss.
>> I've repeated my test about 200 times after removing data and
>&g
generate any overlapping inserts (by
> PK)? Cassandra basically treats any inserts with the same primary key as
> updates (so 1000 insert operations may not necessarily result in 1000 rows
> in the DB).
>
> On Fri, 21 Oct 2016 at 16:30 Yuji Ito <y...@imagine-orb.com> wrote:
>
>
n strategy is used by the test
> keyspace? What consistency level is used by your operations?
>
>
> Cheers
> Ben
>
> On Fri, 21 Oct 2016 at 13:57 Yuji Ito <y...@imagine-orb.com> wrote:
>
>> Thanks Ben,
>>
>> I tried to run a reb
e at least) but I think the solution
> of running a rebuild or repair still applies.
>
> On Tue, 18 Oct 2016 at 15:45 Yuji Ito <y...@imagine-orb.com> wrote:
>
>> Thanks Ben, Jeff
>>
>> Sorry that my explanation confused you.
>>
>> Only node1 is the seed n
will probably be quicker given you
> know all the data needs to be re-streamed.
>
> Cheers
> Ben
>
> On Tue, 18 Oct 2016 at 14:03 Yuji Ito <y...@imagine-orb.com> wrote:
>
>> Thank you Ben, Yabin
>>
>> I understood the rejoin was illegal.
>> I expec
Hi all,
A failed node can rejoin a cluster.
On that node, all data in /var/lib/cassandra had been deleted.
Is that normal?
I can reproduce it as below.
cluster:
- C* 2.2.7
- a cluster has node1, 2, 3
- node1 is a seed
- replication_factor: 3
how to:
1) stop C* process and delete all data in
ors...@gmail.com> wrote:
> Nope, I still don't get stale values. (I just ran your script 3 times)
>
> On Thu, Aug 25, 2016 at 12:36 PM, Yuji Ito <y...@imagine-orb.com> wrote:
>
>> Thank you for testing, Christian
>>
>> What did you set commitlog_sync in cassandra.y
alues. (On my Linux laptop)
>>
>> regards,
>> Ch
>>
>> On Mon, Aug 15, 2016 at 7:29 AM, Yuji Ito <y...@imagine-orb.com> wrote:
>>
>>> Hi,
>>>
>>> I can reproduce the problem with the following script.
>>> I got rows w
an I do.
>
> Have you tried with durable_writes=False? If the issue is caused by the
> commitlog, then it should work if you disable durable_writes.
>
> Cheers,
> Christian
>
>
>
> On Tue, Aug 9, 2016 at 3:04 PM, Yuji Ito <y...@imagine-orb.com> wrote:
>
>>
> Christian
>
>
>
> On Mon, Aug 8, 2016 at 7:34 AM, Yuji Ito <y...@imagine-orb.com> wrote:
>
>> Hi all,
>>
>> I have a question about clearing table and commit log replay.
>> After some tables were truncated consecutively, I got some stale val
Hi all,
I have a question about clearing table and commit log replay.
After some tables were truncated consecutively, I got some stale values.
This problem doesn't occur when I clear keyspaces with DROP (and CREATE).
I'm running the following test with node failure.
Some stale values appear at
el.SERIAL
4. the read succeeds, but the result isn't the new value
This problem does not always occur.
Most reads can get the latest value.
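In my harness I detect the symptom with a read-back check along these lines (a sketch; read_serial stands in for a SELECT executed at ConsistencyLevel.SERIAL, which should also complete any in-flight Paxos round):

```python
def check_latest(read_serial, expected, attempts=3):
    """Call read_serial() up to `attempts` times; return the attempt index
    at which the expected value first appeared, or -1 if every read
    returned a stale value."""
    for i in range(attempts):
        if read_serial() == expected:
            return i
    return -1
```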
Thanks
Yuji
On Mon, Jul 25, 2016 at 4:34 PM, DuyHai Doan <doanduy...@gmail.com> wrote:
> Can you outline the detailed steps to reproduce the issue
On July 24, 2016 at 8:08:01 PM, Yuji Ito (y...@imagine-orb.com) wrote:
>
>> Hi,
>>
>> I have another question about CAS operation.
>>
>> Can a read get stale data after failure in commit phase?
>>
>> According to the following article,
>> when
Hi,
I have another question about CAS operation.
Can a read get stale data after a failure in the commit phase?
According to the following article,
when a write fails in the commit phase (a WriteTimeout with WriteType SIMPLE
happens),
a subsequent read will repair the uncommitted state
and get the latest
gmail.com> wrote:
> Hi Yuji Ito,
>
> I don't know Cassandra 2.2 that much and I try to avoid using indexes, but
> I imagine that what happened there is that creating the index took some
> time, and all the newly created data went to L0 and compaction was
> intensive as this node
debug.log was filled with "Choosing candidates for L0".
This problem hasn't occurred with the STCS setting.
Thanks,
Yuji Ito
"1" in the row!!
5. the read/results phase fails with a ReadTimeoutException caused by the
failure of node C
Thanks,
Yuji Ito