From: Pan, Adeline (TR Technology & Ops)
Sent: Tuesday, September 06, 2016 12:34 PM
To: 'user@cassandra.apache.org'
Cc: Yang, Ling (TR Technology & Ops)
Subject: FW: WriteTimeoutException with LOCAL_QUORUM
Hi All,
I hope you are doing well today. I need your help.
We were using Cassandra 1
You're right, Christopher; I missed the fact that with RF=3, NTS will always
place a replica on us-east-1d, so in this case a repair on this node would be
sufficient. Thanks for clarifying!
2016-09-05 11:28 GMT-03:00 Christopher Bradford:
> If each AZ has a different rack
I call the QueryProcessor.execute method to insert data into a table in a
Cassandra unit-test file:
public static UntypedResultSet execute(String query, ConsistencyLevel cl, Object... values)
        throws RequestExecutionException
{
    return execute(query, cl, internalQueryState(), values);
}
Hi All,
As per the DataStax documentation, the Cassandra-to-Spark type mapping for
timestamp is: Long, java.util.Date, java.sql.Date, org.joda.time.DateTime.
Please help me with your input.
I have a Cassandra table with 30 fields, of which 3 are timestamps.
I read the Cassandra table using sc.cassandraTable
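The type mapping above can be illustrated without Spark: the listed JVM date types all wrap the same epoch-millisecond value, so a timestamp column read via sc.cassandraTable can be converted between them losslessly. A minimal sketch (the epoch value is made up; org.joda.time.DateTime is omitted because it needs the Joda-Time dependency):

```java
import java.util.Date;

public class TimestampMapping {
    public static void main(String[] args) {
        // Hypothetical timestamp value as stored by Cassandra (epoch millis).
        long epochMillis = 1473172440000L;

        // A Cassandra timestamp can surface as either type on the JVM side;
        // both are thin wrappers around the same millisecond count.
        Date asUtilDate = new Date(epochMillis);
        java.sql.Date asSqlDate = new java.sql.Date(epochMillis);

        // Round-tripping preserves the underlying value.
        System.out.println(asUtilDate.getTime() == epochMillis); // true
        System.out.println(asSqlDate.getTime() == epochMillis);  // true
    }
}
```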
Attached are the sstablemetadata outputs from 2 SSTables of size 28 MB and 52
MB (out2). The records are inserted with different TTLs based on their
nature: test records with 1 day, typeA records with 6 months, typeB
records with 1 year, etc. There are also explicit DELETEs from this table,
though
Hi,
You don't have to worry about that unless you write with CL = ANY. The only
method I know of to force hint delivery is to invoke scheduleHintDelivery on
"org.apache.cassandra.db:type=HintedHandoffManager" via JMX, but it takes an
endpoint as an argument. If you have lots of nodes and several DCs,
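For a single node, the JMX call described above can be scripted with the standard javax.management API. A sketch under the assumption that the MBean name and operation are exactly as quoted; the host, port, and endpoint address are placeholders:

```java
import javax.management.ObjectName;

public class ForceHintDelivery {
    public static void main(String[] args) throws Exception {
        // MBean name quoted in the mail above.
        ObjectName hhm = new ObjectName(
                "org.apache.cassandra.db:type=HintedHandoffManager");
        System.out.println(hhm.getCanonicalName());

        // To actually trigger delivery you would connect to a live node
        // (HOST, ENDPOINT_IP are placeholders) and invoke the operation:
        //
        // JMXServiceURL url = new JMXServiceURL(
        //         "service:jmx:rmi:///jndi/rmi://HOST:7199/jmxrmi");
        // try (JMXConnector jmxc = JMXConnectorFactory.connect(url)) {
        //     jmxc.getMBeanServerConnection().invoke(hhm, "scheduleHintDelivery",
        //             new Object[]{"ENDPOINT_IP"},
        //             new String[]{"java.lang.String"});
        // }
    }
}
```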
We have seen read timeout issues in Cassandra caused by a high droppable
tombstone ratio on a table.
Please check the droppable tombstone ratio for your table.
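For reference, sstablemetadata reports an "Estimated droppable tombstones" ratio per SSTable; with SizeTieredCompactionStrategy, an SSTable whose ratio exceeds the table's tombstone_threshold compaction option (default 0.2) becomes a candidate for single-SSTable tombstone compaction. A trivial sketch of that check (the ratio values are made up):

```java
public class DroppableTombstoneCheck {
    // Compares an SSTable's estimated droppable tombstone ratio (as reported
    // by sstablemetadata) against the compaction tombstone_threshold option.
    static boolean isTombstoneCompactionCandidate(double droppableRatio,
                                                  double tombstoneThreshold) {
        return droppableRatio > tombstoneThreshold;
    }

    public static void main(String[] args) {
        double defaultThreshold = 0.2; // Cassandra's default tombstone_threshold
        System.out.println(isTombstoneCompactionCandidate(0.35, defaultThreshold)); // true
        System.out.println(isTombstoneCompactionCandidate(0.05, defaultThreshold)); // false
    }
}
```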
On Mon, Sep 5, 2016 at 8:11 PM, Romain Hardouin wrote:
> Yes dclocal_read_repair_chance will reduce the
Yes, dclocal_read_repair_chance will reduce the cross-DC traffic and latency, so
you can swap the values
(https://issues.apache.org/jira/browse/CASSANDRA-7320). I guess
sstable_size_in_mb was set to 50 because back in the day (C* 1.0) the default
size was way too small: 5 MB. So maybe
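The suggested swap can be applied with an ALTER TABLE; a sketch with a placeholder keyspace/table name, exchanging the 0.1/0.0 values quoted elsewhere in this thread:

```sql
-- Hypothetical table name; moves the 0.1 chance from the global to the
-- DC-local read repair setting, per CASSANDRA-7320.
ALTER TABLE my_keyspace.my_table
  WITH read_repair_chance = 0.0
   AND dclocal_read_repair_chance = 0.1;
```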
If each AZ has a different rack identifier and the keyspace uses
NetworkTopologyStrategy with a replication factor of 3 then the single host
in us-east-1d *will receive 100% of the data*. This is due
to NetworkTopologyStrategy's preference for placing replicas across
different racks before placing
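The placement behaviour described above can be simulated with a toy ring. This is a sketch under the assumption of three racks (1a, 1b, 1d) with a single node in 1d and RF=3, using the simplified rule "walk the ring clockwise, take the first node from each not-yet-used rack"; node names and ring order are made up, and real NTS also handles the fallback when racks < RF:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class RackAwarePlacement {

    // Simplified NTS rule: walk the ring clockwise from the token's position
    // and take the first node from each rack not yet holding a replica.
    static List<String> replicasFor(int start, List<String[]> ring, int rf) {
        List<String> replicas = new ArrayList<>();
        Set<String> usedRacks = new HashSet<>();
        for (int i = 0; i < ring.size() && replicas.size() < rf; i++) {
            String[] node = ring.get((start + i) % ring.size()); // {name, rack}
            if (usedRacks.add(node[1])) {
                replicas.add(node[0]);
            }
        }
        return replicas;
    }

    public static void main(String[] args) {
        // Made-up ring: five nodes across us-east-1a/1b, one lone node in 1d.
        List<String[]> ring = Arrays.asList(
                new String[]{"n1", "1a"}, new String[]{"n2", "1b"},
                new String[]{"n3", "1a"}, new String[]{"n4", "1b"},
                new String[]{"n5", "1a"}, new String[]{"n6", "1d"});

        // With RF=3 and exactly three racks, every replica set must span all
        // three racks, so n6 (the lone 1d node) appears in every set, i.e. it
        // receives 100% of the data.
        for (int t = 0; t < ring.size(); t++) {
            System.out.println(t + " -> " + replicasFor(t, ring, 3));
        }
    }
}
```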
Thanks, Romain. We will try to enable the DEBUG logging (assuming it won't
clog the logs much). Regarding the table configs, read_repair_chance must
have been carried over from older versions, mostly defaults. I think
sstable_size_in_mb
was set to limit the max SSTable size, though I am not sure on
Hi,
Try putting org.apache.cassandra.db.ConsistencyLevel at DEBUG level; it could
help to find a regular pattern. By the way, I see that you have set a global
read repair chance, read_repair_chance = 0.1, and not the local read repair,
dclocal_read_repair_chance = 0.0. Is there any reason to
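The logger suggested above can be raised at runtime, without a restart or a logback edit, via nodetool (the host is a placeholder; setlogginglevel exists since Cassandra 2.1):

```shell
# Raise the suggested logger to DEBUG on a hypothetical node.
nodetool -h 127.0.0.1 setlogginglevel org.apache.cassandra.db.ConsistencyLevel DEBUG
```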
Hi Ryan,
Attached are the cfhistograms, run within a few minutes of each other. On the
surface, I don't see anything which indicates too much skewing (assuming
skewing == keys spread across many SSTables). Please confirm. Related to
this, what does the "cell count" metric indicate? I didn't find a clear