Impossible to guess with that info, but maybe one of:

- “Wrong” consistency level for reads or writes
- Incorrect primary key definition (you’re overwriting data you don’t realize you’re overwriting; see the sketch below)

Less likely:
- Broken cluster where hosts are flapping and you’re missing data on read
- Using a version of Cassandra with bugs in short read protection
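
If it’s the key, the quickest check is to insert two rows with identical 
primary key values and count what actually lands in the table. A minimal 
sketch, assuming the DataStax Java driver 3.x and a hypothetical table 
created as

  CREATE TABLE demo.events (id text, ts bigint, payload text, PRIMARY KEY (id, ts));

(keyspace, table, and column names are made up for illustration):

  import com.datastax.driver.core.Cluster;
  import com.datastax.driver.core.ConsistencyLevel;
  import com.datastax.driver.core.Row;
  import com.datastax.driver.core.Session;
  import com.datastax.driver.core.SimpleStatement;

  public class UpsertCheck {
      public static void main(String[] args) {
          try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
               Session session = cluster.connect("demo")) {

              long ts = System.currentTimeMillis();

              // Same primary key (id, ts) both times: Cassandra writes are
              // upserts, so the second INSERT silently replaces the first.
              session.execute("INSERT INTO events (id, ts, payload) VALUES (?, ?, ?)",
                              "sensor-1", ts, "first");
              session.execute("INSERT INTO events (id, ts, payload) VALUES (?, ?, ?)",
                              "sensor-1", ts, "second");

              // Read at QUORUM so the count isn't an artifact of a weak read
              // consistency level (the first bullet above).
              Row row = session.execute(
                  new SimpleStatement("SELECT count(*) FROM events WHERE id = ?", "sensor-1")
                      .setConsistencyLevel(ConsistencyLevel.QUORUM)).one();

              System.out.println("rows: " + row.getLong(0)); // prints 1, not 2
          }
      }
  }

If the count comes back 1, the fix is in the key design (e.g. add a 
uniquifying column such as a timeuuid), not in the cluster.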

-- 
Jeff Jirsa


> On Apr 21, 2018, at 2:05 PM, Soheil Pourbafrani <soheil.i...@gmail.com> wrote:
> 
> I consume data from Kafka and insert it into a Cassandra cluster using the 
> Java API. The table's primary key has 4 columns, including a 
> millisecond-based timestamp. But when the code runs, it only inserts 120 to 
> 190 rows and ignores the rest of the incoming data!
> 
> What could be causing the problem? Insert code that overwrites rows with 
> duplicate key values, improper cluster configuration, or something else?
