Greetings,

I am hitting a behavior that looks like a bug to me, and I'm not sure how to work around it. I insert rows under a given key like so:

    path = 'some:test:key'
    for c in range(count):
        session.execute("""insert into raw_data (key, column1, value)
                           values (%s, %s, %s)""",
                        (path, c, "%d" % (c + 1)))
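For reference, raw_data is a simple wide-row table along these lines (I'm sketching the definition from the queries; details may differ):

    CREATE TABLE raw_data (
        key text,
        column1 int,
        value text,
        PRIMARY KEY (key, column1)
    );

so all count rows for a given path land in a single partition.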
I then delete them like so:

    session.execute("""delete from raw_data where key = %s""", (path,))

and then try to select from that key like so:

    rows = session.execute("""select * from raw_data
                              where key = %s limit 5""", (path,))

This works fine for small values of count, say 1,000 or so. If I do this with 500,000 rows, I end up in a state where any attempt to query the key results in a timeout error from Cassandra. The only way I have found to clear it up is to completely stop and restart the Cassandra server.

I'm running version 2.0.7, and I don't think this was happening under 2.0.1. Is this a known bug? Any ideas what might be causing this?

Thanks in advance,
-David Mitchell

--
David Mitchell, Network Engineer
Energy Sciences Network (ESnet)
Lawrence Berkeley National Laboratory (LBL)
Email: mitch...@es.net
Phone: (510) 936-0720