The difference between sequential and random in the test code is just how the key of each record is generated.
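As a minimal sketch of that difference (the class and method names below are hypothetical stand-ins for the poster's `dg` object, not the actual test harness), the only change between the two modes is where the key comes from:

```java
import java.nio.charset.StandardCharsets;
import java.util.Random;

// Hypothetical key generator illustrating the two write modes.
public class KeyGen {
    private final Random rnd = new Random();
    private final int totalRows;

    public KeyGen(int totalRows) {
        this.totalRows = totalRows;
    }

    // Random write: pick an arbitrary row id for each insert.
    public byte[] getRandomRow() {
        int id = rnd.nextInt(totalRows);
        return String.valueOf(id).getBytes(StandardCharsets.UTF_8);
    }

    // Sequential write: the loop index i itself is the key.
    public byte[] getSequentialRow(int i) {
        return String.valueOf(i).getBytes(StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        KeyGen dg = new KeyGen(4_000_000);
        System.out.println(new String(dg.getSequentialRow(42), StandardCharsets.UTF_8));
    }
}
```

With sequential keys each insert targets a predictable, monotonically increasing row id; with random keys the 4,000,000 inserts are scattered across the keyspace.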
The test code is:

    long totalSWriteTime = 0;
    for (int i = 0; i < totalRows; i++) {
        byte[] key = dg.getRandomRow(); // for sequential write, we use i as the key
        byte[] data = dg.generateValue();
        long start = System.currentTimeMillis();
        client.insert("Keyspace1", new String(key),
                new ColumnPath("Standard1", null, "data".getBytes("UTF-8")),
                data, timestamp, ConsistencyLevel.ONE);
        totalSWriteTime += (System.currentTimeMillis() - start);
        if (i % 10000 == 0) {
            System.out.println("Has written " + i);
        }
    }

Is there something wrong?

2010-03-12
Bingbing Liu

From: Jonathan Ellis
Sent: 2010-03-12 13:40:40
To: cassandra-dev
Subject: Re: we did some tests on Cassandra, but the result puzzled us

Why reads are slower than writes:
http://wiki.apache.org/cassandra/FAQ#reads_slower_writes

No idea on seq vs random. I would not be surprised if there is a bug
in your test code.

On Fri, Mar 12, 2010 at 12:36 AM, Bingbing Liu <rucb...@gmail.com> wrote:
> We did some tests on Cassandra. The benchmark is from Section 7 of the
> BigTable paper "Bigtable: A Distributed Storage System for Structured Data";
> the benchmark tasks include: random write, random read, sequential write,
> and sequential read. The test results puzzled us. We used a cluster of 5
> nodes (each node has a 4-core CPU and 4 GB of memory). The test data is a
> table with 4,000,000 records, each of which is 1000 bytes. The test results
> are as follows:
> Sequential write: 875124 ms
> Sequential read: 1972588 ms
> Random read: 43331738 ms
> Random write: 20193484 ms
> We wondered why sequential writes are so much faster than sequential reads,
> and why sequential writes are so much faster than random writes. We thought
> that reads should be faster than writes, but the results are just the
> opposite. Would you please give us some explanations? Thanks a lot!
>
> 2010-03-12
>
> Bingbing Liu
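For context, the reported totals can be converted into rough throughput figures, assuming each run touched all 4,000,000 records (an assumption, since the test setup does not state it explicitly):

```java
// Convert the reported total times from the thread above into ops/sec.
public class Throughput {
    public static void main(String[] args) {
        int rows = 4_000_000;
        String[] names = {"Sequential write", "Sequential read", "Random read", "Random write"};
        long[] totalMs = {875_124L, 1_972_588L, 43_331_738L, 20_193_484L};
        for (int i = 0; i < names.length; i++) {
            // rows / (totalMs / 1000) = operations per second
            double opsPerSec = rows * 1000.0 / totalMs[i];
            System.out.printf("%s: %.0f ops/sec%n", names[i], opsPerSec);
        }
    }
}
```

This works out to roughly 4571 sequential writes/sec versus about 2028 sequential reads/sec and only about 92 random reads/sec, which matches the FAQ's point that Cassandra writes (append-only commit log plus memtable) are cheaper than reads (which may consult multiple SSTables).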