On Jun 11, 2009, at 9:05 PM, Jim Wilcoxson wrote:
> you will have to place each on its own physical disk drive to
> increase transaction rates.
Arguably, such micromanagement of which data block sits on which disk
spindle would be better left to the underlying volume manager or the like.
A bit
On 11 Jun 2009, at 16:19, Jim Wilcoxson wrote:
> SSDs usually have poor write performance, because to do a write, they
> have to use read, erase, write sequences across large blocks like 64K.
> Most of the SSD benchmarks that quote good write performance are for
> sequential write performance.
On 11 Jun 2009, at 20:05, Jim Wilcoxson wrote:
> If you partition the database into multiple databases, you will have
> to place each on its own physical disk drive to increase transaction
> rates. If your base transaction rate with one drive is T, with N
> drives it should be N*T; 4 drives
Yes, good point.
If you partition the database into multiple databases, you will have
to place each on its own physical disk drive to increase transaction
rates. If your base transaction rate with one drive is T, with N
drives it should be N*T; 4 drives gives you 4x the transaction rate,
etc.
On Jun 11, 2009, at 4:53 PM, Sam Carleton wrote:
> I am a latecomer to this discussion, so this might have already
> been proposed...
Additionally, if this was not mentioned already, you can partition
your database across multiple physical files through the magic of
'attach database' or
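A minimal sketch of that approach (the file paths and table are hypothetical):

    /* Sketch: spreading data across drives with ATTACH DATABASE.
       Paths and table names are made up. */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void) {
        sqlite3 *db;
        char *err = 0;

        if (sqlite3_open("/disk1/main.db", &db) != SQLITE_OK) return 1;

        /* Attach a second database file living on another spindle. */
        if (sqlite3_exec(db, "ATTACH DATABASE '/disk2/part2.db' AS part2",
                         0, 0, &err) != SQLITE_OK) {
            fprintf(stderr, "attach failed: %s\n", err);
            sqlite3_free(err);
            sqlite3_close(db);
            return 1;
        }

        /* Objects in the attached file are addressed with a schema prefix. */
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS part2.log"
                         "(id INTEGER PRIMARY KEY, msg TEXT)", 0, 0, 0);
        sqlite3_exec(db, "INSERT INTO part2.log(msg) VALUES('on disk 2')",
                     0, 0, 0);

        sqlite3_close(db);
        return 0;
    }

Note that a transaction spanning both files is still atomic, but SQLite
then syncs each file, so the N-drives-gives-N*T scaling only holds when
each transaction stays within a single file.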
On Thu, Jun 11, 2009 at 1:46 AM, Florian Weimer wrote:
> That's 500 commits per second, right? If you need durability, you can
> get these numbers only with special hardware.
>
Not really, you don't need special hardware (if you don't use SQLite).
The use case that Robel
Jim,
I am about to have my first one here in a few hours. Can you email me
the program directly?
Sam
Jim Wilcoxson wrote:
Hey, if anybody has an SSD laying around, it would be interesting to
run that commit test program I posted a while back to see what kind of
transaction rates are
Thank you all for the wonderful advice. I guess the only thing left now is
to dive into writing the app and stress-test it to find out :)
-----Original Message-----
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Jim Wilcoxson
Sent: Thursday, June 11,
Hey, if anybody has an SSD laying around, it would be interesting to
run that commit test program I posted a while back to see what kind of
transaction rates are possible. That said, verifying whether the SSD
is actually doing the commits or just saying it is would be very
difficult. With
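Jim's test program isn't reproduced in this digest; a minimal sketch of
the same idea (time N one-row autocommit inserts with synchronous=FULL
and report the rate) might look like this, with a made-up table and N:

    /* Sketch of a commit-rate test: each autocommit INSERT is one
       synced commit.  Not the original program from the thread. */
    #include <stdio.h>
    #include <time.h>
    #include <sqlite3.h>

    #define N 1000

    int main(void) {
        sqlite3 *db;
        sqlite3_stmt *ins;
        time_t start, elapsed;
        int i;

        sqlite3_open("committest.db", &db);
        sqlite3_exec(db, "PRAGMA synchronous=FULL", 0, 0, 0); /* real syncs */
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x INTEGER)", 0, 0, 0);
        sqlite3_prepare_v2(db, "INSERT INTO t VALUES(?)", -1, &ins, 0);

        start = time(0);
        for (i = 0; i < N; i++) {       /* one commit per insert */
            sqlite3_bind_int(ins, 1, i);
            sqlite3_step(ins);
            sqlite3_reset(ins);
        }
        elapsed = time(0) - start;
        if (elapsed == 0) elapsed = 1;  /* coarse clock guard */
        printf("%d commits in %ld sec = %ld commits/sec\n",
               N, (long)elapsed, (long)(N / elapsed));

        sqlite3_finalize(ins);
        sqlite3_close(db);
        return 0;
    }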
SSDs usually have poor write performance, because to do a write, they
have to use read, erase, write sequences across large blocks like 64K.
Most of the SSD benchmarks that quote good write performance are for
sequential write performance. If you skip all over the disk doing
small writes, like
Jim Wilcoxson wrote:
Here's what I'd try:
1. Write a small server that accepts connections and writes to the
SQLite database using prepared statements. If you require 500
transactions per second, it's simply not possible with rotating media.
I am a latecomer to this discussion, so this
I should have mentioned, if it were me, I'd write the mini server
first as a single process in a loop, and make it as fast as possible.
If you try to do db updates with multiple processes, you'll have
concurrency issues. It might make sense to use multiple processes if
you also have lots of
Here's what I'd try:
1. Write a small server that accepts connections and writes to the
SQLite database using prepared statements. If you require 500
transactions per second, it's simply not possible with rotating media:
a 7200 RPM drive makes only 120 revolutions per second, so roughly 120
synced commits per second is the ceiling. So the solution is to either
turn off synchronous, which is dangerous,
I bet "synchronous"ness will not be your only bottleneck. Opening
connection, preparing statement and closing connection will take in
total much longer than executing statement itself. So that doing all
these operations 500 times per second will not be possible I think. If
you keep pool of
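To put rough numbers behind that overhead claim, a minimal sketch that
times open/prepare/close on every operation against reusing one
connection and statement (file name and count are made up; timing is
coarse wall-clock seconds):

    /* Sketch: per-operation open/prepare/close vs. reuse. */
    #include <stdio.h>
    #include <time.h>
    #include <sqlite3.h>

    #define N 2000

    int main(void) {
        sqlite3 *db;
        sqlite3_stmt *st;
        time_t t0;
        int i;

        sqlite3_open("pooltest.db", &db);
        sqlite3_exec(db, "CREATE TABLE IF NOT EXISTS t(x)", 0, 0, 0);
        sqlite3_close(db);

        /* Worst case: open, prepare, step, close per operation. */
        t0 = time(0);
        for (i = 0; i < N; i++) {
            sqlite3_open("pooltest.db", &db);
            sqlite3_prepare_v2(db, "INSERT INTO t VALUES(1)", -1, &st, 0);
            sqlite3_step(st);
            sqlite3_finalize(st);
            sqlite3_close(db);
        }
        printf("open per op: %ld sec\n", (long)(time(0) - t0));

        /* Reuse one connection and one prepared statement. */
        sqlite3_open("pooltest.db", &db);
        sqlite3_prepare_v2(db, "INSERT INTO t VALUES(1)", -1, &st, 0);
        t0 = time(0);
        for (i = 0; i < N; i++) {
            sqlite3_step(st);
            sqlite3_reset(st);
        }
        printf("reuse:       %ld sec\n", (long)(time(0) - t0));

        sqlite3_finalize(st);
        sqlite3_close(db);
        return 0;
    }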
Thanks all for your input, very helpful. And yes, there will be 500 separate
connections to the db per second, each updating 1 record. I've read about
setting PRAGMA synchronous=OFF to cause SQLite to not wait on data to reach
the disk surface, which will make write operations appear to be much
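For reference, that pragma in use, as a minimal sketch (the helper name
open_fast is hypothetical). With synchronous=OFF a power failure or OS
crash can corrupt the database, so it trades safety for speed:

    /* Sketch: open a database with synchronous writes disabled. */
    #include <sqlite3.h>

    int open_fast(const char *path, sqlite3 **db) {
        if (sqlite3_open(path, db) != SQLITE_OK) return -1;
        /* OFF: SQLite hands writes to the OS and never syncs, so a
           commit "completes" before data reaches the disk surface. */
        return sqlite3_exec(*db, "PRAGMA synchronous=OFF", 0, 0, 0);
    }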
SQLite is not the DB for your application. You need a server like
PostgreSQL or Oracle.
Robel Girma wrote:
> Hello,
>
> I am in need of a database to hold a couple of tables with max 10,000 rows
> each that I will update frequently and query frequently.
>
> Example, 5000 users connect to our
On 11 Jun 2009, at 8:23am, Roger Binns wrote:
> It depends very strongly on how the app is structured and in
> particular
> if there are a few persistent connections to the SQLite database, or
> if
> each request involves a separate connection to the database. If you
> have lots of
Robel Girma wrote:
> but rather trying to
> find out if my app will work with SQLite.
SQLite will definitely work, and at the very least you will find it useful
during (rapid) development and demos. Quite simply, SQLite will get you
results far quicker
* Robel Girma:
> Example, 5000 users connect to our server every 10 seconds and each
> time they connect, I need to update a table with their IP and
> Last_connect_time.
That's 500 commits per second, right? If you need durability, you can
get these numbers only with special hardware.
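One mitigation that the arithmetic suggests, though it is not spelled
out in this thread: if each individual update need not be durable on
its own, batching a second's worth of updates into one transaction
turns 500 synced commits per second into one. A sketch with
hypothetical names:

    /* Sketch: apply a batch of updates inside one transaction so the
       disk sees one synced commit for the whole batch.  Up to a
       batch's worth of updates can be lost on a crash. */
    #include <sqlite3.h>

    void flush_batch(sqlite3 *db, sqlite3_stmt *upd,
                     const int *ids, const char **ips, int n) {
        int i;
        sqlite3_exec(db, "BEGIN", 0, 0, 0);
        for (i = 0; i < n; i++) {
            sqlite3_bind_text(upd, 1, ips[i], -1, SQLITE_TRANSIENT);
            sqlite3_bind_int(upd, 2, ids[i]);
            sqlite3_step(upd);
            sqlite3_reset(upd);
        }
        sqlite3_exec(db, "COMMIT", 0, 0, 0);  /* one sync for n updates */
    }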
Thanks for the reply, Roger; I have read the section you mentioned, very
informative. I'm not trying to compare the two products, but rather trying to
find out if my app will work with SQLite. I don't necessarily require a
server. My app can work as a web app or web service where clients hit this
Robel Girma wrote:
> I'm trying to choose the most efficient db for this application and my main
> criterion is response time. Will SQLite do this more efficiently than SQL
> Server? I'm planning to allocate up to 1GB of memory.
SQLite doesn't operate
Hello,
I am in need of a database to hold a couple of tables with max 10,000 rows
each that I will update frequently and query frequently.
Example, 5000 users connect to our server every 10 seconds and each time
they connect, I need to update a table with their IP and Last_connect_time.
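For concreteness, the table described might be created like this (a
sketch; column names are guesses):

    /* Sketch of the table described; column names are guesses. */
    #include <sqlite3.h>

    int create_users_table(sqlite3 *db) {
        return sqlite3_exec(db,
            "CREATE TABLE IF NOT EXISTS users("
            "  user_id           INTEGER PRIMARY KEY,"
            "  ip                TEXT,"
            "  last_connect_time INTEGER)",  /* unix epoch seconds */
            0, 0, 0);
    }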