At 11:20 22/06/2007, you wrote:
Hi all.
Thanks for everyone's help; the problem is now solved. The memory drive worked
like a bomb. Basically, the problem on that server was that the insanely high
I/O prevented the OS from caching the file, which slowed down the performance.
After installing a mem drive ( using mfs ) and reducing
That sounds like an awesome trick. I will definitely do as you suggest and
decrease cache_size, as even at the moment it does not really seem to help
much.
As for the memory being volatile and such: that is not really a big
problem for me, as a complete loss of the lookup table is not a
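For what it's worth, cache_size is a per-connection pragma, so decreasing it once the file lives on the mem drive is a one-line change. A minimal sketch, assuming the Python sqlite3 bindings and an in-memory database as a stand-in for the real file:

```python
import sqlite3

# Stand-in for the database file sitting on the mem drive.
conn = sqlite3.connect(":memory:")

# Shrink the page cache: with the file already in RAM, a large cache
# mostly duplicates data and wastes memory. 20 pages is a token value.
conn.execute("PRAGMA cache_size = 20")

# Read the setting back to confirm it took effect.
size = conn.execute("PRAGMA cache_size").fetchone()[0]
print(size)  # → 20
```

The same `PRAGMA cache_size = 20` can of course be issued from any language binding or the sqlite3 shell.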
Joe Wilson wrote:
A non-volatile RAM drive is the way to go if you got the bucks.
16 Processor machine
~40Gb ram
EMC storage
suggests he does. ;)
I worked on a project where the end client had Sun kit of this spec, and
they claimed the systems cost 7 figures GBP back in 2005.
Martin
I assumed he meant a volatile system RAM "drive", as opposed to a non-volatile
external RAM drive by his wording. But no point speculating what he meant.
A non-volatile RAM drive is the way to go if you got the bucks.
--- Ken <[EMAIL PROTECTED]> wrote:
> I think the performance of the ram drive
> mmm, I was thinking that I'd decrease the cache_size to like 20 when using the
> ram drive, since I don't need caching anymore then.
>
> I have inserted more timing code and I am now convinced I have an IO
> problem. When I coax an OS to fully cache my (smaller 40 rows) db file (
> which takes
I understand where you are heading, by putting the entire db on a ram drive.
I think the performance of the ram drive (I'm guessing SCSI based) will not be
as good as physical system RAM, but certainly better than the I/O speed of disk.
Let us know how it turns out.
pompomJuice <[EMAIL
mmm, I was thinking that I'd decrease the cache_size to like 20 when using the
ram drive, since I don't need caching anymore then.
I have inserted more timing code and I am now convinced I have an IO
problem. When I coax an OS to fully cache my (smaller 40 rows) db file (
which takes like 2-3
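The kind of timing instrumentation described above could be sketched roughly like this (the lookup table schema and key format are invented for illustration; assumes the Python sqlite3 bindings):

```python
import sqlite3
import time

# Toy stand-in for the lookup table; the real one is far larger.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lookup (key TEXT PRIMARY KEY, value TEXT)")
conn.executemany("INSERT INTO lookup VALUES (?, ?)",
                 [(f"k{i}", f"v{i}") for i in range(10000)])

# Time a burst of point lookups; on the real server the same loop
# against a disk-backed file would show the I/O stall discussed here.
start = time.perf_counter()
for i in range(10000):
    conn.execute("SELECT value FROM lookup WHERE key = ?",
                 (f"k{i}",)).fetchone()
elapsed = time.perf_counter() - start
print(f"{10000 / elapsed:.0f} lookups/sec")
```

Comparing this rate before and after the OS has the file cached is one way to confirm the bottleneck is I/O rather than SQLite itself.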
The RAM drive is unlikely to work. It will still have the same cache
invalidation.
You need to get things logically working first. RAM drives are great for
improving performance where seek times and rotational latency dominate.
pompomJuice <[EMAIL PROTECTED]> wrote:
AArrgh.
That is the one thing that I won't be able to do. It would require a complete
system redesign. I can adapt my program easily, but getting it to work in
the greater scheme of things would be a nightmare.
My current efforts are being focused on making a ram drive and putting
the file in
1. Review your Oracle 10g db and fix the "HUGE I/O" issues.
2. Why not do the lookups using Oracle? Allocate the extra 5 gig to the Oracle
buffer cache.
3. If you want good lookup performance, try to use the array-level interface
so that you don't need to make multiple trips (context switches).
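Point 3 is Oracle advice, but the same idea applies on the SQLite side: resolve a whole batch of keys in one statement instead of one query per key. A hedged sketch, with the table and keys invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE lookup (key TEXT PRIMARY KEY, value TEXT)")
conn.executemany("INSERT INTO lookup VALUES (?, ?)",
                 [("a", "1"), ("b", "2"), ("c", "3")])

# One IN-list statement replaces three separate per-key queries,
# cutting the number of round trips through the library.
keys = ["a", "b", "c"]
placeholders = ",".join("?" * len(keys))
rows = conn.execute(
    f"SELECT key, value FROM lookup WHERE key IN ({placeholders})", keys
).fetchall()
print(dict(rows))  # → {'a': '1', 'b': '2', 'c': '3'}
```

With an in-process library like SQLite the saving is smaller than with a client/server database, but batching still avoids repeated statement overhead.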
Can you consolidate your multiple binaries into a single binary?
Then use threading and SQLite's shared caching to perform the lookups and
updates.
That way the cache wouldn't get invalidated.
Someone else here correct me if this is a bad idea!
pompomJuice <[EMAIL PROTECTED]> wrote:
Does every single process (however insignificant) that reads or writes
to that sqlite database file run on the same 16 processor machine?
> 16 Processor machine
> ~40Gb ram
> EMC storage
> Running a huge Oracle 10G database
> Running a 3rd party application that generates HUGE IO.
> Part of
On 6/19/07, pompomJuice <[EMAIL PROTECTED]> wrote:
Running a huge Oracle 10G database
Running a 3rd party application that generates HUGE IO.
Part of this 3rd party application is my application that does lookups.
1.) Data comes in in the form of files.
2.) 3rd party application decodes and
That's exactly why I thought this SQLite approach would work.
16 Processor machine
~40Gb ram
EMC storage
Running a huge Oracle 10G database
Running a 3rd party application that generates HUGE IO.
Part of this 3rd party application is my application that does lookups.
1.) Data comes in in the form of
pompomJuice uttered:
I suspected something like this, as it makes sense.
I have multiple binaries/different connections ( and I cannot make them
share a connection ) using this one lookup table and depending on which
connection checks first, it will update the table.
What is your working
> My question is then, if any one connection makes any change to the database
> ( not necessarily to the huge lookup table ) will all the other connections
> invalidate their entire cache?
Yes. The entire cache, regardless of which table was modified, etc.
Dan.
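As an aside, modern SQLite exposes this cross-connection change detection directly via PRAGMA data_version (added in 2015, long after this thread); a sketch showing one connection observing another connection's commit:

```python
import os
import sqlite3
import tempfile

# data_version changes when *another* connection modifies the database,
# the same event that forces a connection to discard its page cache.
path = os.path.join(tempfile.mkdtemp(), "lookup.db")
a = sqlite3.connect(path)
b = sqlite3.connect(path)
a.execute("CREATE TABLE t (x)")
a.commit()

before = b.execute("PRAGMA data_version").fetchone()[0]
a.execute("INSERT INTO t VALUES (1)")
a.commit()
after = b.execute("PRAGMA data_version").fetchone()[0]
print(after != before)  # → True
```

This would not have been available to the 2007 posters, but it makes the invalidation behaviour Dan describes directly observable.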
I suspected something like this, as it makes sense.
I have multiple binaries/different connections ( and I cannot make them
share a connection ) using this one lookup table and depending on which
connection checks first, it will update the table.
My question is then, if any one connection
Hello there.
I need some insight into how SQLite's caching works. I have a database that
is quite large (5Gb) sitting on a production server whose I/O is severely
taxed. This causes my SQLite db to perform very poorly. Most of the time my
application just sits there and uses about 10% of a CPU