There was a significant increase in throughput after converting the
sqlite3_exec inserts into prepared statements:
I suspect every previous insert was being compiled, versus now only 1 of
every 10,000 inserts.
I'll bump this up to 100K in the next revision.
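A minimal sketch of that batched, prepared-statement pattern. The production code is C++ (sqlite3_prepare_v2 / sqlite3_bind / sqlite3_reset); Python's built-in sqlite3 module is used here only to keep the sketch short, and the schema and row contents are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TICKDATA (TIMESTAMP INTEGER, SYMBOL TEXT, ASK REAL, BID REAL)")

rows = [(t, "XYZ", 100.0 + t, 99.5 + t) for t in range(10_000)]

# One prepared INSERT reused for the whole batch, wrapped in a single
# transaction: the statement is compiled once, not once per row.
with conn:  # emits BEGIN ... COMMIT around the batch
    conn.executemany(
        "INSERT INTO TICKDATA (TIMESTAMP, SYMBOL, ASK, BID) VALUES (?, ?, ?, ?)",
        rows,
    )

print(conn.execute("SELECT COUNT(*) FROM TICKDATA").fetchone()[0])  # 10000
```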
I've changed many variables, so it's hard to baseline the di
ESXi 5
Disk is on a local host, running RAID 6 on an LSI RAID card
Machine has HT turned off, so it only schedules full cores
System takes up less than 500 MB of RAM and uses less than 1% of CPU
Intel Xeon X5482 3.2 GHz
64 GB RAM
6 TB RAID 6
10GB NIC
VMXNET NIC #2
Windows Server 2008 R2
Dedicated Developm
Running a single extractor on a VM runs at near bare-metal speeds. I can
run concurrent extractors up to the number of cores allocated to the VM and
sustain decent throughput. Once I go above that count the VMs get
crippled: pegged at 100% CPU, while disk, RAM, and LAN loads are minuscule.
For a test we ran 50
No indexes... create the db and table structure - process tick tapes
inserting records - then create the index. Windows 2008 R2 servers... We
can throw hardware at this problem.
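That load order (bulk insert with no indexes, build the index once at the end) can be sketched as follows; the table and index names are invented, and Python's sqlite3 stands in for the C++ calls:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE TICKDATA (TIMESTAMP INTEGER, SYMBOL TEXT, TRADEPRICE REAL)")

# 1) Bulk-load with no indexes in place, so inserts only append to the table.
with conn:
    conn.executemany(
        "INSERT INTO TICKDATA VALUES (?, ?, ?)",
        ((t, "SYM%02d" % (t % 50), 1.0) for t in range(100_000)),
    )

# 2) Build the index once after the load, instead of maintaining it per insert.
conn.execute("CREATE INDEX IDX_TICK_SYM_TS ON TICKDATA (SYMBOL, TIMESTAMP)")
```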
We need to be able to run thousands of extractors concurrently processing
different tick tapes and symbol sets. aka service bur
Yes: Inserts wrapped in transactions 10,000 at a shot.
Tried PRAGMA synchronous=OFF
And 3 different threadsafe compilations for sqlite:
# define SQLITE_THREADSAFE 0 /* IMP: R-07272-22309 */
# define SQLITE_THREADSAFE 1 /* IMP: R-07272-22309 */
# define SQLITE_THREADSAFE 2 /* IMP: R-07272-22309 */
We have a market data extractor running tick tapes and saving data to a SQLite db.
All coded in C++; it previously ran fine and fast on bare metal.
It processes tick tapes from NAS and saves to local SQLite db's on the C drive.
We moved the program onto VMware VMs... same NAS tick tapes. Writing
db's to a shar
Keith,
Trying to work through your suggestions:
I'm getting "Misuse of aggregate max()"
>CREATE VIEW TICKMAX
>AS
> SELECT ASK, BID, TRADEPRICE, TIMESTAMP, SYMBOL
>FROM TICKDATA
> WHERE TIMESTAMP = MAX(TIMESTAMP)
>GROUP BY SYMBOL;
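For what it's worth, the error comes from putting the aggregate MAX() in a WHERE clause, where aggregates aren't allowed. In SQLite (3.7.11+), moving MAX(TIMESTAMP) into the SELECT list with GROUP BY guarantees the other bare columns come from the max-timestamp row. A sketch with invented sample data:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TICKDATA (ASK REAL, BID REAL, TRADEPRICE REAL, TIMESTAMP INTEGER, SYMBOL TEXT);
INSERT INTO TICKDATA VALUES (10.1, 10.0, 10.05, 1, 'A');
INSERT INTO TICKDATA VALUES (10.3, 10.2, 10.25, 2, 'A');
INSERT INTO TICKDATA VALUES (20.1, 20.0, 20.05, 1, 'B');

-- MAX() in the SELECT list (not WHERE); SQLite then takes the bare
-- columns from the row holding the per-group maximum TIMESTAMP.
CREATE VIEW TICKMAX AS
SELECT ASK, BID, TRADEPRICE, MAX(TIMESTAMP) AS TIMESTAMP, SYMBOL
FROM TICKDATA
GROUP BY SYMBOL;
""")

for row in conn.execute("SELECT SYMBOL, TIMESTAMP, ASK FROM TICKMAX ORDER BY SYMBOL"):
    print(row)
# ('A', 2, 10.3)
# ('B', 1, 20.1)
```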
sqlite-users
Not sure how to combine the subqueries.
Our TickData table looks close to: TimeStamp, Symbol, Ask, Bid, Last
... IQ_A, IQ_B, IQ_T, IQ_X
We need to update the IQ fields with calculations always made between
the current record and the previous record for the same symbol, ordered
by timestamp. I don't know how to
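One way to express "previous record for the same symbol" without LAG() is a correlated subquery ordered by timestamp. A sketch against a cut-down, invented version of the schema (only ASK and IQ_A):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE TICKDATA (TIMESTAMP INTEGER, SYMBOL TEXT, ASK REAL, IQ_A REAL);
INSERT INTO TICKDATA VALUES (1, 'A', 10.0, NULL);
INSERT INTO TICKDATA VALUES (2, 'A', 10.5, NULL);
INSERT INTO TICKDATA VALUES (3, 'A', 10.2, NULL);
""")

# For each row, the "previous" row is the latest earlier TIMESTAMP for
# the same SYMBOL; the first row per symbol has no predecessor, so its
# IQ_A stays NULL.
with conn:
    conn.execute("""
        UPDATE TICKDATA SET IQ_A = ROUND(ASK - (
            SELECT t2.ASK FROM TICKDATA t2
            WHERE t2.SYMBOL = TICKDATA.SYMBOL
              AND t2.TIMESTAMP < TICKDATA.TIMESTAMP
            ORDER BY t2.TIMESTAMP DESC LIMIT 1
        ), 2)
    """)

print(conn.execute("SELECT TIMESTAMP, IQ_A FROM TICKDATA ORDER BY TIMESTAMP").fetchall())
# [(1, None), (2, 0.5), (3, -0.3)]
```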
Thanks for your suggestion Igor Tandetnik:
Scope creep expanded the original query to the actual trading
instruments, and the refactored code has evolved to:
update TICKDATA set IQ_A = ROUND(ASK - (
select t2.ASK from TICKDATA t2
where t2.SYMBOL = TICKDATA.SYMBOL and t2.TIMESTAMP <
TICKDATA
How can I construct an update query to calculate and set a record field
"latency" with the difference between "timestamps" by "deviceid"?
It appears SQLite doesn't support LAG and LEAD.
i.e. I have a table with 1,000,000+ records collecting real-time
stats from many devices with many columns but the
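LAG()/LEAD() only arrived with SQLite's window functions in 3.25 (2018); on older builds the per-device "previous timestamp" can be fetched with a correlated subquery, same as the tick-data case above. A sketch against an invented STATS table (the real table name and columns aren't shown in the thread):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE STATS (DEVICEID TEXT, TIMESTAMP INTEGER, LATENCY INTEGER);
INSERT INTO STATS VALUES ('dev1', 100, NULL);
INSERT INTO STATS VALUES ('dev1', 130, NULL);
INSERT INTO STATS VALUES ('dev2', 105, NULL);
INSERT INTO STATS VALUES ('dev2', 160, NULL);
""")

# latency = this row's timestamp minus the most recent earlier timestamp
# recorded for the same device (NULL for a device's first sample).
with conn:
    conn.execute("""
        UPDATE STATS SET LATENCY = TIMESTAMP - (
            SELECT MAX(s2.TIMESTAMP) FROM STATS s2
            WHERE s2.DEVICEID = STATS.DEVICEID
              AND s2.TIMESTAMP < STATS.TIMESTAMP
        )
    """)

print(conn.execute(
    "SELECT DEVICEID, TIMESTAMP, LATENCY FROM STATS ORDER BY DEVICEID, TIMESTAMP"
).fetchall())
# [('dev1', 100, None), ('dev1', 130, 30), ('dev2', 105, None), ('dev2', 160, 55)]
```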