Re: [sqlite] Transaction speed too slow?

2011-05-09 Thread Nick
Thanks for the suggestion and link. It seems that noatime has already been set 
for both partition types on my test setup. I turned atime back on and re-ran 
the tests. 

The results are within the error margins and therefore arguably the same as
before.
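
(For reference, the atime flags on a given mount can be read from /proc/mounts;
the device and output below are illustrative, not from this machine:)

$ grep sda /proc/mounts
/dev/sda1 / ext3 rw,noatime,errors=remount-ro 0 0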

I'd be interested in the results if others would run the script below.

Thanks in advance
Nick


On 9 May 2011, at 13:31, Black, Michael (IS) wrote:

> You do realize the number they quote is a MAX, not necessarily what you'll
> get.
>
> With 16 transactions per second you're writing one transaction every 63ms.  My
> understanding is that the hard drive in the netbook has a 15ms access time, so
> you're seeing 4 accesses per transaction with the way you're doing this test
> (write caching off, full sync).
>
> When you write a record, the filesystem also has to update last-access times,
> for example.
>
> Might be interesting to turn off last-access updates and see if it changes
> your results much.
>
> http://www.cyberciti.biz/faq/linux-noatime-ext3-ext4-fstab-configuration/
> 
>
> Michael D. Black
> Senior Scientist
> NG Information Systems
> Advanced Analytics Directorate
> 
> From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
> behalf of Nick [maill...@css-uk.net]
> Sent: Sunday, May 08, 2011 3:52 PM
> To: sqlite-users@sqlite.org
> Subject: EXT :[sqlite] Transaction speed too slow?
> 
> 
> The SQLite FAQ (http://www.sqlite.org/faq.html#q19) says "A transaction
> normally requires two complete rotations of the disk platter, which on a
> 7200RPM disk drive limits you to about 60 transactions per second."
> 
> Using Linux/Ubuntu 10.04 on an otherwise idle Atom-powered nettop with a
> 5400RPM disk drive I was expecting circa 45 transactions per second. However,
> using the basic Python script below I seem to be getting much slower results,
> namely:
> 
> ext3 partition: 16 transactions per second
> jfs partition: 9 transactions per second
> :memory: 15798 transactions per second
> 
> Does anyone have an explanation for this?
> 
> The script used is below. I'd be grateful if people could confirm whether it
> is just my hardware or a common result.
> 
> FULL RESULTS:
> 
> nick@Haribo:~$ sudo hdparm -W 0 /dev/sda
> 
> /dev/sda:
> setting drive write-caching to 0 (off)
> write-caching =  0 (off)
> nick@Haribo:~$ python write-transactions-1.py
> Run Number: 1, Location: :memory:
> 0:00:00.000108
> 0:00:00.000058
> 0:00:00.000057
> 0:00:00.000057
> 0:00:00.000056
> 0:00:00.000056
> 0:00:00.000057
> 0:00:00.000057
> 0:00:00.000057
> 0:00:00.000056
> Time Avg: 0.000062
> Trans/sec Avg: 16155
> Run Number: 2, Location: write-transactions-1.db
> 0:00:00.099678
> 0:00:00.121630
> 0:00:00.110672
> 0:00:00.099599
> 0:00:00.110782
> 0:00:00.099542
> 0:00:00.121776
> 0:00:00.099599
> 0:00:00.121794
> 0:00:00.099624
> Time Avg: 0.108470
> Trans/sec Avg: 9
> 
> ---
> 
> nick@Haribo:~$ sudo hdparm -W 1 /dev/sda
> 
> /dev/sda:
> setting drive write-caching to 1 (on)
> write-caching =  1 (on)
> nick@Haribo:~$ python write-transactions-1.py
> Run Number: 1, Location: :memory:
> 0:00:00.000113
> 0:00:00.000057
> 0:00:00.000056
> 0:00:00.000056
> 0:00:00.000056
> 0:00:00.000057
> 0:00:00.000057
> 0:00:00.000056
> 0:00:00.000056
> 0:00:00.000056
> Time Avg: 0.000062
> Trans/sec Avg: 16129
> Run Number: 2, Location: write-transactions-1.db
> 0:00:00.001438
> 0:00:00.000898
> 0:00:00.000989
> 0:00:00.000984
> 0:00:00.000982
> 0:00:00.001945
> 0:00:00.001059
> 0:00:00.001169
> 0:00:00.000914
> 0:00:00.001063
> Time Avg: 0.001144
> Trans/sec Avg: 874
> 
> 
> --
> 
> SCRIPT
> 
> # Test benchmark for transactions per second using the built-in SQLite module
> import sqlite3
> import datetime
> import operator
>
> contype = [":memory:", "write-transactions-1.db"]
>
> for runidx, runtype in enumerate(contype):
>
>     # Heading
>     print "Run Number: %d, Location: %s" % (runidx + 1, runtype)
>
>     con = sqlite3.connect(runtype)
>     con.isolation_level = None
>     con.execute("PRAGMA synchronous = FULL")
>     times = []
>
>     # Create the table
>     con.execute("drop table if exists person")
>     con.execute("create table person(firstname, lastname)")
>
>     # Measure 10 points
>     for run in range(10):
>
>         # BEGIN transaction
>         con.execute("begin")
>
>         # Fill the table
>         con.execute("insert into person(firstname, lastname) values (?, ?)",
>                     ("alex", "bob"))
>
>         # Start the clock (only the commit is timed)
>         begin = datetime.datetime.now()
>
>         # COMMIT transaction
>         con.execute("commit")
>
>         # Stop the clock
>         end = datetime.datetime.now()
>
>         # Record the time
>         times.append(end - begin)
>
>     # Display the 10 measurements
>     for run in range(10):
>         print times[run]
>
>     # Print averages
>     td = reduce(operator.add, times)
>     totalresultsec = (float(td.microseconds)
>                       + (td.seconds + td.days * 24 * 3600) * 10**6) / 10**6
>     avgsec = totalresultsec / 10.0
>     print "Time Avg: %f" % avgsec
>     print "Trans/sec Avg: %0.0f" % (1.0 / avgsec)
>     con.close()

Re: [sqlite] iOS Mem Alloc Issue - sqlite3_extended_errcode

2011-05-09 Thread Richard Hipp
On Mon, May 9, 2011 at 3:19 PM, Rooney, Joe wrote:

> I've used SQLite for some time in iOS apps, but now I'm doing an iPad
> Corporate Directory app, and running into serious memory issues. Using
> Instruments, I'm finding fully 50% of my live memory is Malloc 52 KB,
> entirely allocated to libsqlite3.dylib, and specifically to
> sqlite3_extended_errcode.


sqlite3_extended_errcode does not do any memory allocations.  See for
yourself:

http://www.sqlite.org/src/artifact/f00cee5a27f5df?ln=1603-1611
http://www.sqlite.org/src/artifact/914e860d2149?ln=1068-1080

So there seems to be some kind of instrumentation problem.

> The memory is not shown as leaking, but it is not being released. Every
> time I do another employee search, over 1 MB of Malloc 52 is added to live
> allocations.
>
> Any ideas?
>
> Joseph
>



-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] iOS Mem Alloc Issue - sqlite3_extended_errcode

2011-05-09 Thread Rooney, Joe
I've used SQLite for some time in iOS apps, but now I'm doing an iPad Corporate 
Directory app, and running into serious memory issues. Using Instruments, I'm 
finding fully 50% of my live memory is Malloc 52 KB, entirely allocated to 
libsqlite3.dylib, and specifically to sqlite3_extended_errcode. The memory is 
not shown as leaking, but it is not being released. Every time I do another 
employee search, over 1 MB of Malloc 52 is added to live allocations.

Any ideas?

Joseph



Re: [sqlite] Compile error's on Visual Studio 2010

2011-05-09 Thread Jan Berger
There's a trick to this.

If you do it correctly you can compile sqlite3.c directly into any VC++
project - even a CLR one - with no need for a separate DLL.

Just include sqlite3.c in your project and change the file's properties:
set it to compile as C, switch off the precompiled header, and make sure
the file compiles as native if the project is CLR.

Took me 3 beers to remember how to do this :)

Jan 
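
(For reference, those per-file settings correspond to .vcxproj metadata along
the lines of the following in VS2010's MSBuild format -- a sketch of the idea,
not taken from an actual project:)

<ItemGroup>
  <ClCompile Include="sqlite3.c">
    <!-- compile this file as C, not C++ -->
    <CompileAs>CompileAsC</CompileAs>
    <!-- no precompiled header for this file -->
    <PrecompiledHeader>NotUsing</PrecompiledHeader>
    <!-- keep the file native even in a /clr project -->
    <CompileAsManaged>false</CompileAsManaged>
  </ClCompile>
</ItemGroup>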

-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Teg
Sent: 8 May 2011 23:48
To: General Discussion of SQLite Database
Subject: Re: [sqlite] Compile error's on Visual Studio 2010

Hello Jay,

I haven't found this to be the case. I have numerous C-only libraries
that I compile and I don't have to change the defaults to compile them.
There is an option to force C++ compiles but I don't believe it's on
by default.

JAK>   In most default setups, Visual Studio insists on compiling .c files
JAK>   with the C++ compiler, even though-- as this very example points
JAK>   out-- the languages are different enough that this behavior seems
JAK>   questionable.


-- 
Best regards,
 Teg    mailto:t...@djii.com



Re: [sqlite] Transaction triggers?

2011-05-09 Thread Nico Williams
I've researched firing of these triggers somewhat.  Upshot: firing
anything other than a transaction commit trigger is tricky.

The semantics of transaction triggers are interesting.  Presumably
such triggers should execute in the same transaction as the one they
are related to -- but this would mean that rollback triggers can't
have any side-effects in the database, which raises the question of
what their utility might be.  Upshot: I'm going to ignore rollback
triggers for now.  And what happens if a rollback occurs after or
while running commit triggers?  IMO: forget about rollback triggers,
and if a commit trigger's work is undone in a rollback, that's fine.

The semantics of database connect triggers should be simple, but they
aren't: what if the trigger can't execute because of SQLITE_BUSY?
Tricky.  Loop?  On the other hand, if these triggers only execute when
the first write transaction on that db handle begins, then that issue
goes away.  That seems much better.

The semantics of database disconnect triggers are also tricky, mostly
because of how sqlite3_close() works (it can fail with SQLITE_BUSY
part-way through). Scenario 1: a disconnect trigger fires but the db
connection stays alive, in which case we may have to re-fire the
connect trigger again later.  Scenario 2: part of the connection may
be torn down when the disconnect triggers fire (e.g., virtual tables,
which may mean having to re-initialize their state, which means that
vtabs could see spurious connects and disconnects).

Transaction start triggers are tricky because the act of firing them
causes a write transaction to be started, so we can't fire them in a
"BEGIN DEFERRED", but then we need to defer firing the trigger, which
means we need to track whether it has fired, and this needs to be done
such that we can decide whether to code the transaction start trigger
while parsing.

Transaction rollback triggers are even trickier because they can be
completely async.  Coding one of these such that it gets executed
when the transaction is rolled back and not otherwise is tricky (and
then one has to worry about the trigger itself doing a rollback).  I'd
be tempted to re-enter the VM with sqlite3_prepare/step/finalize in
order to fire this trigger, but I worry that that may break various
assumptions.  And then there's the question: should a rollback trigger
fire inside the transaction that's rolling back?  If so the only side
effects that such a trigger could have would be to invoke user-defined
functions with external side-effects.  If not...  Ah, maybe I could
code these trigger firings as alternatives to OP_Halt, terminated by
OP_Halt, and with their own OP_Transaction so they can execute in
their own transaction.

Upshot: I'm going to focus on transaction commit for now, then
transaction start, and I may stop there.

Nico


Re: [sqlite] Transaction speed too slow?

2011-05-09 Thread Black, Michael (IS)
You do realize the number they quote is a MAX, not necessarily what you'll
get.

With 16 transactions per second you're writing one transaction every 63ms.  My
understanding is that the hard drive in the netbook has a 15ms access time, so
you're seeing 4 accesses per transaction with the way you're doing this test
(write caching off, full sync).
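
(A quick check of that arithmetic -- an editorial illustration, not part of
the original message:)

# 16 transactions/sec, with an assumed 15 ms per disk access
ms_per_txn = 1000.0 / 16      # 62.5 ms per transaction
print ms_per_txn / 15.0       # ~4.2 disk accesses per transaction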

When you write a record, the filesystem also has to update last-access times,
for example.

Might be interesting to turn off last-access updates and see if it changes
your results much.

http://www.cyberciti.biz/faq/linux-noatime-ext3-ext4-fstab-configuration/
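
(For reference, noatime can also be tried without editing /etc/fstab by
remounting; the mountpoint here is illustrative:)

$ sudo mount -o remount,noatime /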

Michael D. Black
Senior Scientist
NG Information Systems
Advanced Analytics Directorate

From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on 
behalf of Nick [maill...@css-uk.net]
Sent: Sunday, May 08, 2011 3:52 PM
To: sqlite-users@sqlite.org
Subject: EXT :[sqlite] Transaction speed too slow?


The SQLite FAQ (http://www.sqlite.org/faq.html#q19) says "A transaction
normally requires two complete rotations of the disk platter, which on a
7200RPM disk drive limits you to about 60 transactions per second."

Using Linux/Ubuntu 10.04 on an otherwise idle Atom-powered nettop with a
5400RPM disk drive I was expecting circa 45 transactions per second. However,
using the basic Python script below I seem to be getting much slower results,
namely:
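
(As a sanity check of that expectation -- an editorial illustration, not part
of the original message -- the FAQ's two-rotations rule works out as:)

# two full platter rotations per committed transaction
for rpm in (7200, 5400):
    rotations_per_sec = rpm / 60.0        # 120.0 or 90.0
    print rpm, "RPM ->", rotations_per_sec / 2, "transactions/sec max"
# prints: 7200 RPM -> 60.0 transactions/sec max
#         5400 RPM -> 45.0 transactions/sec max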

ext3 partition: 16 transactions per second
jfs partition: 9 transactions per second
:memory: 15798 transactions per second

Does anyone have an explanation for this?

The script used is below. I'd be grateful if people could confirm whether it
is just my hardware or a common result.

FULL RESULTS:

nick@Haribo:~$ sudo hdparm -W 0 /dev/sda

/dev/sda:
 setting drive write-caching to 0 (off)
 write-caching =  0 (off)
nick@Haribo:~$ python write-transactions-1.py
Run Number: 1, Location: :memory:
0:00:00.000108
0:00:00.000058
0:00:00.000057
0:00:00.000057
0:00:00.000056
0:00:00.000056
0:00:00.000057
0:00:00.000057
0:00:00.000057
0:00:00.000056
Time Avg: 0.000062
Trans/sec Avg: 16155
Run Number: 2, Location: write-transactions-1.db
0:00:00.099678
0:00:00.121630
0:00:00.110672
0:00:00.099599
0:00:00.110782
0:00:00.099542
0:00:00.121776
0:00:00.099599
0:00:00.121794
0:00:00.099624
Time Avg: 0.108470
Trans/sec Avg: 9

---

nick@Haribo:~$ sudo hdparm -W 1 /dev/sda

/dev/sda:
 setting drive write-caching to 1 (on)
 write-caching =  1 (on)
nick@Haribo:~$ python write-transactions-1.py
Run Number: 1, Location: :memory:
0:00:00.000113
0:00:00.000057
0:00:00.000056
0:00:00.000056
0:00:00.000056
0:00:00.000057
0:00:00.000057
0:00:00.000056
0:00:00.000056
0:00:00.000056
Time Avg: 0.000062
Trans/sec Avg: 16129
Run Number: 2, Location: write-transactions-1.db
0:00:00.001438
0:00:00.000898
0:00:00.000989
0:00:00.000984
0:00:00.000982
0:00:00.001945
0:00:00.001059
0:00:00.001169
0:00:00.000914
0:00:00.001063
Time Avg: 0.001144
Trans/sec Avg: 874


--

SCRIPT

# Test benchmark for transactions per second using the built-in SQLite module
import sqlite3
import datetime
import operator

contype = [":memory:", "write-transactions-1.db"]

for runidx, runtype in enumerate(contype):

    # Heading
    print "Run Number: %d, Location: %s" % (runidx + 1, runtype)

    con = sqlite3.connect(runtype)
    con.isolation_level = None
    con.execute("PRAGMA synchronous = FULL")
    times = []

    # Create the table
    con.execute("drop table if exists person")
    con.execute("create table person(firstname, lastname)")

    # Measure 10 points
    for run in range(10):

        # BEGIN transaction
        con.execute("begin")

        # Fill the table
        con.execute("insert into person(firstname, lastname) values (?, ?)",
                    ("alex", "bob"))

        # Start the clock (only the commit is timed)
        begin = datetime.datetime.now()

        # COMMIT transaction
        con.execute("commit")

        # Stop the clock
        end = datetime.datetime.now()

        # Record the time
        times.append(end - begin)

    # Display the 10 measurements
    for run in range(10):
        print times[run]

    # Print averages
    td = reduce(operator.add, times)
    totalresultsec = (float(td.microseconds)
                      + (td.seconds + td.days * 24 * 3600) * 10**6) / 10**6
    avgsec = totalresultsec / 10.0
    print "Time Avg: %f" % avgsec
    print "Trans/sec Avg: %0.0f" % (1.0 / avgsec)
    con.close()
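
(A side note on the averaging step -- an editorial sketch, not part of the
original script: on Python 2.7 and later, timedelta has total_seconds(), so
the averaging block above can be reduced to one line, assuming the same
`times` list:)

avgsec = sum(times, datetime.timedelta()).total_seconds() / 10.0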



Re: [sqlite] EXT : Transaction speed too slow?

2011-05-09 Thread Black, Michael (IS)

Michael D. Black
Senior Scientist
NG Information Systems
Advanced Analytics Directorate

From: sqlite-users-boun...@sqlite.org [sqlite-users-boun...@sqlite.org] on
behalf of Nick [maill...@css-uk.net]
Sent: Sunday, May 08, 2011 3:52 PM
To: sqlite-users@sqlite.org
Subject: EXT :[sqlite] Transaction speed too slow?


Re: [sqlite] Query question

2011-05-09 Thread Matthew Jones
Thanks guys. I knew it would be simple (and I should have known it already).

>   select a, count(distinct b) from a group by a;
>
> or if you are checking a specific value of a:
>
>   select count(distinct b) from a where a=10;
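
(A quick illustration of the suggested query using the stdlib sqlite3 module;
the sample data is invented for the example:)

import sqlite3
con = sqlite3.connect(":memory:")
con.execute("create table a(a, b)")
con.executemany("insert into a values (?, ?)",
                [(10, "x"), (10, "x"), (10, "y"), (20, "z")])
for row in con.execute("select a, count(distinct b) from a group by a"):
    print row
# (10, 2)
# (20, 1)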

-- 
Matthew Jones
Hewlett-Packard Ltd
Long Down Avenue
Stoke Gifford
Bristol.  BS34 8QZ
Tel:   +44 (0) 117 312 7490
Email: matthew.jo...@hp.com
