The following pure Python code does the same thing, memory-mapping the file when
reading backwards ... it works on Python 2 and 3, 32-bit and 64-bit, and
emulates what sqlite3 is doing as closely as I can manage. As long as the mmap
fits in memory it does not seem to affect performance. (An illustrative
mmap-based variant of the backward read is sketched after the listing.)
---//---
from __future__ import absolute_import, print_function, division, unicode_literals
import random
import time
import os
import sys
if sys.version_info.major > 2:
    xrange = range
blocksize = 4096
blocks = 1024 * 4096
buffer = []
for i in xrange(blocksize):
    buffer.append(chr(random.randint(ord('A'), ord('z'))))
buffer = ''.join(buffer)
if sys.version_info.major > 2:
    buffer = buffer.encode('utf_8')
if os.path.exists('junk.dat'):
    print('Deleting junk.dat test file')
    os.unlink('junk.dat')
if not os.path.exists('junk.dat'):
    print('Creating 0 length junk.dat test file')
    f = open('junk.dat', 'wb')
    f.close()
f = open('junk.dat', 'rb+', buffering=0)
print('Writing File Forward', end=' ')
st = time.time()
# Write every 4096-byte block in order, then flush and sync once at the end.
for i in xrange(blocks):
    f.seek(i * blocksize)
    f.write(buffer)
f.flush()
os.fsync(f.fileno())
print(time.time() - st, 'seconds')
print()
def readforward():
    # Read every 4096-byte block from first to last.
    print('Reading File Forward', end=' ')
    st = time.time()
    for i in xrange(blocks):
        f.seek(i * blocksize)
        f.read(blocksize)
    f.flush()
    os.fsync(f.fileno())
    print(time.time() - st, 'seconds')
    print()
def readbackwards():
    # Read every 4096-byte block from last to first.
    print('Reading File Backward', end=' ')
    st = time.time()
    for i in xrange(blocks - 1, -1, -1):
        f.seek(i * blocksize)
        f.read(blocksize)
    f.flush()
    os.fsync(f.fileno())
    print(time.time() - st, 'seconds')
    print()
readforward()
readbackwards()
readforward()
readbackwards()
readbackwards()
readforward()
f.close()
---//---
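
For illustration only, here is a minimal sketch of what the mmap-based backward
read mentioned above could look like. It is not a copy of sqlite3's own mmap
logic; the function name is made up, and it reuses the junk.dat file plus the
same blocks/blocksize values as the listing.
---//---
import mmap
import sys
import time

if sys.version_info.major > 2:
    xrange = range

# Same test geometry as the listing above.
blocksize = 4096
blocks = 1024 * 4096

def readbackwards_mmap():
    # Map the whole test file read-only and touch each 4096-byte block from
    # last to first by slicing the map.
    print('Reading File Backward (mmap)', end=' ')
    st = time.time()
    with open('junk.dat', 'rb') as fh:
        mm = mmap.mmap(fh.fileno(), 0, access=mmap.ACCESS_READ)
        try:
            for i in xrange(blocks - 1, -1, -1):
                mm[i * blocksize:(i + 1) * blocksize]
        finally:
            mm.close()
    print(time.time() - st, 'seconds')
    print()
---//---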
---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a
lot about anticipated traffic volume.
>-----Original Message-----
>From: sqlite-users [mailto:sqlite-users-
>[email protected]] On Behalf Of David Raymond
>Sent: Monday, 18 June, 2018 07:10
>To: SQLite mailing list
>Subject: Re: [sqlite] .timer
>
>I haven't grasped all the fancy memory talk that's been going on
>here, but I have one request. Would you try the slowdown tests with a
>SQLite version compiled with...
>SQLITE_DEFAULT_MMAP_SIZE=0
>SQLITE_MAX_MMAP_SIZE=0
>...and see if anything changes? I started compiling with those
>options after some similar speed issues "back when" and things seem
>to have cleared up since then. I'm curious if it's because I added
>that or if it's just a coincidence.
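
Related, for anyone who cannot rebuild: mmap I/O can also be turned off per
connection at run time with PRAGMA mmap_size=0. That is not identical to the
compile-time defaults above, but it is handy for a quick test. A minimal
Python sketch; the database name is just a placeholder.
---//---
import sqlite3

# Open a database and turn off memory-mapped I/O for this connection only.
con = sqlite3.connect('test.db')                    # placeholder file name
con.execute('PRAGMA mmap_size = 0')                 # 0 disables mmap I/O
print(con.execute('PRAGMA mmap_size').fetchone())   # should print (0,)
con.close()
---//---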
_______________________________________________
sqlite-users mailing list
[email protected]
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users