Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min

2019-12-11 Thread James K. Lowden
On Mon, 9 Dec 2019 22:02:07 -0500
Richard Damon  wrote:

> If we assume that overcommitting has been removed, then the fact
> that the fork succeeded is the promise that both processes have the
> right to access all of their address space. Any page that is writable
> needs to have swap space reserved, 

Yes, that's SOP in most systems, and not expensive.  The kernel
need not write anything to swap; it just has to book the space in its
swap account.  
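
Linux, for one, publishes that ledger in /proc/meminfo.  A minimal sketch
that prints the two relevant entries -- CommitLimit, the most the kernel
will promise, and Committed_AS, what it has promised so far (the field
names are Linux-specific):

/* Print the kernel's commit accounting from /proc/meminfo. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[256];
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) { perror("fopen"); return 1; }
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "CommitLimit:", 12) == 0 ||
            strncmp(line, "Committed_AS:", 13) == 0)
            fputs(line, stdout);   /* e.g. "Committed_AS:  1234567 kB" */
    }
    fclose(f);
    return 0;
}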

--jkl


Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min

2019-12-09 Thread Keith Medcalf

On Monday, 9 December, 2019 20:02, Richard Damon  
wrote:

>On 12/9/19 4:25 PM, Keith Medcalf wrote:

>>> You could still have fast forking without overcommitting, you’d just
>>> pay the cost in unreachable RAM.

>>> If I have 4 GB of RAM in the system, and the kernel takes 1 GB of
>>> that, I start a 2.5 GB user space process, and my process forks 
>>> itself with the intent of starting an 0.1 GB process, that fork 
>>> would have to fail if overcommitting weren’t allowed.

>> No, it wouldn't, and there is no overcommitment.  You are creating a
>> second process that uses the same V:R mapping as the original process,
>> and thus it consumes no more virtual memory after the fork operation
>> than before (except for the bytes to track the new process).  You now
>> have two execution paths through the same mapping, which may require a
>> larger real-memory working set, but you have not increased the virtual
>> memory size -- that is, until one of the processes modifies a memory
>> page, in which case an additional virtual page must be allocated to
>> hold the modified page.

>> Overcommitment occurs at the R level, not at the second V in the V:V:R
>> mapping.

>> This is why shared libraries (and discontiguous saved segments) were
>> invented.  It permits the per-process mapping (the first V in V:V:R) to
>> use already existing virtual pages (the second V in V:V:R) without
>> increasing the count of Virtual Pages.  It is not overcommitment unless
>> the number of virtual pages (the second V in V:V:R) exceeds the number of
>> pages in R plus backing store.

> Delaying the conversion of shared to distinct (or at least delaying the
> reservation of backing store) is one form of overcommitting. If we
> assume that overcommitting has been removed, then the fact that the
> fork succeeded is the promise that both processes have the right to
> access all of their address space. Any page that is writable needs to
> have swap space reserved, or you have allowed overcommitting. The OS
> can delay actually creating the new pages, and thus save some work, but
> if you haven't reserved the space for the virtual page, you are allowing
> an overcommit.

Yes, you are correct.  In order to not allow overcommit, all the writable
pages would have to be committed storage, even if they are presently mapped
to read from an already existing virtual page (as in CoW).
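
One way to watch that accounting from user space, as a sketch: it assumes
Linux with vm.overcommit_memory=2 (strict accounting) and a mapping size
chosen to exceed the commit limit.

/* A private writable mapping is charged against CommitLimit at mmap()
 * time; MAP_NORESERVE asks the kernel to skip that charge, deferring
 * any failure to the moment the pages are actually touched.  The
 * 64 GiB size is an assumption for illustration. */
#define _GNU_SOURCE
#include <stdio.h>
#include <sys/mman.h>

int main(void)
{
    size_t len = (size_t)64 << 30;
    void *strict = mmap(NULL, len, PROT_READ | PROT_WRITE,
                        MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    printf("accounted: %s\n", strict == MAP_FAILED ? "refused" : "granted");
    void *lazy = mmap(NULL, len, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    printf("noreserve: %s\n", lazy == MAP_FAILED ? "refused" : "granted");
    return 0;
}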

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.





Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min

2019-12-09 Thread Richard Damon
On 12/9/19 4:25 PM, Keith Medcalf wrote:
>> You could still have fast forking without overcommitting, you’d just pay
>> the cost in unreachable RAM.
>>
>> If I have 4 GB of RAM in the system, and the kernel takes 1 GB of that, I
>> start a 2.5 GB user space process, and my process forks itself with the
>> intent of starting an 0.1 GB process, that fork would have to fail if
>> overcommitting weren’t allowed.
> No, it wouldn't, and there is no overcommitment.  You are creating a second
> process that uses the same V:R mapping as the original process, and thus it
> consumes no more virtual memory after the fork operation than before (except
> for the bytes to track the new process).  You now have two execution paths
> through the same mapping, which may require a larger real-memory working set,
> but you have not increased the virtual memory size -- that is, until one of
> the processes modifies a memory page, in which case an additional virtual
> page must be allocated to hold the modified page.
>
> Overcommitment occurs at the R level, not at the second V in the V:V:R
> mapping.
>
> This is why shared libraries (and discontiguous saved segments) were 
> invented.  It permits the per-process mapping (the first V in V:V:R) to use 
> already existing virtual pages (the second V in V:V:R) without increasing the 
> count of Virtual Pages.  It is not overcommitment unless the number of
> virtual pages (the second V in V:V:R) exceeds the number of pages in R plus 
> backing store.
>
Delaying the conversion of shared to distinct (or at least delaying the
reservation of backing store) is one form of overcommitting. If we
assume that overcommitting has been removed, then the fact that the
fork succeeded is the promise that both processes have the right to
access all of their address space. Any page that is writable needs to
have swap space reserved, or you have allowed overcommitting. The OS
can delay actually creating the new pages, and thus save some work, but
if you haven't reserved the space for the virtual page, you are allowing
an overcommit.
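
A sketch of that consequence, assuming strict accounting
(vm.overcommit_memory=2) and a heap big enough that two copies don't fit
in the commit budget: the fork() itself fails with ENOMEM, even though
the child would never have touched the pages.

/* Under strict accounting, the child's CoW pages must be reserved up
 * front, so fork() of a large writable process can fail outright.
 * The 3 GiB size is an assumption; anything over half the remaining
 * commit budget shows the effect. */
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    size_t len = (size_t)3 << 30;
    char *big = malloc(len);
    if (!big) { perror("malloc"); return 1; }
    memset(big, 1, len);               /* make every page private and dirty */

    pid_t pid = fork();
    if (pid < 0) {
        printf("fork: %s\n", strerror(errno));   /* ENOMEM expected */
    } else if (pid == 0) {
        _exit(0);                      /* child: would have exec'd here */
    } else {
        waitpid(pid, NULL, 0);
        puts("fork succeeded: the reservation fit");
    }
    free(big);
    return 0;
}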

-- 
Richard Damon



Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min

2019-12-09 Thread Keith Medcalf

>You could still have fast forking without overcommitting, you’d just pay
>the cost in unreachable RAM.
>
>If I have 4 GB of RAM in the system, and the kernel takes 1 GB of that, I
>start a 2.5 GB user space process, and my process forks itself with the
>intent of starting an 0.1 GB process, that fork would have to fail if
>overcommitting weren’t allowed.

No, it wouldn't, and there is no overcommitment.  You are creating a second
process that uses the same V:R mapping as the original process, and thus it
consumes no more virtual memory after the fork operation than before (except
for the bytes to track the new process).  You now have two execution paths
through the same mapping, which may require a larger real-memory working set,
but you have not increased the virtual memory size -- that is, until one of
the processes modifies a memory page, in which case an additional virtual
page must be allocated to hold the modified page.

Overcommitment occurs at the R level, not at the second V in the V:V:R mapping.

This is why shared libraries (and discontiguous saved segments) were invented.  
It permits the per-process mapping (the first V in V:V:R) to use already 
existing virtual pages (the second V in V:V:R) without increasing the count of 
Virtual Pages.  It is not overcommitment unless the number of virtual pages
(the second V in V:V:R) exceeds the number of pages in R plus backing store.
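
That sharing is visible on Linux in /proc/self/smaps: the Shared_Clean
pages of a libc segment are pages of the second V that every process
mapping the library reuses.  A minimal sketch (matching on the substring
"libc" is an assumption about the library's file name):

/* Print Rss and Shared_Clean for this process's libc mappings. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    char line[512];
    unsigned long lo, hi;
    int in_libc = 0;
    FILE *f = fopen("/proc/self/smaps", "r");
    if (!f) { perror("fopen"); return 1; }
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "%lx-%lx", &lo, &hi) == 2)    /* mapping header */
            in_libc = strstr(line, "libc") != NULL;
        else if (in_libc && (strncmp(line, "Rss:", 4) == 0 ||
                             strncmp(line, "Shared_Clean:", 13) == 0))
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}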

-- 
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume. 





Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min

2019-12-09 Thread Warren Young
On Dec 9, 2019, at 12:54 PM, Richard Damon  wrote:
> 
> But without virtual memory, many application combinations that work
> acceptably now would just fail to run at all.

You don’t even have to get into swapping to find such cases.

I once ran a headless app on a small cloud VPS that would run the system out of 
memory because it was linked to some rather large GUI libraries, which provided 
some small but essential bits of functionality.  Because those libraries were 
written to assume they were running on multi-gigabyte desktop computers with 
VMM, they did things that were simply ill-considered on my 256 MB VPS, which 
was hosted on tech without VMM.

If my VPS provider had used a hosting technology that allowed for swap space, 
most of those libraries’ pages could have been swapped out, solving the problem.

Instead, I ended up having to upgrade to a 512 MB plan just to give that 
program some scratch space to play with.

> Virtual memory itself isn’t the issue.

Well, every layer of indirection has a cost.  The question then becomes, what’s 
the cost of the alternatives?

To take my GUI library case again, I could have rebuilt my app statically so 
that only the necessary library functions were linked into my program instead 
of having the whole .so mapped into my process's VM space, but that then means 
I need to relink the program every time the OS updates that library.

I can pay that cost at least three different ways:

1. Download the sources again on each library upgrade, build from source, 
install, and remove the build and source trees.

2. Keep the sources around, build, install, and “make clean” on each library 
upgrade, paying extra for the disk space to hold the sources, but saving some 
on bandwidth and disk I/O from not needing to repeatedly unpack tarballs.

3. Keep the objects around as well, paying more for disk to hold the built but 
unlinked binaries in order to save some CPU on each relink.

TANSTAAFL.  You don’t get to not pay, you only get to choose *where* you pay.

> the Linux system to my understanding doesn’t have an easy call to just start 
> up a brand new process with parameters from you

Whether that’s true depends on your definitions.  I’d say that system() and 
posix_spawn() are easy calls for starting brand new processes.

However, these calls may be implemented in terms of fork() or similar, so we 
must continue down the rabbit hole…
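
A sketch of the posix_spawn() route, which hides whatever fork()-like
machinery the platform uses behind a single call (the command spawned
here is arbitrary):

/* Start a brand-new process without writing fork()/exec() yourself. */
#include <spawn.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>

extern char **environ;

int main(void)
{
    pid_t pid;
    char *argv[] = { "echo", "spawned without an explicit fork", NULL };
    int rc = posix_spawnp(&pid, "echo", NULL, NULL, argv, environ);
    if (rc != 0) { fprintf(stderr, "posix_spawnp: error %d\n", rc); return 1; }
    waitpid(pid, NULL, 0);             /* reap the child */
    return 0;
}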

> a process will fork itself, creating two identical copies of itself, one will 
> continue, and the other will exec the new process, replacing itself with the 
> desired process. The act of forking SHOULD allocate all the virtual memory
> for the copy of the process, but that will take a bit of time.

You’re describing fork() before the mid-1980s, roughly.

> Because most of the time, all that memory is just going to be released in a 
> couple of instructions, it made sense to just postpone the actual allocation 
> until it was actually used (which it likely wasn’t).

It’s better to describe what happens as copy-on-write rather than anything 
being “postponed.”  Modern fork() uses the more powerful VMM features of CPUs 
to mark the forked process’s pages as CoW so that they’re shared between the 
two children until one tries to change them.  At that point, both processes get 
an independent copy of the changed page.

In the case of the fork()/exec() pattern, most pages never do get copied, since 
they’re almost immediately released by the exec().  Thus, the cost is a bit of 
setup and tear-down that strictly speaking didn’t need to happen. It’s tiny.
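
The behavior is easy to demonstrate.  A sketch: the child's store faults
in a private copy of the page, so the parent's value is untouched.

/* After fork(), pages are shared copy-on-write; a write in the child
 * gives the child its own copy and leaves the parent's page alone. */
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int *v = malloc(sizeof *v);
    *v = 42;

    pid_t pid = fork();
    if (pid == 0) {
        *v = 99;                       /* fault: kernel copies this page */
        _exit(0);
    }
    waitpid(pid, NULL, 0);
    printf("parent still sees %d\n", *v);   /* prints 42 */
    free(v);
    return 0;
}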

> the system allowed itself to overcommit memory

Which is fine as long as you don’t run the system into swapping and you keep a 
bit of swap space around.

You don’t run into serious problems under that condition until you run the 
system wholly out of swap, causing all of the bills to come due at once.

> If the system was changed to not allow over committing, then forking would be 
> slower which hits all of the standard system routines.  

You could still have fast forking without overcommitting, you’d just pay the 
cost in unreachable RAM.

If I have 4 GB of RAM in the system, and the kernel takes 1 GB of that, I start 
a 2.5 GB user space process, and my process forks itself with the intent of 
starting an 0.1 GB process, that fork would have to fail if overcommitting 
weren’t allowed.


Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min

2019-12-09 Thread Richard Damon
But without virtual memory, many application combinations that work acceptably
now would just fail to run at all. Virtual memory itself isn’t the issue. Also,
an OS could fairly easily be set up so that an application that starts to
thrash its virtual memory is dropped in priority for getting memory, and even
for getting pages swapped in, so that other applications have their operations
only minimally impacted.

One of the issues is due to the Linux fork/exec model. If a process wants to 
start a process that runs in parallel to it, the Linux system to my 
understanding doesn’t have an easy call to just start up a brand new process 
with parameters from you, but instead a process will fork itself, creating two 
identical copies of itself, one will continue, and the other will exec the new 
process, replacing itself with the desired process. The act of forking SHOULD 
allocate all the virtual memory for the copy of the process, but that will
take a bit of time. Because most of the time, all that memory is just going to 
be released in a couple of instructions, it made sense to just postpone the 
actual allocation until it was actually used (which it likely wasn’t). This 
‘optimization’ was so ‘complete’ that the system didn’t really keep track of 
how much memory had been promised to the various processes, so the system 
allowed itself to overcommit memory, and if it actually did run out, it didn’t
have a good way to determine who was at fault, and no way to tell a process
that memory promised to it earlier isn’t really available.
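
For reference, a minimal sketch of that fork-then-exec dance (the program
the child runs is arbitrary):

/* The child replaces itself with the desired program; the parent keeps
 * running in parallel and reaps it. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();                /* two identical copies */
    if (pid < 0) { perror("fork"); return 1; }
    if (pid == 0) {
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");              /* reached only if exec failed */
        _exit(127);
    }
    waitpid(pid, NULL, 0);             /* parent continues alongside */
    return 0;
}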

Fixing the issue is more of a political problem. With the current system, when 
a problem arises, you can normally find a user program or something the user 
did that was ‘bad’ and can be blamed for the problem. If the system was changed 
to not allow overcommitting, then forking would be slower, which hits all of
the standard system routines.

> On Dec 9, 2019, at 8:39 AM, Digital Dog  wrote:
> 
> For reasons which you've described I'm a big fan of removing virtual memory
> from CPUs altogether. That would speed up things considerably.
> 
>> On Sun, Dec 8, 2019 at 6:43 PM James K. Lowden 
>> wrote:
>> 
>> On Sat, 7 Dec 2019 05:23:15 +
>> Simon Slavin  wrote:
>> 
>>> (Your operating system is allowed to do this.  Checking how much
>>> memory is available for every malloc takes too much time.)
>> 
>> Not really.  Consider that many (all?) operating systems before Linux
>> that supported dynamic memory returned an error if the requested amount
>> couldn't be supplied.  Some of those machines had 0.1% of the
>> processing capacity, and yet managed to answer the question reasonably
>> quickly.
>> 
>> Oversubscribed memory rather has its origins in the changed ratio of
>> the speed of RAM to the speed of I/O, and the price of RAM.
>> 
>> As RAM prices dropped, our machines got more RAM and the bigger
>> applications that RAM supported.  As memory got faster, relatively, the
>> disk (ipso facto) has gotten slower. Virtual memory -- the hallmark of
>> the VAX, 4 decades ago -- has become infeasibly slow both because
>> the disk is relatively slower than it was, and because more is being
>> demanded of it to support today's big-memory applications.  Swapping in
>> Firefox, at 1 GB of memory, who knows why, is a much bigger deal than
>> Eight Megabytes and Constantly Swapping.
>> 
>> If too much paging makes the machine too slow (however measured) one
>> solution is less paging.  One administrative lever is to constrain how
>> much paging is possible by limiting the paging resource: swap space.
>> However, limiting swap space may leave the machine underutilized,
>> because many applications allocate memory they never use.
>> 
>> Rather than prefer applications that use resources rationally or
>> administer machines to prevent thrashing, the best-effort, least-effort
>> answer was lazy allocation, and its infamous gap-toothed cousin, the
>> OOM.
>> 
>> Nothing technical mandates oversubscribed memory.  The problem, as
>> ever, is not with the stars, but with ourselves.
>> 
>> --jkl
>> 
>> 


Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min

2019-12-09 Thread Digital Dog
For reasons which you've described I'm a big fan of removing virtual memory
from CPUs altogether. That would speed up things considerably.

On Sun, Dec 8, 2019 at 6:43 PM James K. Lowden 
wrote:

> On Sat, 7 Dec 2019 05:23:15 +
> Simon Slavin  wrote:
>
> > (Your operating system is allowed to do this.  Checking how much
> > memory is available for every malloc takes too much time.)
>
> Not really.  Consider that many (all?) operating systems before Linux
> that supported dynamic memory returned an error if the requested amount
> couldn't be supplied.  Some of those machines had 0.1% of the
> processing capacity, and yet managed to answer the question reasonably
> quickly.
>
> Oversubscribed memory rather has its origins in the changed ratio of
> the speed of RAM to the speed of I/O, and the price of RAM.
>
> As RAM prices dropped, our machines got more RAM and the bigger
> applications that RAM supported.  As memory got faster, relatively, the
> disk (ipso facto) has gotten slower. Virtual memory -- the hallmark of
> the VAX, 4 decades ago -- has become infeasibly slow both because
> the disk is relatively slower than it was, and because more is being
> demanded of it to support today's big-memory applications.  Swapping in
> Firefox, at 1 GB of memory, who knows why, is a much bigger deal than
> Eight Megabytes and Constantly Swapping.
>
> If too much paging makes the machine too slow (however measured) one
> solution is less paging.  One administrative lever is to constrain how
> much paging is possible by limiting the paging resource: swap space.
> However, limiting swap space may leave the machine underutilized,
> because many applications allocate memory they never use.
>
> Rather than prefer applications that use resources rationally or
> administer machines to prevent thrashing, the best-effort, least-effort
> answer was lazy allocation, and its infamous gap-toothed cousin, the
> OOM.
>
> Nothing technical mandates oversubscribed memory.  The problem, as
> ever, is not with the stars, but with ourselves.
>
> --jkl
>
>


Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min

2019-12-08 Thread James K. Lowden
On Sat, 7 Dec 2019 05:23:15 +
Simon Slavin  wrote:

> (Your operating system is allowed to do this.  Checking how much
> memory is available for every malloc takes too much time.)

Not really.  Consider that many (all?) operating systems before Linux
that supported dynamic memory returned an error if the requested amount
couldn't be supplied.  Some of those machines had 0.1% of the
processing capacity, and yet managed to answer the question reasonably
quickly.  
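
The difference shows up in a few lines of C.  A sketch, with the size
deliberately absurd: on a strict-accounting system the malloc() itself
reports the failure, while under lazy allocation it "succeeds" and the
bill comes due at first touch.

/* Where does an impossible allocation fail: at malloc(), or at first
 * touch?  16 TiB is assumed to exceed RAM plus swap. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t len = (size_t)1 << 44;      /* 16 TiB */
    char *p = malloc(len);
    if (!p) {
        perror("malloc");              /* the pre-Linux answer: fail now */
        return 1;
    }
    memset(p, 0, len);                 /* the lazy answer: fault later,
                                          possibly waking the OOM killer */
    free(p);
    return 0;
}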

Oversubscribed memory rather has its origins in the changed ratio of the
speed of RAM to the speed of I/O, and the price of RAM.

As RAM prices dropped, our machines got more RAM and the bigger
applications that RAM supported.  As memory got faster, relatively, the
disk (ipso facto) has gotten slower. Virtual memory -- the hallmark of
the VAX, 4 decades ago -- has become infeasibly slow both because
the disk is relatively slower than it was, and because more is being
demanded of it to support today's big-memory applications.  Swapping in
Firefox, at 1 GB of memory, who knows why, is a much bigger deal than
Eight Megabytes and Constantly Swapping.  

If too much paging makes the machine too slow (however measured) one
solution is less paging.  One administrative lever is to constrain how
much paging is possible by limiting the paging resource: swap space.
However, limiting swap space may leave the machine underutilized,
because many applications allocate memory they never use.  

Rather than prefer applications that use resources rationally or
administer machines to prevent thrashing, the best-effort, least-effort
answer was lazy allocation, and its infamous gap-toothed cousin, the
OOM.  

Nothing technical mandates oversubscribed memory.  The problem, as
ever, is not with the stars, but with ourselves.  

--jkl




Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min

2019-12-06 Thread Octopus ZHANG
I fully understand the query now. Thank you all :)


Simon Slavin  wrote on Saturday, 7 December 2019 at 1:23 PM:

> On 7 Dec 2019, at 5:10am, Octopus ZHANG  wrote:
>
> > I received no error from the execution. Could I know how to emit the
> error message if it is over length?
>
> There was no error because the command executed successfully.  You told
> SQLite to generate a string which was
>
> 1,000,000,000,000,003
>
> bytes long, which is many terabytes.  Your operating system agreed to
> reserve enough memory for it, though it took it more than a minute to do
> so.  Had you actually tried to access all that memory you would have
> received an error.
>
> (Your operating system is allowed to do this.  Checking how much memory is
> available for every malloc takes too much time.)
>
> Everything is executing correctly.  You discovered a command which really
> does take over a minute to execute.


-- 


Apologies for any delay in replying due to time differences.


Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min

2019-12-06 Thread Simon Slavin
On 7 Dec 2019, at 5:10am, Octopus ZHANG  wrote:

> I received no error from the execution. Could I know how to emit the error 
> message if it is over length?

There was no error because the command executed successfully.  You told SQLite 
to generate a string which was

1,000,000,000,000,003

bytes long, which is many terabytes.  Your operating system agreed to reserve
enough memory for it, though it took it more than a minute to do so.  Had you 
actually tried to access all that memory you would have received an error.

(Your operating system is allowed to do this.  Checking how much memory is 
available for every malloc takes too much time.)

Everything is executing correctly.  You discovered a command which really does 
take over a minute to execute.


Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min

2019-12-06 Thread Octopus ZHANG
Hi,

I received no error from the execution. Could I know how to emit the error
message if it is over length?

Richard Hipp  wrote on Friday, 6 December 2019 at 9:24 PM:

> On 12/6/19, Octopus ZHANG  wrote:
> > Hi all,
> >
> >
> > I'm trying to fuzz sqlite, and I found the following query was executed for
> > more than one minute. (./sqlite3 < query.sql)
>
> This is not a bug or a problem.  SQLite is doing exactly what you
> asked it to do, which is to generate a string that is 1,000,000,000,000,003
> bytes long.  That takes time, even on a fast machine.  (Actually,
> SQLite will error-out with an over-length string error at some point,
> but it still takes some time to reach that point.)
>
> >
> >>> SELECT
> > printf('%*.*c',9||00600&66,1003)""WHERE""/"";
> >
> > I also turned on the timer, but no time was printed. So I used `time` to
> > record:
> > +--+---+
> > | real | 1m38.036s |
> > | user | 1m36.086s |
> > | sys  |  0m1.948s |
> > +--+---+
> >
> > Here is how to reproduce:
> >
> > OS: Ubuntu 18.04.3 LTS, kernel 4.15.0-65-generic
> > SQLite version 3.30.1 2019-10-10 20:19:45 (used default command to build)
> >
> >
> > Yushan
>
>
> --
> D. Richard Hipp
> d...@sqlite.org


-- 


Apologies for any delay in replying due to time differences.


Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min

2019-12-06 Thread Jose Isaias Cabrera

Octopus ZHANG, on Friday, December 6, 2019 06:18 AM, wrote...
>
> Hi all,
>
> I'm trying to fuzz sqlite, and I found the following query was executed for
> more than one minute. (./sqlite3 < query.sql)
>
> >> SELECT
> printf('%*.*c',9||00600&66,1003)""WHERE""/"";
>
> I also turned on the timer, but no time was printed. So I used `time` to
> record:
> +--+---+
> | real | 1m38.036s |
> | user | 1m36.086s |
> | sys  |  0m1.948s |
> +--+---+

> Here is how to reproduce:

> OS: Ubuntu 18.04.3 LTS, kernel 4.15.0-65-generic
> SQLite version 3.30.1 2019-10-10 20:19:45 (used default command to build)

I actually ran out of memory...

 8:18:59.35>sqlite3
SQLite version 3.30.0 2019-10-04 15:03:17
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .timer on
sqlite>  SELECT
   ...> 
printf('%*.*c',9||00600&66,1003)""WHERE""/"";
Run Time: real 12.191 user 11.296875 sys 0.796875
Error: out of memory
sqlite>

josé


Re: [sqlite] SLOW execution: Simple Query Uses More than 1 min

2019-12-06 Thread Richard Hipp
On 12/6/19, Octopus ZHANG  wrote:
> Hi all,
>
>
> I'm trying to fuzz sqlite, and I found the following query was executed for
> more than one minute. (./sqlite3 < query.sql)

This is not a bug or a problem.  SQLite is doing exactly what you
asked it to do, which is to generate a string that is 1,000,000,000,000,003
bytes long.  That takes time, even on a fast machine.  (Actually,
SQLite will error-out with an over-length string error at some point,
but it still takes some time to reach that point.)
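
The cutoff is SQLITE_LIMIT_LENGTH, which defaults to SQLITE_MAX_LENGTH
(one billion bytes).  A sketch of lowering the per-connection cap through
the C API so that an over-length string fails immediately instead of
after a long allocation attempt; the 1 MB figure is arbitrary:

/* Cap SQLITE_LIMIT_LENGTH so an over-length result errors out fast. */
#include <stdio.h>
#include <sqlite3.h>

int main(void)
{
    sqlite3 *db;
    char *err = NULL;
    if (sqlite3_open(":memory:", &db) != SQLITE_OK) return 1;

    sqlite3_limit(db, SQLITE_LIMIT_LENGTH, 1000000);   /* 1 MB cap */

    int rc = sqlite3_exec(db, "SELECT printf('%*c', 2000000, 'x');",
                          NULL, NULL, &err);
    if (rc != SQLITE_OK) {
        printf("error: %s\n", err);    /* expect "string or blob too big" */
        sqlite3_free(err);
    }
    sqlite3_close(db);
    return 0;
}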

>
>>> SELECT
> printf('%*.*c',9||00600&66,1003)""WHERE""/"";
>
> I also turned on the timer, but no time was printed. So I used `time` to
> record:
> +--+---+
> | real | 1m38.036s |
> | user | 1m36.086s |
> | sys  |  0m1.948s |
> +--+---+
>
> Here is how to reproduce:
>
> OS: Ubuntu 18.04.3 LTS, kernel 4.15.0-65-generic
> SQLite version 3.30.1 2019-10-10 20:19:45 (used default command to build)
>
>
> Yushan


-- 
D. Richard Hipp
d...@sqlite.org


[sqlite] SLOW execution: Simple Query Uses More than 1 min

2019-12-06 Thread Octopus ZHANG
Hi all,


I'm trying to fuzz sqlite, and I found the following query was executed for
more than one minute. (./sqlite3 < query.sql)

>> SELECT
printf('%*.*c',9||00600&66,1003)""WHERE""/"";

I also turned on the timer, but no time was printed. So I used `time` to
record:
+--+---+
| real | 1m38.036s |
| user | 1m36.086s |
| sys  |  0m1.948s |
+--+---+

Here is how to reproduce:

OS: Ubuntu 18.04.3 LTS, kernel 4.15.0-65-generic
SQLite version 3.30.1 2019-10-10 20:19:45 (used default command to build)


Yushan