Re: [sqlite] Please help me fix the SQLite Git mirror

2019-04-22 Thread Rowan Worth
Richard Hipp wrote (quoting from several emails):

> The problem is that Git now thinks that 9b888fcc is the HEAD of master
> and that the true continuation of master (check-in 4f35b3b7 and
> beyond) are disconnected check-ins
>

Because from the git perspective it _is_ still the HEAD -- there have been no
further changes made on top of that commit. The "true" changes are in a
separate branch hanging off some historic fork point.

> I don't understand this part.  From the Fossil perspective, moving a
> check-in from one branch to another is just adding a new tag to that
> check-in.  No history is changed.  The DAG of check-ins (the block-chain)
> is unmodified.


Hm. Initially, the commits on the primary branch looked like this:

1. HISTORY - FORK - MISTAKE

Then you changed it to this:

2. HISTORY - FORK - FIXED - BEYOND

How can you justify the claim that history was unchanged on trunk between
time (1) and time (2)? You haven't just added a new check-in to the branch
in this situation (which git is more than happy to do via cherry-pick),
you've also erased the MISTAKE check-in.

What happens to fossil users who updated trunk while MISTAKE was the head?
Does the next update somehow pathfind to the new BEYOND head, backtracking
via FORK?

-Rowan
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] 1st Call For Papers - 26th Annual Tcl/Tk Conference (Tcl'2019)

2019-04-22 Thread conference

Hello SQLite Users, fyi ...

26th Annual Tcl/Tk Conference (Tcl'2019)
https://www.tcl-lang.org/community/tcl2019/

November 04 - 08, 2019
Crowne Plaza Houston River Oaks
2712 Southwest Freeway, 77098
Houston, Texas, USA

[ NEWS
 * [Submission is open](https://www.tcl-lang.org/community/tcl2019/cfp.html)
]

Important Dates:

Abstracts and proposals due   September 09, 2019
Notification to authors   September 16, 2019
WIP and BOF reservations open August 12, 2019 ** may change **
Registration opens            September 09, 2019 ** may change **
Author materials due  October 14, 2019
Tutorials Start   November 04, 2019
Conference starts November 06, 2019

Email Contact:        tclconfere...@googlegroups.com

Submission of Summaries

Tcl/Tk 2019 will be held in Houston, Texas, USA from November 04, 2019 to 
November 08, 2019.

The program committee is asking for papers and presentation proposals
from anyone using or developing with Tcl/Tk (and extensions). Past
conferences (Proceedings: https://www.tcl-lang.org/community/conferences.html)
have seen submissions covering a wide variety of topics including:

* Scientific and engineering applications
* Industrial controls
* Distributed applications and Network Management
* Object oriented extensions to Tcl/Tk
* New widgets for Tk
* Simulation and application steering with Tcl/Tk
* Tcl/Tk-centric operating environments
* Tcl/Tk on small and embedded devices
* Medical applications and visualization
* Use of different programming paradigms in Tcl/Tk and proposals for new
  directions.
* New areas of exploration for the Tcl/Tk language

Submissions should consist of an abstract of about 100 words and a
summary of not more than two pages, and should be sent as plain text
to tclconfere...@googlegroups.com no later than September 09, 2019.
Authors of accepted abstracts will have until October 14, 2019 to submit
their final paper for inclusion in the conference proceedings. The proceedings
will be made available on digital media, so extra materials such as
presentation slides, code examples, code for extensions etc. are
encouraged.

Printed proceedings will be produced as an on-demand book at lulu.com.
Online proceedings will appear via
https://www.tcl-lang.org/community/conferences.html

The authors will have 30 minutes to present their paper at
the conference.

The program committee will review and evaluate papers according to the
following criteria:

* Quantity and quality of novel content
* Relevance and interest to the Tcl/Tk community
* Suitability of content for presentation at the conference

Proposals may report on commercial or non-commercial systems, but
those with only blatant marketing content will not be accepted.

Application and experience papers need to strike a balance between
background on the application domain and the relevance of Tcl/Tk to
the application. Application and experience papers should clearly
explain how the application or experience illustrates a novel use of
Tcl/Tk, and what lessons the Tcl/Tk community can derive from the
application or experience to apply to their own development efforts.

Papers accompanied by non-disclosure agreements will be returned to
the author(s) unread. All submissions are held in the highest
confidentiality prior to publication in the Proceedings, both as a
matter of policy and in accord with the U. S. Copyright Act of 1976.

The primary author for each accepted paper will receive registration
to the Technical Sessions portion of the conference at a reduced rate.

Other Forms of Participation

The program committee also welcomes proposals for panel discussions of
up to 90 minutes. Proposals should include a list of confirmed
panelists, a title and format, and a panel description with position
statements from each panelist. Panels should have no more than four
speakers, including the panel moderator, and should allow time for
substantial interaction with attendees. Panels are not presentations
of related research papers.

Slots for Works-in-Progress (WIP) presentations and Birds-of-a-Feather
sessions (BOFs) are available on a first-come, first-served basis
starting on August 12, 2019. Specific instructions for reserving WIP
and BOF time slots will be provided in the registration information
available on August 12, 2019. Some WIP and BOF time slots will be held open
for on-site reservation. All attendees with an interesting work in
progress should consider reserving a WIP slot.

Registration Information

More information on the conference is available on the conference Web
site (https://www.tcl-lang.org/community/tcl2019/) and will be
published on various Tcl/Tk-related information channels.

To keep in touch with news regarding the conference, subscribe to the
tclconfere...@googlegroups.com list. See:
https://groups.google.com/forum/#!forum/tclconference for list
information, archive, and subscription.

To keep in touch with Tcl events in general, su

[sqlite] Best way to ALTER COLUMN ADD multiple tables if the columns don't already exist

2019-04-22 Thread Tommy Lane
Hi all,
I'm still working on this journal app and ran across a need to
update our table schema. I want my entries to be _essentially_
immutable. My solution to this problem is a linked-list-type dependency
where each entry has a parent and a child, which correspond to an
entry's past and future modifications.

I do not want to have to replace all the journal files (sqlite
databases) with a completely new set of tables; I would like to update
them. I know there should be a way to programmatically update the
table columns with SQL, but I'm not sure of the most effective way to go
about doing it.

Current Schema creation command:
String createTableSQL = "CREATE TABLE IF NOT EXISTS entries(\n" +
    "id INTEGER PRIMARY KEY, entry_creation_date TEXT NOT NULL, \n" +
    "entry_last_update_date TEXT NOT NULL, entry_title TEXT, \n" +
    "entry_content TEXT, entry_data BLOB);";

How would I go about updating an existing table to also have a
parent integer and child integer column? And how could I follow this
convention of creation/update verification for column modifications
on existing databases?
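
For concreteness, a minimal sketch of one possible migration (the column
names "parent" and "child" follow the description above but are guesses;
SQLite has no "ADD COLUMN IF NOT EXISTS", so existence is checked first
via pragma_table_info, available since SQLite 3.16):

    -- returns 0 when the column still needs to be added
    SELECT COUNT(*) FROM pragma_table_info('entries') WHERE name = 'parent';

    ALTER TABLE entries ADD COLUMN parent INTEGER;
    ALTER TABLE entries ADD COLUMN child INTEGER;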

Thanks for any help!

-Tommy
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple Independent Database Instances

2019-04-22 Thread Keith Medcalf

Interesting.  If you can guarantee that each database will only ever be 
accessed from a single thread, give it a try with 
SQLITE_OPEN_NOMUTEX | SQLITE_OPEN_READONLY in the flags parameter of 
sqlite3_open_v2 ...

Don't know if it will make a difference, but it might.
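
A minimal sketch of that call (the file name is a placeholder):

    sqlite3 *db = NULL;
    int rc = sqlite3_open_v2("some.db", &db,
                             SQLITE_OPEN_NOMUTEX | SQLITE_OPEN_READONLY,
                             NULL);
    if (rc != SQLITE_OK) {
        /* a handle may still have been allocated on error */
    }
    sqlite3_close(db);  /* safe even if db is NULL */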

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.


>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Lee, Jason
>Sent: Monday, 22 April, 2019 17:33
>To: sqlite-users@mailinglists.sqlite.org
>Subject: Re: [sqlite] Multiple Independent Database Instances
>
>> How does each thread know whether the file has been "previously
>processed" or not?
>
>
>The paths are pushed onto a queue and each thread pops the top off. I
>am also looking into the queuing code to see if there are issues.
>
>
>> In other words, if you "get rid of" all the sqlite3 processing and
>replace it with a 5 ms sleep, does increasing the number of threads
>exhibit the same symptom?
>
>
>The timings were for sqlite3_open_v2, not for the whole process. The
>current code is effectively just an sqlite3_open_v2 followed by an
>sqlite3_close, and yet the time it takes to complete sqlite3_open_v2
>still increases with the number of threads.
>
>
>> Even with gobs of RAM and solid-state storage, I/O will quickly
>> bottleneck because the processor is 3 orders of magnitude faster
>than
>> RAM and 6 orders faster than the disk.  Once you exhaust the I/O
>bus,
>> it's exhausted.
>
>
>I/O is not the bottleneck. I have 8 NVMe drives in RAID0. I have not
>been able to drive the disks in the slightest because the threads
>spend the majority of their time in sqlite3_open_v2, sqlite3_close,
>and sqlite3_prepare_v2.
>
>
>Jason Lee
>
>
>From: sqlite-users  on
>behalf of James K. Lowden 
>Sent: Monday, April 22, 2019 4:53:42 PM
>To: sqlite-users@mailinglists.sqlite.org
>Subject: Re: [sqlite] Multiple Independent Database Instances
>
>On Mon, 22 Apr 2019 21:25:31 +
>"Lee, Jason"  wrote:
>
>> I have a set of several million database files sitting on my
>> filesystem. Each thread will open a previously unprocessed database
>> file, do some queries, close the database, and move on to the next
>> unprocessed database file.
>
>Fascinating.  One wonders what Feynman would have said.
>
>Even with gobs of RAM and solid-state storage, I/O will quickly
>bottleneck because the processor is 3 orders of magnitude faster than
>RAM and 6 orders faster than the disk.  Once you exhaust the I/O bus,
>it's exhausted.
>
>I would build a pipeline, and let processes do the work. Write a
>program
>to process a single database: open, query, output, close.  Then
>define
>a make(1) rule to convert one database into one output file.  Then
>run
>"make -j dd" where "dd" is the number of simultaneous processes (or
>"jobs").  I think you'll find ~10 processes is all you can sustain.
>
>You could use the sqlite3 utility as your "program", but it's not
>very
>good at detecting errors and returning a nonzero return status to the
>OS. Hence a bespoke program.  Also, you can get the data into binary
>form, suitable for concatenation into one big file for input into
>your
>numerical process.  That will go a lot faster.
>
>Although there's some overhead to invoking a million processes, it's
>dwarfed by the I/O time.
>
>The advantage of doing the work under make is that it's reusable and
>restartable.  If you bury the machine, you can kill make and restart
>it
>with a lower number of jobs.  If you find some databases are corrupt
>or
>incomplete, you can replace them, and make will reprocess only the
>new
>ones.  If you add other databases at a later time, make will process
>only those.  You can add subsequent steps, too; make won't start from
>square 1 unless it has to.
>
>With millions of inputs, the odds are you will find problems.
>Perfectly good input over a dataset that size probably occurred before
>in recorded history, but not frequently.
>
>I assume your millions of databases are not in a single directory;
>I'd
>guess you have 1000s of directories.  They offer convenient work
>partitions, which you might need; I have no idea how make will
>respond
>to a dependency tree with millions of nodes.
>
>--jkl
>
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple Independent Database Instances

2019-04-22 Thread Simon Slavin
On 23 Apr 2019, at 12:32am, Lee, Jason  wrote:

> The current code is effectively just an sqlite3_open_v2 followed by an 
> sqlite3_close

Then either your code is faulty, and doesn't actually do this, or your problem 
has nothing to do with SQLite.

SQLite doesn't open a database file when you use sqlite3_open_v2().  It doesn't 
even see whether the file, or even the path, exists.  The file is opened only 
when you use an API function which needs to read or write the file.  _open() 
followed by _close() just uses up a little memory to store the file path and 
some other settings, then releases it again.
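
A quick way to measure which call actually pays the cost, as a minimal 
sketch (the file name is a placeholder):

    #include <sqlite3.h>
    #include <stdio.h>
    #include <time.h>

    static double now(void) {
        struct timespec ts;
        clock_gettime(CLOCK_MONOTONIC, &ts);
        return ts.tv_sec + ts.tv_nsec / 1e9;
    }

    int main(void) {
        sqlite3 *db = NULL;
        sqlite3_stmt *stmt = NULL;
        double t0 = now();
        int rc = sqlite3_open_v2("test.db", &db, SQLITE_OPEN_READONLY, NULL);
        double t1 = now();
        if (rc == SQLITE_OK)  /* first call that must read the file */
            rc = sqlite3_prepare_v2(db, "SELECT 1 FROM sqlite_master",
                                    -1, &stmt, NULL);
        double t2 = now();
        printf("open: %.6fs  first prepare: %.6fs  (rc=%d)\n",
               t1 - t0, t2 - t1, rc);
        sqlite3_finalize(stmt);
        sqlite3_close(db);
        return 0;
    }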
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple Independent Database Instances

2019-04-22 Thread Lee, Jason
> How does each thread know whether the file has been "previously processed" or 
> not?


The paths are pushed onto a queue and each thread pops the top off. I am also 
looking into the queuing code to see if there are issues.


> In other words, if you "get rid of" all the sqlite3 processing and replace it 
> with a 5 ms sleep, does increasing the number of threads exhibit the same 
> symptom?


The timings were for sqlite3_open_v2, not for the whole process. The current 
code is effectively just an sqlite3_open_v2 followed by an sqlite3_close, and 
yet the time it takes to complete sqlite3_open_v2 still increases with the 
number of threads.


> Even with gobs of RAM and solid-state storage, I/O will quickly
> bottleneck because the processor is 3 orders of magnitude faster than
> RAM and 6 orders faster than the disk.  Once you exhaust the I/O bus,
> it's exhausted.


I/O is not the bottleneck. I have 8 NVMe drives in RAID0. I have not been able 
to drive the disks in the slightest because the threads spend the majority of 
their time in sqlite3_open_v2, sqlite3_close, and sqlite3_prepare_v2.


Jason Lee


From: sqlite-users  on behalf of 
James K. Lowden 
Sent: Monday, April 22, 2019 4:53:42 PM
To: sqlite-users@mailinglists.sqlite.org
Subject: Re: [sqlite] Multiple Independent Database Instances

On Mon, 22 Apr 2019 21:25:31 +
"Lee, Jason"  wrote:

> I have a set of several million database files sitting on my
> filesystem. Each thread will open a previously unprocessed database
> file, do some queries, close the database, and move on to the next
> unprocessed database file.

Fascinating.  One wonders what Feynman would have said.

Even with gobs of RAM and solid-state storage, I/O will quickly
bottleneck because the processor is 3 orders of magnitude faster than
RAM and 6 orders faster than the disk.  Once you exhaust the I/O bus,
it's exhausted.

I would build a pipeline, and let processes do the work. Write a program
to process a single database: open, query, output, close.  Then define
a make(1) rule to convert one database into one output file.  Then run
"make -j dd" where "dd" is the number of simultaneous processes (or
"jobs").  I think you'll find ~10 processes is all you can sustain.

You could use the sqlite3 utility as your "program", but it's not very
good at detecting errors and returning a nonzero return status to the
OS. Hence a bespoke program.  Also, you can get the data into binary
form, suitable for concatenation into one big file for input into your
numerical process.  That will go a lot faster.

Although there's some overhead to invoking a million processes, it's
dwarfed by the I/O time.

The advantage of doing the work under make is that it's reusable and
restartable.  If you bury the machine, you can kill make and restart it
with a lower number of jobs.  If you find some databases are corrupt or
incomplete, you can replace them, and make will reprocess only the new
ones.  If you add other databases at a later time, make will process
only those.  You can add subsequent steps, too; make won't start from
square 1 unless it has to.

With millions of inputs, the odds are you will find problems.
Perfectly good input over a dataset that size probably occurred before
in recorded history, but not frequently.

I assume your millions of databases are not in a single directory; I'd
guess you have 1000s of directories.  They offer convenient work
partitions, which you might need; I have no idea how make will respond
to a dependency tree with millions of nodes.

--jkl

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple Independent Database Instances

2019-04-22 Thread James K. Lowden
On Mon, 22 Apr 2019 21:25:31 +
"Lee, Jason"  wrote:

> I have a set of several million database files sitting on my
> filesystem. Each thread will open a previously unprocessed database
> file, do some queries, close the database, and move on to the next
> unprocessed database file.

Fascinating.  One wonders what Feynman would have said.  

Even with gobs of RAM and solid-state storage, I/O will quickly
bottleneck because the processor is 3 orders of magnitude faster than
RAM and 6 orders faster than the disk.  Once you exhaust the I/O bus,
it's exhausted.  

I would build a pipeline, and let processes do the work. Write a program
to process a single database: open, query, output, close.  Then define
a make(1) rule to convert one database into one output file.  Then run
"make -j dd" where "dd" is the number of simultaneous processes (or
"jobs").  I think you'll find ~10 processes is all you can sustain.  

You could use the sqlite3 utility as your "program", but it's not very
good at detecting errors and returning a nonzero return status to the
OS. Hence a bespoke program.  Also, you can get the data into binary
form, suitable for concatenation into one big file for input into your
numerical process.  That will go a lot faster.  

Although there's some overhead to invoking a million processes, it's
dwarfed by the I/O time.  

The advantage of doing the work under make is that it's reusable and
restartable.  If you bury the machine, you can kill make and restart it
with a lower number of jobs.  If you find some databases are corrupt or
incomplete, you can replace them, and make will reprocess only the new
ones.  If you add other databases at a later time, make will process
only those.  You can add subsequent steps, too; make won't start from
square 1 unless it has to.  

With millions of inputs, the odds are you will find problems.
Perfectly good input over a dataset that size probably occurred before
in recorded history, but not frequently.  

I assume your millions of databases are not in a single directory; I'd
guess you have 1000s of directories.  They offer convenient work
partitions, which you might need; I have no idea how make will respond
to a dependency tree with millions of nodes.  

--jkl

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple Independent Database Instances

2019-04-22 Thread Keith Medcalf

Interesting ... How does each thread know whether the file has been "previously 
processed" or not?

In other words, if you "get rid of" all the sqlite3 processing and replace it 
with a 5 ms sleep, does increasing the number of threads exhibit the same 
symptom?


That is if your thread code does this:

ThreadCode:
  while filename = getnextunprocessedfile()
    open the database filename
    do some stuff
    finalize all the statements
    close the database
    mark filename as being processed
  terminate cuz there is naught more to do

then replace it with this:

ThreadCode:
  while filename = getnextunprocessedfile()
    sleep 5 milliseconds
    mark filename as being processed
  terminate cuz there is naught more to do

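In C with POSIX threads, the sleep variant might look like this minimal
sketch (the queue helpers are hypothetical stand-ins for your own code):

    #include <pthread.h>
    #include <unistd.h>

    /* hypothetical wrappers around the shared work queue */
    extern char *get_next_unprocessed_file(void);
    extern void  mark_file_processed(char *filename);

    void *thread_code(void *arg) {
        char *filename;
        (void)arg;
        while ((filename = get_next_unprocessed_file()) != NULL) {
            usleep(5000);                  /* 5 ms stand-in for the sqlite3 work */
            mark_file_processed(filename);
        }
        return NULL;                       /* naught more to do */
    }
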
---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.


>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Lee, Jason
>Sent: Monday, 22 April, 2019 15:26
>To: SQLite mailing list
>Subject: Re: [sqlite] Multiple Independent Database Instances
>
>I have a set of several million database files sitting on my
>filesystem. Each thread will open a previously unprocessed database
>file, do some queries, close the database, and move on to the next
>unprocessed database file.
>
>
>Jason Lee
>
>
>From: sqlite-users  on
>behalf of Keith Medcalf 
>Sent: Monday, April 22, 2019 3:13:57 PM
>To: SQLite mailing list
>Subject: Re: [sqlite] Multiple Independent Database Instances
>
>
>This is somewhat unclear.  You make two conflicting statements:
>
>"I have been testing with 16, 32, and 48 threads/databases at once
>..."
>and
>"time it takes for all of the threads to just open all (millions) of
>the databases"
>
>So, are you:
>(a) opening one independently and uniquely named database per thread
>as would be apparent from the first conflicting statement above; or,
>(b) opening the same "millions" of databases per thread as indicated
>by the second conflicting statement above
>
>?
>
>Per my testing the time taken to spin up a thread and open a database
>with a unique database name is constant and linear up to the thread
>limit of a process (about 800 threads).  This is even true if you
>execute SQL against the connection within that thread and also count
>that execution time in the "Time Taken".
>
>---
>The fact that there's a Highway to Hell but only a Stairway to Heaven
>says a lot about anticipated traffic volume.
>
>>-Original Message-
>>From: sqlite-users [mailto:sqlite-users-
>>boun...@mailinglists.sqlite.org] On Behalf Of Lee, Jason
>>Sent: Monday, 22 April, 2019 14:08
>>To: SQLite mailing list
>>Subject: Re: [sqlite] Multiple Independent Database Instances
>>
>>Thanks for the quick responses!
>>
>>
>>I am on a machine with many many cores, 500GB RAM, and lots of NVMe
>>drives raided together, so the system should not be the issue. I
>have
>>been testing with 16, 32, and 48 threads/databases at once, and the
>>cumulative time it takes for all of the threads to just open all
>>(millions) of the databases goes from 1200 seconds to 2200 seconds
>to
>>3300 seconds.
>>
>>
>>As mentioned, this is likely to be something else, but I was hoping
>>that I was somehow using SQLite wrong.
>>
>>
>>Jason Lee
>>
>>
>>From: sqlite-users  on
>>behalf of Jens Alfke 
>>Sent: Monday, April 22, 2019 12:52:28 PM
>>To: SQLite mailing list
>>Subject: Re: [sqlite] Multiple Independent Database Instances
>>
>>
>>
>>> On Apr 22, 2019, at 11:39 AM, Lee, Jason 
>wrote:
>>>
>>> Hi. Are there any gotchas when opening multiple independent
>>databases from within one process using the C API?
>>
>>Do you mean different database files, or multiple connections to the
>>same file?
>>
>>> I am opening one database per thread in my code, and noticed that
>>sqlite3_open_v2 and sqlite3_close slow down as the number of threads
>>increase, indicating there might be some resource contention
>>somewhere, even though the databases should be independent of each
>>other.
>>
>>How many databases/threads? With huge numbers I’d expect slowdowns,
>>since you’ll be bottlenecking on I/O.
>>
>>But otherwise there shouldn’t be any gotchas. I would troubleshoot
>>this by profiling the running code to see where the time is being
>>spent. Even without knowledge of the SQLite internals, looking at
>the
>>call stacks of the hot-spots can help identify what the problem is.
>>
>>—Jens
>>___
>>sqlite-users mailing list
>>sqlite-users@mailinglists.sqlite.org
>>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>>___
>>sqlite-users mailing list
>>sqlite-users@mailinglists.sqlite.org
>>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>
>
>
>___
>sqlite-users mailing list
>sqlite-users@ma

Re: [sqlite] Multiple Independent Database Instances

2019-04-22 Thread Simon Slavin
On 22 Apr 2019, at 10:25pm, Lee, Jason  wrote:

> I have a set of several million database files sitting on my filesystem. Each 
> thread will open a previously unprocessed database file, do some queries, 
> close the database, and move on to the next unprocessed database file.

If this process is getting slower and slower, you have a resource leak 
somewhere in your program.  It's possible to make SQLite do this using faulty 
programming.

For instance, you may have a statement that reads a table.  Statements must be 
finalized or reset using sqlite3_finalize() or sqlite3_reset(), and you still 
need to do this even if SQLite returned SQLITE_DONE to tell you there are no 
more rows to return.

If you do not do this, the close cannot release the resources used by the 
file because a statement is still pending.  (Plain sqlite3_close() reports 
SQLITE_BUSY in that case; sqlite3_close_v2() returns SQLITE_OK, waits until 
the statement is terminated, and then automatically closes the file.)  So 
you get a resource leak until that's done.
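
As a minimal sketch of the finalize-before-close pattern (the db handle and 
table name are placeholders):

    sqlite3_stmt *stmt = NULL;
    int rc = sqlite3_prepare_v2(db, "SELECT id FROM entries", -1, &stmt, NULL);
    if (rc == SQLITE_OK) {
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            /* read values with the sqlite3_column_*() family */
        }
        sqlite3_finalize(stmt);  /* still required after SQLITE_DONE */
    }
    rc = sqlite3_close(db);      /* fully succeeds only once nothing is pending */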

You can start analysis by using a monitoring program to monitor memory usage of 
the process.  Does it gradually use more and more memory the longer it runs ?  
If so, it shouldn't be too difficult to figure out what memory isn't being 
released.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple Independent Database Instances

2019-04-22 Thread Lee, Jason
I have a set of several million database files sitting on my filesystem. Each 
thread will open a previously unprocessed database file, do some queries, close 
the database, and move on to the next unprocessed database file.


Jason Lee


From: sqlite-users  on behalf of 
Keith Medcalf 
Sent: Monday, April 22, 2019 3:13:57 PM
To: SQLite mailing list
Subject: Re: [sqlite] Multiple Independent Database Instances


This is somewhat unclear.  You make two conflicting statements:

"I have been testing with 16, 32, and 48 threads/databases at once ..."
and
"time it takes for all of the threads to just open all (millions) of the 
databases"

So, are you:
(a) opening one independently and uniquely named database per thread as would 
be apparent from the first conflicting statement above; or,
(b) opening the same "millions" of databases per thread as indicated by the 
second conflicting statement above

?

Per my testing the time taken to spin up a thread and open a database with a 
unique database name is constant and linear up to the thread limit of a process 
(about 800 threads).  This is even true if you execute SQL against the 
connection within that thread and also count that execution time in the "Time 
Taken".

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Lee, Jason
>Sent: Monday, 22 April, 2019 14:08
>To: SQLite mailing list
>Subject: Re: [sqlite] Multiple Independent Database Instances
>
>Thanks for the quick responses!
>
>
>I am on a machine with many many cores, 500GB RAM, and lots of NVMe
>drives raided together, so the system should not be the issue. I have
>been testing with 16, 32, and 48 threads/databases at once, and the
>cumulative time it takes for all of the threads to just open all
>(millions) of the databases goes from 1200 seconds to 2200 seconds to
>3300 seconds.
>
>
>As mentioned, this is likely to be something else, but I was hoping
>that I was somehow using SQLite wrong.
>
>
>Jason Lee
>
>
>From: sqlite-users  on
>behalf of Jens Alfke 
>Sent: Monday, April 22, 2019 12:52:28 PM
>To: SQLite mailing list
>Subject: Re: [sqlite] Multiple Independent Database Instances
>
>
>
>> On Apr 22, 2019, at 11:39 AM, Lee, Jason  wrote:
>>
>> Hi. Are there any gotchas when opening multiple independent
>databases from within one process using the C API?
>
>Do you mean different database files, or multiple connections to the
>same file?
>
>> I am opening one database per thread in my code, and noticed that
>sqlite3_open_v2 and sqlite3_close slow down as the number of threads
>increase, indicating there might be some resource contention
>somewhere, even though the databases should be independent of each
>other.
>
>How many databases/threads? With huge numbers I’d expect slowdowns,
>since you’ll be bottlenecking on I/O.
>
>But otherwise there shouldn’t be any gotchas. I would troubleshoot
>this by profiling the running code to see where the time is being
>spent. Even without knowledge of the SQLite internals, looking at the
>call stacks of the hot-spots can help identify what the problem is.
>
>—Jens
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple Independent Database Instances

2019-04-22 Thread Keith Medcalf

This is somewhat unclear.  You make two conflicting statements:

"I have been testing with 16, 32, and 48 threads/databases at once ..."
and
"time it takes for all of the threads to just open all (millions) of the 
databases"

So, are you:
(a) opening one independently and uniquely named database per thread as would 
be apparent from the first conflicting statement above; or,
(b) opening the same "millions" of databases per thread as indicated by the 
second conflicting statement above

?

Per my testing the time taken to spin up a thread and open a database with a 
unique database name is constant and linear up to the thread limit of a process 
(about 800 threads).  This is even true if you execute SQL against the 
connection within that thread and also count that execution time in the "Time 
Taken".

---
The fact that there's a Highway to Hell but only a Stairway to Heaven says a 
lot about anticipated traffic volume.

>-Original Message-
>From: sqlite-users [mailto:sqlite-users-
>boun...@mailinglists.sqlite.org] On Behalf Of Lee, Jason
>Sent: Monday, 22 April, 2019 14:08
>To: SQLite mailing list
>Subject: Re: [sqlite] Multiple Independent Database Instances
>
>Thanks for the quick responses!
>
>
>I am on a machine with many many cores, 500GB RAM, and lots of NVMe
>drives raided together, so the system should not be the issue. I have
>been testing with 16, 32, and 48 threads/databases at once, and the
>cumulative time it takes for all of the threads to just open all
>(millions) of the databases goes from 1200 seconds to 2200 seconds to
>3300 seconds.
>
>
>As mentioned, this is likely to be something else, but I was hoping
>that I was somehow using SQLite wrong.
>
>
>Jason Lee
>
>
>From: sqlite-users  on
>behalf of Jens Alfke 
>Sent: Monday, April 22, 2019 12:52:28 PM
>To: SQLite mailing list
>Subject: Re: [sqlite] Multiple Independent Database Instances
>
>
>
>> On Apr 22, 2019, at 11:39 AM, Lee, Jason  wrote:
>>
>> Hi. Are there any gotchas when opening multiple independent
>databases from within one process using the C API?
>
>Do you mean different database files, or multiple connections to the
>same file?
>
>> I am opening one database per thread in my code, and noticed that
>sqlite3_open_v2 and sqlite3_close slow down as the number of threads
>increase, indicating there might be some resource contention
>somewhere, even though the databases should be independent of each
>other.
>
>How many databases/threads? With huge numbers I’d expect slowdowns,
>since you’ll be bottlenecking on I/O.
>
>But otherwise there shouldn’t be any gotchas. I would troubleshoot
>this by profiling the running code to see where the time is being
>spent. Even without knowledge of the SQLite internals, looking at the
>call stacks of the hot-spots can help identify what the problem is.
>
>—Jens
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
>___
>sqlite-users mailing list
>sqlite-users@mailinglists.sqlite.org
>http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users



___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple Independent Database Instances

2019-04-22 Thread Simon Slavin
On 22 Apr 2019, at 9:08pm, Lee, Jason  wrote:

> the cumulative time it takes for all of the threads to just open all 
> (millions) of the databases goes from 1200 seconds to 2200 seconds to 3300 
> seconds.

I'm guessing that it's the number of file handles which increases.  Most OSes 
maintain a linked list of file buffer metadata.  Opening the thousandth file 
means walking past a thousand entries of that metadata.  Use 'lsof' and 
check the output.
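
For example (substitute the process id of your program):

    lsof -p <pid> | wc -l    # rough count of open file handles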

You might try to explore this problem by writing a program to open the same 
number of text files at once.  See whether that program gets slower similarly 
to the SQLite one.

The exception would be if you're using the C API for SQLite without a library 
suited to your programming language.  The C interface to SQLite does not 
actually open a file just because you told it to.  The file is actually opened 
the first time SQLite needs to do something to it.  So a million uses of 
sqlite3_open_v2 might still do no file access.
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Please help me fix the SQLite Git mirror

2019-04-22 Thread Carl Edquist


I would like to better understand how rewiring the refs this way 
constitutes "changing history".  The refs/heads entries are all 
ephemeral - they are constantly changing on their own, and no historical 
record of their past values is retained.


The key bit here is that in git, every commit references its parent commit 
hash, and thus implies the entire past history for that commit.  And local 
git repos contain that entire history.  So when a branch head is updated 
upstream, git can tell if history has moved forward (i.e., the new head 
commit contains the previous head commit in its history), and if not, it's 
considered a "forced update", which is to say, history was re-written -- 
since the last previously known state of history is no longer part of 
history.


Carl

On Mon, 22 Apr 2019, Carl Edquist wrote:


Hi Richard,

As Jonathan mentioned, in git land, if you have already published a 
"mistake" commit publicly, the proper way to revert it is to make another 
commit to reverse/undo the change.


By removing a commit from the public history of the published 'master' 
branch, it forces everyone downstream to manually fix their history.


If they do a normal "git pull", git will attempt to merge their master 
(the mistake commit) with the latest upstream master, which is not 
actually your intention.


But if you make a "revert" commit to undo the change, history will 
continue forward for the master branch from the downstream perspective.




I fixed the recent breakage of the SQLite Git mirror as follows:

(1) cd into the refs/heads directory
(2) run "cat master >mistake"
(3) run "echo a9a5465eb44d0d8f1c3c9d288b7f23f628ddb50b >master"
(4) run "git push --mirror https://github.com/sqlite/sqlite.git"


Not that you want to do it this way again if you can avoid it, but the 
safe git way to do (2),(3) is:


2) git update-ref refs/heads/mistake refs/heads/master
3) git update-ref refs/heads/master a9a5465eb44d0d8f1c3c9d288b7f23f628ddb50b


Carl

On Mon, 22 Apr 2019, Richard Hipp wrote:


Thanks for the help.  See additional questions and remarks below

On 4/22/19, Jonathan Brandmeyer  wrote:

```
# Construct the matching branch name
git branch mistake 9b888fc
# Push the name alone to the remote
git push -u origin mistake
# Move the name of master
git checkout master && git reset --hard 
# Push the new name of master
git push --force
```

Git reset --hard will move the name of the current working branch to
another branch SHA, which is why you need to first check out the
branch being moved: It's context-sensitive.  You are re-writing
history, though.


I don't understand this part.  From the Fossil perspective, moving a
check-in from one branch to another is just adding a new tag to that
check-in.  No history is changed.  The DAG of check-ins (the
block-chain) is unmodified.

Subsequent to your message, I fixed the recent breakage of the SQLite
Git mirror as follows:

(1) cd into the refs/heads directory
(2) run "cat master >mistake"
(3) run "echo a9a5465eb44d0d8f1c3c9d288b7f23f628ddb50b >master"
(4) run "git push --mirror https://github.com/sqlite/sqlite.git"

This was a one-time fix.  I have not yet enhanced the mirroring
mechanism to make this happen automatically, but probably I will soon.

But before I proceed, I would like to better understand how rewiring
the refs this way constitutes "changing history".  The refs/heads
entries are all ephemeral - they are constantly changing on their own,
and no historical record of their past values is retained.  So if I
modify the refs to synchronize with the canonical Fossil repository,
how is that changing history, exactly?

Any further explanation is appreciated.

--
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple Independent Database Instances

2019-04-22 Thread Lee, Jason
Thanks for the quick responses!


I am on a machine with many many cores, 500GB RAM, and lots of NVMe drives 
raided together, so the system should not be the issue. I have been testing 
with 16, 32, and 48 threads/databases at once, and the cumulative time it takes 
for all of the threads to just open all (millions) of the databases goes from 
1200 seconds to 2200 seconds to 3300 seconds.


As mentioned, this is likely to be something else, but I was hoping that I was 
somehow using SQLite wrong.


Jason Lee


From: sqlite-users  on behalf of 
Jens Alfke 
Sent: Monday, April 22, 2019 12:52:28 PM
To: SQLite mailing list
Subject: Re: [sqlite] Multiple Independent Database Instances



> On Apr 22, 2019, at 11:39 AM, Lee, Jason  wrote:
>
> Hi. Are there any gotchas when opening multiple independent databases from 
> within one process using the C API?

Do you mean different database files, or multiple connections to the same file?

> I am opening one database per thread in my code, and noticed that 
> sqlite3_open_v2 and sqlite3_close slow down as the number of threads 
> increase, indicating there might be some resource contention somewhere, even 
> though the databases should be independent of each other.

How many databases/threads? With huge numbers I’d expect slowdowns, since 
you’ll be bottlenecking on I/O.

But otherwise there shouldn’t be any gotchas. I would troubleshoot this by 
profiling the running code to see where the time is being spent. Even without 
knowledge of the SQLite internals, looking at the call stacks of the hot-spots 
can help identify what the problem is.

—Jens
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Please help me fix the SQLite Git mirror

2019-04-22 Thread Carl Edquist

Hi Richard,

As Jonathan mentioned, in git land, if you have already published a 
"mistake" commit publicly, the proper way to revert it is to make another 
commit to reverse/undo the change.


By removing a commit from the public history of the published 'master' 
branch, it forces everyone downstream to manually fix their history.


If they do a normal "git pull", git will attempt to merge their master 
(the mistake commit) with the latest upstream master, which is not 
actually your intention.


But if you make a "revert" commit to undo the change, history will 
continue forward for the master branch from the downstream perspective.




I fixed the recent breakage of the SQLite Git mirror as follows:

(1) cd into the refs/heads directory
(2) run "cat master >mistake"
(3) run "echo a9a5465eb44d0d8f1c3c9d288b7f23f628ddb50b >master"
(4) run "git push --mirror https://github.com/sqlite/sqlite.git"


Not that you want to do it this way again if you can avoid it, but the 
safe git way to do (2),(3) is:


2) git update-ref refs/heads/mistake refs/heads/master
3) git update-ref refs/heads/master a9a5465eb44d0d8f1c3c9d288b7f23f628ddb50b


Carl

On Mon, 22 Apr 2019, Richard Hipp wrote:


Thanks for the help.  See additional questions and remarks below

On 4/22/19, Jonathan Brandmeyer  wrote:

```
# Construct the matching branch name
git branch mistake 9b888fc
# Push the name alone to the remote
git push -u origin mistake
# Move the name of master
git checkout master && git reset --hard 
# Push the new name of master
git push --force
```

Git reset --hard will move the name of the current working branch to
another branch SHA, which is why you need to first check out the
branch being moved: It's context-sensitive.  You are re-writing
history, though.


I don't understand this part.  From the Fossil perspective, moving a
check-in from one branch to another is just adding a new tag to that
check-in.  No history is changed.  The DAG of check-ins (the
block-chain) is unmodified.

Subsequent to your message, I fixed the recent breakage of the SQLite
Git mirror as follows:

(1) cd into the refs/heads directory
(2) run "cat master >mistake"
(3) run "echo a9a5465eb44d0d8f1c3c9d288b7f23f628ddb50b >master"
(4) run "git push --mirror https://github.com/sqlite/sqlite.git"

This was a one-time fix.  I have not yet enhanced the mirroring
mechanism to make this happen automatically, but probably I will soon.

But before I proceed, I would like to better understand how rewiring
the refs this way constitutes "changing history".  The refs/heads
entries are all ephemeral - they are constantly changing on their own,
and no historical record of their past values is retained.  So if I
modify the refs to synchronize with the canonical Fossil repository,
how is that changing history, exactly?

Any further explanation is appreciated.

--
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users

___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Please help me fix the SQLite Git mirror

2019-04-22 Thread Jonathan Brandmeyer
On Mon, Apr 22, 2019 at 12:22 PM Richard Hipp  wrote:

> But before I proceed, I would like to better understand how rewiring
> the refs this way constitutes "changing history".  The refs/heads
> entries are all ephemeral - they are constantly changing on their own,
> and no historical record of their past values is retained.  So if I
> modify the refs to synchronize with the canonical Fossil repository,
> how is that changing history, exactly?
>

Certainly no new SHAs were created, so this is a much less obvious
re-write than if you had performed a rebase of some kind.
Nonetheless, I claim that this constitutes rewriting history because
it has a similar impact to downstream users.  Some user-visible
symptoms, after a user had already synchronized to the master which
was later abandoned:

- From a context of master, `git pull` alone would construct a merge
commit between the abandoned branch and the new master.  `git pull
--ff-only` would fail.

- From a context of a custom patch series, `git rebase master` has
unexpected effects, in that it also rebases the mistake you tried to
orphan.

- `git fetch` shows a forced update to origin/master.

- A user who was using a merge-based workflow and had merged to your
mistake branch would have a rough time following the change in branch
name.  One method would be to construct a new merge to the newly
corrected master and then rebase any of their subsequent changes onto
the new merge commit.  Their workflow is no longer strictly
merge-based and they still have to deal with the impacts of re-writing
their history.  Alternatively, they could construct the inverse of the
mistake via `git revert` onto their own working branch and then merge
again against the new master.

These user-visible impacts and the recovery actions are almost the
same as what a Git user would see if you had initially constructed (A,
B, C, D) and re-written it to be (A, C', D') instead via a rebase.

IMO, the proper corrective action after pushing the commit with a
mistake in it would have been to commit the inverse of the mistake and
then merge it to the alternate path.  Yes, it would have constructed a
merge commit in the history, which is unfortunate when you are trying
to maintain a clean and linear history.  But the impact to downstream
users would have been negligible.  `git pull --ff-only` would have
Just Worked, `git rebase master` from a patch series would have Just
Worked, and a merge-based workflow would have Just Worked, too.

--
Jonathan Brandmeyer
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple Independent Database Instances

2019-04-22 Thread Jens Alfke


> On Apr 22, 2019, at 11:39 AM, Lee, Jason  wrote:
> 
> Hi. Are there any gotchas when opening multiple independent databases from 
> within one process using the C API?

Do you mean different database files, or multiple connections to the same file?

> I am opening one database per thread in my code, and noticed that 
> sqlite3_open_v2 and sqlite3_close slow down as the number of threads 
> increase, indicating there might be some resource contention somewhere, even 
> though the databases should be independent of each other.

How many databases/threads? With huge numbers I’d expect slowdowns, since 
you’ll be bottlenecking on I/O.

But otherwise there shouldn’t be any gotchas. I would troubleshoot this by 
profiling the running code to see where the time is being spent. Even without 
knowledge of the SQLite internals, looking at the call stacks of the hot-spots 
can help identify what the problem is.

—Jens
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Multiple Independent Database Instances

2019-04-22 Thread Simon Slavin
On 22 Apr 2019, at 7:39pm, Lee, Jason  wrote:

> Hi. Are there any gotchas when opening multiple independent databases from 
> within one process using the C API? I am opening one database per thread in 
> my code, and noticed that sqlite3_open_v2 and sqlite3_close slow down as the 
> number of threads increase, indicating there might be some resource 
> contention somewhere, even though the databases should be independent of each 
> other.

SQLite is designed to cope with the opening of many different databases at the 
same time.  SQLite does not maintain its own list of open connections, so it 
doesn't have to iterate through the list each time you execute another command.

Those routines will get slower, since your operating system keeps a list of 
open files and has to walk through the list.  But that should be tiny and 
unnoticeable in any modern OS.

Can you give us any numbers as examples ?  I presume your setup gives you one 
thread per database.  Are you sure that this slowdown is not just the result of 
some OS function, i.e. you've run out of real memory and the OS has to keep 
swapping every time you switch to another thread ?
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] SQLITE_MAX_MMAP_SIZE 2GB default

2019-04-22 Thread Jens Alfke


> On Apr 19, 2019, at 12:46 PM, Carl Edquist  wrote:
> 
> For instance - if you have a 30GB db file on a 64bit system with <= 2GB ram, 
> you can still mmap the whole file, and benefit from that mmap.  If the 
> portion of the db that gets used for a query fits within the available 
> pagecache ram, it's a clear win.  (It's not like the whole file automatically 
> gets read from disk into the pagecache for the mmap.)

Oops, you’re right. I somehow lost sight of the obvious when replying…

> But even if the whole file is used for the query (that is, more than fits 
> into pagecache/ram), it still has the benefit of avoiding the system calls 
> for the file seeks/reads.  (Either way the kernel needs to swap disk pages 
> into/out of of the pagecache.)

Sort of. Most current OSs have a universal buffer cache, wherein filesystem 
caches and VM pages use the same RAM buffers. A page-fault and a file read will 
incur similar amounts of work. The big benefit is that the memory-mapped pages 
can be evicted from RAM when needed for other stuff, whereas a malloc-ed page 
cache is considered dirty and has to be swapped out before the RAM page can be 
reused. (That said, I am not a kernel or filesystem guru so I am probably 
oversimplifying.)

But yeah, I agree with you that it seems odd to have a compiled-in restriction 
on the maximum memory-map size.

—Jens
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


[sqlite] Multiple Independent Database Instances

2019-04-22 Thread Lee, Jason
Hi. Are there any gotchas when opening multiple independent databases from 
within one process using the C API? I am opening one database per thread in my 
code, and noticed that sqlite3_open_v2 and sqlite3_close slow down as the 
number of threads increase, indicating there might be some resource contention 
somewhere, even though the databases should be independent of each other.


Jason Lee
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Please help me fix the SQLite Git mirror

2019-04-22 Thread Richard Hipp
Thanks for the help.  See additional questions and remarks below

On 4/22/19, Jonathan Brandmeyer  wrote:
> ```
> # Construct the matching branch name
> git branch mistake 9b888fc
> # Push the name alone to the remote
> git push -u origin mistake
> # Move the name of master
> git checkout master && git reset --hard 
> # Push the new name of master
> git push --force
> ```
>
> Git reset --hard will move the name of the current working branch to
> another branch SHA, which is why you need to first check out the
> branch being moved: It's context-sensitive.  You are re-writing
> history, though.

I don't understand this part.  From the Fossil perspective, moving a
check-in from one branch to another is just adding a new tag to that
check-in.  No history is changed.  The DAG of check-ins (the
block-chain) is unmodified.

Subsequent to your message, I fixed the recent breakage of the SQLite
Git mirror as follows:

(1) cd into the refs/heads directory
(2) run "cat master >mistake"
(3) run "echo a9a5465eb44d0d8f1c3c9d288b7f23f628ddb50b >master"
(4) run "git push --mirror https://github.com/sqlite/sqlite.git"

This was a one-time fix.  I have not yet enhanced the mirroring
mechanism to make this happen automatically, but probably I will soon.

But before I proceed, I would like to better understand how rewiring
the refs this way constitutes "changing history".  The refs/heads
entries are all ephemeral - they are constantly changing on their own,
and no historical record of their past values is retained.  So if I
modify the refs to synchronize with the canonical Fossil repository,
how is that changing history, exactly?

Any further explanation is appreciated.

-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Please help me fix the SQLite Git mirror

2019-04-22 Thread Jonathan Brandmeyer
```
# Construct the matching branch name
git branch mistake 9b888fc
# Push the name alone to the remote
git push -u origin mistake
# Move the name of master
git checkout master && git reset --hard 
# Push the new name of master
git push --force
```

Git reset --hard will move the name of the current working branch to
another branch SHA, which is why you need to first check out the
branch being moved: It's context-sensitive.  You are re-writing
history, though.  It shouldn't construct any new SHAs, but the impact
on a downstream user's workflow is rough.  Once it got published to
public git the least impactful way forward would be to construct the
inverse of the mistake and push that as its own commit instead of
orphaning it.  `git revert` does this in git-land.
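
For example, a sketch (9b888fc being the mistaken commit from this thread):

```
git revert 9b888fc      # new commit that inverts the mistake
git push origin master  # history only moves forward; no forced update
```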

If I'm maintaining some patches against your master, then my normal
workflow might be to rebase them against the current master every once
in a while, with just `git rebase master`.  If I did that once to
rebase against the SHA which was at the time named `master`, and then
invoke `git rebase master` again after your change to history, then
the second rebase will also attempt to rebase your mistake onto the
corrected master.  Users would need to perform a one-time `git rebase
--onto master mistake ` instead.


-- 
Jonathan Brandmeyer
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Please help me fix the SQLite Git mirror

2019-04-22 Thread Richard Hipp
On 4/22/19, Jeffrey Schiller  wrote:
> So if I understand you correctly, you just want to make "master" point to a
> particular known commit. To do this, you can issue the commands (in a local
> copy):
>
> git branch -m master oldmaster # Move it out of the way
> git branch master 4f35b3b7
>

This is to be done via automation.  I don't want to have to write,
debug, and test the code to detect whether or not there is an existing
"master" that needs to be moved out of the way.  I'd rather do the
equivalent of REPLACE INTO - overwriting the existing ref if it exists
and creating a new one if it does not.  How might that be done in Git?
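
(As a sketch, `git update-ref` has exactly that create-or-overwrite
behaviour -- it updates the ref if it exists and creates it if not;
the refs and SHA here are the ones from this thread:)

git update-ref refs/heads/mistake refs/heads/master
git update-ref refs/heads/master a9a5465eb44d0d8f1c3c9d288b7f23f628ddb50b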


-- 
D. Richard Hipp
d...@sqlite.org
___
sqlite-users mailing list
sqlite-users@mailinglists.sqlite.org
http://mailinglists.sqlite.org/cgi-bin/mailman/listinfo/sqlite-users


Re: [sqlite] Please help me fix the SQLite Git mirror

2019-04-22 Thread Jeffrey Schiller
So if I understand you correctly, you just want to make "master" point to a
particular known commit. To do this, you can issue the commands (in a local
copy):

git branch -m master oldmaster # Move it out of the way
git branch master 4f35b3b7

Then do a "git push -f origin master" (assuming that the github repo is
defined as "origin", replace that with whatever name you use for the remote
if it isn't origin).

-Jeff

On Mon, Apr 22, 2019 at 12:05 PM Richard Hipp  wrote:

> The Git mirror of SQLite found at https://github.com/sqlite/sqlite is
> busted.  I don't know how to fix it and would appreciate advice from
> people who have more experience with Git internals.
>
> To describe the problem, consider this excerpt from the check-in
> sequence for SQLite:
>
> https://www.sqlite.org/src/timeline?d=6de980a09c3a7adf&n=5
>
> Notes to Git-ers:  (1) Graph arrows in Fossil point forwards in time,
> not backwards as Git does.  In other words, the arrows point from
> parent to child, not from child to parent.  (2) The main branch is
> called "trunk" in Fossil instead of "master".  The name is changed
> automatically during the mirroring process.
>
> What happened here is that the 9b888fcc check-in was originally on
> trunk/master.  But after it was checked in, I discovered a problem
> with it.  So I diverted that check-in off into the "mistake" branch
> (which you can do in Fossil by adding a special tag.)  Then the
> check-in sequence for trunk/master continued with 6cf16703 and
> 4f35b3b7 and so forth.
>
> The problem is that Git now thinks that 9b888fcc is the HEAD of master
> and that the true continuation of master (check-in 4f35b3b7 and
> beyond) are disconnected check-ins, awaiting garbage collection.
> There is no "ref" pointing to the HEAD of the true continuation.
>
> I think what I need to do is change refs/heads/master to point to
> 4f35b3b7 (or whatever check-ins come afterwards - the snippet shown is
> not the complete graph).  Then create a new entry refs/heads/mistake
> that points to 9b888fcc.
>
> Question 1:  Does my analysis seem correct?  Or have I misinterpreted
> the malfunction?
>
> Question 2:  Assuming that my analysis is correct, what is the
> preferred way of rewiring the refs in Git?
>
> --
> D. Richard Hipp
> d...@sqlite.org


[sqlite] Please help me fix the SQLite Git mirror

2019-04-22 Thread Richard Hipp
The Git mirror of SQLite found at https://github.com/sqlite/sqlite is
busted.  I don't know how to fix it and would appreciate advice from
people who have more experience with Git internals.

To describe the problem, consider this excerpt from the check-in
sequence for SQLite:

https://www.sqlite.org/src/timeline?d=6de980a09c3a7adf&n=5

Notes to Git-ers:  (1) Graph arrows in Fossil point forwards in time,
not backwards as Git does.  In other words, the arrows point from
parent to child, not from child to parent.  (2) The main branch is
called "trunk" in Fossil instead of "master".  The name is changed
automatically during the mirroring process.

What happened here is that the 9b888fcc check-in was originally on
trunk/master.  But after it was checked in, I discovered a problem
with it.  So I diverted that check-in off into the "mistake" branch
(which you can do in Fossil by adding a special tag.)  Then the
check-in sequence for trunk/master continued with 6cf16703 and
4f35b3b7 and so forth.

The problem is that Git now thinks that 9b888fcc is the HEAD of master
and that the true continuation of master (check-in 4f35b3b7 and
beyond) are disconnected check-ins, awaiting garbage collection.
There is no "ref" pointing to the HEAD of the true continuation.

I think what I need to do is change refs/heads/master to point to
4f35b3b7 (or whatever check-ins come afterwards - the snippet shown is
not the complete graph).  Then create a new entry refs/heads/mistake
that points to 9b888fcc.

Question 1:  Does my analysis seem correct?  Or have I misinterpreted
the malfunction?

Question 2:  Assuming that my analysis is correct, what is the
preferred way of rewiring the refs in Git?
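
One low-risk way to sanity-check that analysis, run in the mirror
repository itself (a fresh clone would not have fetched the
unreferenced check-ins):

```
# Show every ref and the commit it points at
git show-ref
# List branches containing 4f35b3b7; empty output confirms that
# no ref currently reaches the true continuation
git branch --contains 4f35b3b7
```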

-- 
D. Richard Hipp
d...@sqlite.org


Re: [sqlite] SQLite error while fetching the data from a table

2019-04-22 Thread Tommy Lane
Hi Ananta,

> Hi All,
>
> Need quick help to resolve an issue I am getting now.
> I am a new user of SQLite.
>
> my code:
> connection =
> DriverManager.getConnection("jdbc:sqlite:C:\\sqllite\\sqlite-tools-win32-x86-328\\Stories.db");
> Statement st = connection.createStatement();
> ResultSet b = st.executeQuery("select count(*) from stories;");

Have you tried running the query using the CLI on Stories.db?

sqlite3 'path-to-Stories.db'

sqlite> select count(*) from stories;

What output do you get?
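
If that works, a further check worth doing in the same session
(`.tables` lists every table in the open database; the path mirrors
the one in your JDBC URL):

```
sqlite3 C:\sqllite\sqlite-tools-win32-x86-328\Stories.db
sqlite> .tables
sqlite> select count(*) from stories;
```

An empty `.tables` listing would suggest the JDBC connection is opening
(and silently creating) a different, empty database file.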



-Tommy


Re: [sqlite] "Table Not Found" when multiple reader processes written in C++ accessing the same DB file in Ubuntu

2019-04-22 Thread Simon Slavin
On 22 Apr 2019, at 3:19pm, Polly Tang  wrote:

> I have an urgent issue with multiple reader processes in C++ accessing the 
> same DB file in Ubuntu and all reader experience "Table Not Found".

/All/ say "Table Not Found" ?  Including if you open just one reader process ?  
Are you sure your processes are opening the file with your database in, and not 
a different, blank, file ?

Please set a timeout for each connection by executing this command

PRAGMA busy_timeout = 10000

before you execute your SELECT command.  Does that change things ?
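
For reference, a minimal way to try the same thing from the sqlite3
shell (the path and table name are placeholders for yours):

```
sqlite3 /path/to/your.db
sqlite> PRAGMA busy_timeout = 10000;
sqlite> SELECT * FROM mytable WHERE id = 1;
```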

Also, you are not checking the result codes returned by 
"sqlite3pp::transaction".  Check them and make sure they equal 0.  I have no 
idea how to do that in the library you're using.


[sqlite] "Table Not Found" when multiple reader processes written in C++ accessing the same DB file in Ubuntu

2019-04-22 Thread Polly Tang
Hi :

I have an urgent issue with multiple reader processes in C++ accessing the same 
DB file in Ubuntu, and all readers experience "Table Not Found". Do I need 
special configuration in Sqlite3 to make it accessible by different processes ? 
This is a DB file created and stored inside a file directory. My objects from 
different processes try to access the same database file using :



    sqlite3pp::database db(DB_NAME);   // the name of the database file
    {
        sqlite3pp::transaction xct(db, true);  // allow multiple readers
        {
            query = "SELECT * from mytable where id = 1;";

Then there will be an error saying that there is no such table. I am not sure 
whether I need to specify anything in the configuration to make it accessible 
by multiple reader processes, such as mmap ?

Your help is much appreciated. Thanks!


Re: [sqlite] SQLite error while fetching the data from a table

2019-04-22 Thread Luuk


On 22-4-2019 14:03, Ananta Jena wrote:

> Hi All,
>
> Need quick help to resolve an issue I am getting now.
> I am a new user of SQLite.
>
> my code:
> connection =
> DriverManager.getConnection("jdbc:sqlite:C:\\sqllite\\sqlite-tools-win32-x86-328\\Stories.db");
> Statement st = connection.createStatement();
> ResultSet b = st.executeQuery("select count(*) from stories;");
>
> Note: Connection is established successfully, and table STORIES has 1
> record as well.
>
> While executing this, I am getting the below error:
> java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (no
> such table: stories)



The error says: "SQL error or missing database"

On the next line is: "no such table: stories"

Conclusion: there must be something wrong in the code you did not post.

(But I'm not a Java programmer.)



[sqlite] SQLite error while fetching the data from a table

2019-04-22 Thread Ananta Jena
Hi All,

Need quick help to resolve an issue I am getting now.
I am a new user of SQLite.

my code:
connection =
DriverManager.getConnection("jdbc:sqlite:C:\\sqllite\\sqlite-tools-win32-x86-328\\Stories.db");
 Statement st = connection.createStatement();
 ResultSet b = st.executeQuery("select count(*) from stories;");

Note: Connection is established successfully, and table STORIES has 1
record as well.

While executing this, I am getting the below error:
java.sql.SQLException: [SQLITE_ERROR] SQL error or missing database (no
such table: stories)
at org.sqlite.DB.newSQLException(DB.java:383) ~[sqlite-jdbc-3.7.2.jar:na]
at org.sqlite.DB.newSQLException(DB.java:387) ~[sqlite-jdbc-3.7.2.jar:na]
at org.sqlite.DB.throwex(DB.java:374) ~[sqlite-jdbc-3.7.2.jar:na]
at org.sqlite.NativeDB.prepare(Native Method) ~[sqlite-jdbc-3.7.2.jar:na]
at org.sqlite.DB.prepare(DB.java:123) ~[sqlite-jdbc-3.7.2.jar:na]
at org.sqlite.Stmt.executeQuery(Stmt.java:121) ~[sqlite-jdbc-3.7.2.jar:na]
at
com.sabre.ngp.devx.storyeditor.service.StoryEditorServiceImpl.getData(StoryEditorServiceImpl.java:49)
~[classes/:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
~[na:1.8.0_91]
at
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
~[na:1.8.0_91]
at
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
~[na:1.8.0_91]
at java.lang.reflect.Method.invoke(Method.java:498) ~[na:1.8.0_91]
at
org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221)
~[spring-web-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at
org.springframework.web.method.support.InvocableHandlerMethod.invokeForRequest(InvocableHandlerMethod.java:136)
~[spring-web-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at
org.springframework.web.servlet.mvc.method.annotation.ServletInvocableHandlerMethod.invokeAndHandle(ServletInvocableHandlerMethod.java:114)
~[spring-webmvc-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.invokeHandlerMethod(RequestMappingHandlerAdapter.java:827)
~[spring-webmvc-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at
org.springframework.web.servlet.mvc.method.annotation.RequestMappingHandlerAdapter.handleInternal(RequestMappingHandlerAdapter.java:738)
~[spring-webmvc-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at
org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
~[spring-webmvc-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at
org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:963)
~[spring-webmvc-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at
org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:897)
~[spring-webmvc-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at
org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:970)
~[spring-webmvc-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at
org.springframework.web.servlet.FrameworkServlet.doGet(FrameworkServlet.java:861)
~[spring-webmvc-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:622)
~[tomcat-embed-core-8.5.5.jar:8.5.5]
at
org.springframework.web.servlet.FrameworkServlet.service(FrameworkServlet.java:846)
~[spring-webmvc-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at javax.servlet.http.HttpServlet.service(HttpServlet.java:729)
~[tomcat-embed-core-8.5.5.jar:8.5.5]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:230)
~[tomcat-embed-core-8.5.5.jar:8.5.5]
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
~[tomcat-embed-core-8.5.5.jar:8.5.5]
at org.apache.tomcat.websocket.server.WsFilter.doFilter(WsFilter.java:52)
~[tomcat-embed-websocket-8.5.5.jar:8.5.5]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
~[tomcat-embed-core-8.5.5.jar:8.5.5]
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
~[tomcat-embed-core-8.5.5.jar:8.5.5]
at
org.springframework.web.filter.RequestContextFilter.doFilterInternal(RequestContextFilter.java:99)
~[spring-web-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at
org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
~[spring-web-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:192)
~[tomcat-embed-core-8.5.5.jar:8.5.5]
at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:165)
~[tomcat-embed-core-8.5.5.jar:8.5.5]
at
org.springframework.web.filter.HttpPutFormContentFilter.doFilterInternal(HttpPutFormContentFilter.java:89)
~[spring-web-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at
org.springframework.web.filter.OncePerRequestFilter.doFilter(OncePerRequestFilter.java:107)
~[spring-web-4.3.3.RELEASE.jar:4.3.3.RELEASE]
at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterCh