On Fri, Apr 7, 2017 at 3:02 PM, Richard Hipp wrote:
> On 4/7/17, Tony Papadimitriou wrote:
>>
>> sql xxx.db "CREATE TABLE xxx(`time zone`)" "insert into xxx values('1')"
>> sql .dump xxx.db | sql
>>
>
> I'm unable to repro.
>
Is this possibly because the shell that Tony is using is evaluating `time zone` as a command substitution?
On Fri, Mar 3, 2017 at 1:13 PM, Andrew Brown wrote:
>
> Any tips to handle massively multithreaded side by side chunked
> queries on the same database?
In my (limited) experience, it seems that multithreaded SQLite
acquires a lock on a shared, in-memory b-tree every time a
statement is prepared.
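One common mitigation is to give each thread its own connection (and enable WAL so readers never block the writer), rather than sharing one connection whose prepared-statement machinery serializes the threads. A sketch in Python's stdlib sqlite3 — the file name, table, and chunk sizes here are made up for illustration:

```python
import sqlite3
import threading

DB = "app.db"  # hypothetical database file

def setup():
    con = sqlite3.connect(DB)
    con.execute("PRAGMA journal_mode=WAL")  # readers no longer block the writer
    con.execute("CREATE TABLE IF NOT EXISTS items (id INTEGER PRIMARY KEY, data TEXT)")
    con.executemany("INSERT OR IGNORE INTO items VALUES (?, ?)",
                    [(i, f"row {i}") for i in range(40)])
    con.commit()
    con.close()

def worker(ids, out):
    # One connection per thread: each connection prepares and steps its own
    # statements, so threads don't serialize on a shared connection's mutex.
    con = sqlite3.connect(DB)
    try:
        for i in ids:
            row = con.execute("SELECT data FROM items WHERE id = ?", (i,)).fetchone()
            out.append(row[0])
    finally:
        con.close()

def run_chunked(n_threads=4, chunk=10):
    results = []
    threads = [threading.Thread(target=worker,
                                args=(range(t * chunk, (t + 1) * chunk), results))
               for t in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

With WAL on, the read-only chunk queries run concurrently; only writers still queue behind each other.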
I accidentally sent this message to the sqlite-dev mailing list and finally
found this thread to post it in the appropriate location:
So I was having a peruse of SQLite documentation and found this FTS5 branch
in the timeline.
http://www.sqlite.org/src/timeline?n=100&r=fts5
From what I gather,
enising the
results. Technically it'd get the job done but I'd like it to still have the
speed matchinfo has so the proximity ranking isn't waiting on tokenising
documents all the time.
So if it is to be precalculated it will have to be stored in a shadow table
somewhere.
Yeah I had thought about using the byte distance between words but you get
these instances:
[Example A]
|word1|10charword|word2|
[Example B]
|word1|3charword|4charword|3charword|word2|
By using byte distances, both of these score the same, where Example A
should score more highly.
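The tie can be demonstrated with a toy whitespace tokenizer (the helper names are made up; a real ranking function would take positions from the FTS tokenizer rather than split()):

```python
def byte_distance(text, w1, w2):
    # Bytes between the end of w1 and the start of w2.
    start = text.index(w1) + len(w1)
    return text.index(w2) - start

def token_distance(text, w1, w2):
    # Number of whole tokens sitting between w1 and w2.
    toks = text.split()
    return toks.index(w2) - toks.index(w1) - 1

example_a = "word1 abcdefghijkl word2"   # one 12-char word in between
example_b = "word1 abc defg hij word2"   # three short words in between
```

Both examples have a byte gap of 14, so byte_distance cannot tell them apart, while token_distance yields 1 vs 3 and ranks Example A higher, as desired.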
But it would
Ok so I have found the following guides:
http://www.sqlite.org/privatebranch.html
http://www.sqlite.org/src/doc/trunk/README.md
So as far as creating a private branch 'the proper way', this should be
sufficient; but as far as getting the token positions for FTS MATCH results,
any advice on the top
I know that sqlite FTS provides proximity operators with NEAR and I also know
that the querying tokenizer section says you can get a token position
(http://www.sqlite.org/fts3.html#section_8_2).
Although when using the offsets() function in FTS you get the byte position
as opposed to the token position.
urse nobody will complain until
> they realize.
>
> From: [hidden email] [[hidden email]] on behalf of Josh Wilson [[hidden
> email]]
> Sent: Monday, 16 June 2014 04:38
> To: [hidden email]
> Subject: Re: [sqlite] Unicode61 Tokenizer
>
> Righteo thanks for the sa
Righteo thanks for the sanity check that it must be me at fault and that this
is indeed possible without ICU.
I have a separate Xcode project for rolling the latest SQLite amalgamation
and copy that built library out of the Derived Data folder into our main App
project.
It would appear I kept cop
https://bitbucket.org/ottersoftware/fts-diacritic-marks
Ok so the above project successfully includes a build of SQLite v3.7.15.2
without ICU, yet the `unicode61` tokenizer works.
So I tried the same #defines they used with v3.8.5 and still get 'unknown
tokenizer: unicode61'
Has something happene
Thanks Ben.
Ok that makes sense. Normally if you roll the source code yourself there are
no issues since you are taking responsibility for the code being delivered
to the AppStore.
I know that sometimes Apple insta-rejects based upon the symbols in your
binary. Collisions with private APIs makes
g a custom tokenizer that hooks into the allowed
NSString API?
Kind Regards
--
Josh Wilson
___
sqlite-users mailing list
sqlite-users@sqlite.org
http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
I reported this a while ago and forgot about it until today, when I
was doing some debugging and once again got the report of leaked memory.
I'm using the C amalgamation code from 3.7.10 with VStudio 2010, and
always start up my databases setting a temp directory to be used in the
form:
PR
Thanks, but I can't do that because I'm batching up multiple writes in
transactions to get performance. The errors cause the whole transaction
to need to be rolled back.
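One way to keep the batching without losing the whole transaction on a single bad row is a per-statement SAVEPOINT: the outer transaction survives, and only the failing insert is undone. A sketch with a hypothetical table `t` (the connection uses isolation_level=None so the script controls BEGIN/COMMIT itself):

```python
import sqlite3

# isolation_level=None puts the connection in autocommit mode, so the
# explicit BEGIN / SAVEPOINT / COMMIT below are the only transactions.
con = sqlite3.connect(":memory:", isolation_level=None)
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val TEXT)")

def batch_insert(rows):
    ok = 0
    con.execute("BEGIN")
    for r in rows:
        con.execute("SAVEPOINT one_row")        # nested scope for this row only
        try:
            con.execute("INSERT INTO t (id, val) VALUES (?, ?)", r)
            ok += 1
        except sqlite3.IntegrityError:
            con.execute("ROLLBACK TO one_row")  # undo just the failed insert
        con.execute("RELEASE one_row")
    con.execute("COMMIT")                       # the surviving rows land together
    return ok
```

The batch still commits as one write to disk, so the transaction-level performance is preserved.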
On 8/04/2012 11:20 a.m., Igor Tandetnik wrote:
Josh Gibbs wrote:
The method that must be used is as follows
Looking for some performance advice before I go testing this myself.
I'm porting some code that's currently running with SQLite as its DB
engine over to postgres. SQLite will still be an option so I need to
maintain compatibility across both engines. I've run into the common
postgres problem
On 13/11/2011 3:45 a.m., Igor Tandetnik wrote:
Josh Gibbs wrote:
To reiterate from my original question, if we don't add the order by then
the results come back at a very acceptable speed. But of course we'd
then have to iterate through the results ourselves to siphon off the
to
On 13/11/2011 3:37 a.m., Igor Tandetnik wrote:
Josh Gibbs wrote:
The timestamps are already integers. We stumbled across that CAST operation
optimization purely by accident. I don't remember what led to it, but
we found
that it gave a measurable performance boost casting the integer
On 12/11/2011 5:02 p.m., Simon Slavin wrote:
On 12 Nov 2011, at 3:43am, Josh Gibbs wrote:
We are struggling to find a way to rework this query in a way
that performs efficiently for large data sets.
It's all about the indexes.
The problem with this query seems to be related t
e and sender records and this can take hours to
complete the query.
Any ideas would be greatly appreciated.
Thanks, Josh
SELECT
    EMailAddress, COUNT(*) AS Total
FROM
    senders
    INNER JOIN messages ON messages.message_ID = senders.message_ID
    INNER JOIN email_addresses ON senders.email_addre
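Following the "it's all about the indexes" advice, a sketch of what indexing the join and filter columns could look like. The schema here is a guess reconstructed from the truncated query (column names like msg_time and email_address_ID are assumptions), with a tiny dataset to show the shape of the result:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    -- Guessed schema, reconstructed from the truncated query above.
    CREATE TABLE messages (message_ID INTEGER PRIMARY KEY, msg_time INTEGER);
    CREATE TABLE senders (message_ID INTEGER, email_address_ID INTEGER);
    CREATE TABLE email_addresses (email_address_ID INTEGER PRIMARY KEY,
                                  EMailAddress TEXT);

    -- Indexes on every join/filter column so each join step is an index
    -- lookup, not a table scan; the composite senders index also covers
    -- both columns the query touches.
    CREATE INDEX idx_messages_time ON messages(msg_time);
    CREATE INDEX idx_senders ON senders(message_ID, email_address_ID);

    INSERT INTO email_addresses VALUES (1, 'a@example.com'), (2, 'b@example.com');
    INSERT INTO messages VALUES (10, 100), (11, 200), (12, 300);
    INSERT INTO senders VALUES (10, 1), (11, 1), (12, 2);
""")

rows = con.execute("""
    SELECT ea.EMailAddress, COUNT(*) AS Total
    FROM senders s
    JOIN messages m ON m.message_ID = s.message_ID
    JOIN email_addresses ea ON s.email_address_ID = ea.email_address_ID
    WHERE m.msg_time BETWEEN 100 AND 200
    GROUP BY ea.EMailAddress
    ORDER BY Total DESC
""").fetchall()
```

Running EXPLAIN QUERY PLAN on the grouped query before and after creating the indexes is the quickest way to confirm SQLite is actually using them.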
We've just introduced some memory leak detection into our
code and have discovered that this pragma call is not having its
resources cleaned up at shutdown. It's not a critical leak since
it's only called once at program start, but it would be nice to
have the system report zero memory leaks on exit.
Hi all. We're trying to make the following query run faster, and
I'm wondering if anyone has ideas on optimizations we might be able
to do.
The query groups e-mail addresses and gives us the total number of each
address seen within a given time range of messages, ordering from
I would say that couple should be a 3-column table, with coupleID,
partner1ID, partner2ID. It looks like right now you have just coupleID and
partnerID, which doubles the number of rows you have.
On Thu, Feb 24, 2011 at 7:02 AM, Igor Tandetnik wrote:
> Aric Bills wrote:
> > Using the results o
I would create a tagList table (integer tagID, string tagName,
unique(tagName))
Before performing your batch of inserts, query the tagList table (integer
tagID, string tagName), and generate a map (key on tagName, value of tagID).
For each tag you want to insert, see if it exists in the map. If i
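The pattern described above might look like this as a sketch (table and column names taken from the description; the batch/commit placement is an assumption):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tagList (tagID INTEGER PRIMARY KEY, tagName TEXT UNIQUE)")

def tag_ids(tags):
    # Load the existing tagName -> tagID map once, before the batch.
    id_of = dict(con.execute("SELECT tagName, tagID FROM tagList"))
    out = []
    for name in tags:
        if name not in id_of:
            cur = con.execute("INSERT INTO tagList (tagName) VALUES (?)", (name,))
            id_of[name] = cur.lastrowid  # remember the new id for later rows
        out.append(id_of[name])
    con.commit()
    return out
```

One SELECT up front replaces a per-tag lookup query, and the in-memory map catches duplicates within the batch itself.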
Excellent and perfect solution to my problem. Thanks Richard.
On 23/01/2011 2:16 a.m., Richard Hipp wrote:
> On Fri, Jan 21, 2011 at 6:53 PM, Josh Gibbs wrote:
>
>> Could someone please clarify for me if the resulting order of a UNION
>> query will come back with the lef
which should take precedence, such as:
select value from param_overrides where key='setting' UNION
select value from params where key='setting'
I'd like the resulting recordset to always contain the override parameter
first if it exists so I can simply use
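Plain UNION makes no ordering promise, so a portable way to guarantee the override wins is to rank each branch explicitly and sort on the rank. A sketch with made-up data:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE params (key TEXT, value TEXT);
    CREATE TABLE param_overrides (key TEXT, value TEXT);
    INSERT INTO params VALUES ('setting', 'default');
    INSERT INTO param_overrides VALUES ('setting', 'override');
""")

# Tag each branch with an explicit rank; ORDER BY rank LIMIT 1 then yields
# the override row whenever one exists, else the base parameter.
row = con.execute("""
    SELECT value FROM (
        SELECT value, 1 AS rank FROM param_overrides WHERE key = 'setting'
        UNION ALL
        SELECT value, 2 AS rank FROM params WHERE key = 'setting'
    ) ORDER BY rank LIMIT 1
""").fetchone()
```

UNION ALL also skips the duplicate-elimination pass a plain UNION performs, which is a small bonus here.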
Thank you both for the advice, I had never thought to join on the same table
using 3 different names like that, will have to keep that in mind!
On Thu, Jan 13, 2011 at 9:27 PM, Jay A. Kreibich wrote:
> On Thu, Jan 13, 2011 at 01:44:12PM -0600, Josh Marell scratched on the
> wall:
>
>
Hi everyone. I am new to this mailing list, so hopefully I will be able to
present my idea clearly to you:
I have two tables designed as:
Schedule {
date TEXT UNIQUE NOT NULL
problem_set INTEGER
literature INTEGER
research INTEGER}
Presenters {
p_id INTEGER PRIMARY KEY
short_name TEXT UNIQUE NO
AND NOT EXISTS(SELECT 1 FROM message_recipient
> WHERE recipient.recipient_id=
> message_recipient.recipient_id);
>
That SQL statement (with minor corrections) works within an AFTER DELETE
trigger. The key references
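Reconstructed as a runnable sketch (the schema is guessed from the quoted fragment; table and column names may differ from the real one), the trigger could look like:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE message (message_id INTEGER PRIMARY KEY);
    CREATE TABLE recipient (recipient_id INTEGER PRIMARY KEY, email TEXT);
    CREATE TABLE message_recipient (message_id INTEGER, recipient_id INTEGER);

    -- When a message's recipient links are removed, delete any recipient
    -- that is no longer referenced by any remaining message.
    CREATE TRIGGER prune_recipients AFTER DELETE ON message_recipient
    BEGIN
        DELETE FROM recipient
        WHERE recipient.recipient_id = OLD.recipient_id
          AND NOT EXISTS (SELECT 1 FROM message_recipient
                          WHERE message_recipient.recipient_id = OLD.recipient_id);
    END;
""")
```

SQLite triggers fire FOR EACH ROW, so a multi-row DELETE on message_recipient prunes each affected recipient individually.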
On 23/09/2010 11:52 p.m., Richard Hipp wrote:
>> Josh Gibbs wrote:
>>> CREATE TABLE Message (message_id INTEGER PRIMARY KEY AUTOINCREMENT,
>>> Subject TEXT);
>>> CREATE TABLE MessageRecipient (message_id INTEGER REFERENCES
>>> Message(message_i
On 23/09/2010 3:15 p.m., Igor Tandetnik wrote:
> Josh Gibbs wrote:
>> CREATE TABLE Message (message_id INTEGER PRIMARY KEY AUTOINCREMENT,
>> Subject TEXT);
>> CREATE TABLE MessageRecipient (message_id INTEGER REFERENCES
>> Message(message_id) ON DELETE CASCADE, reci
undrum would be greatly appreciated.
Thanks, Josh
Good point. Thanks.
> -BEGIN PGP SIGNED MESSAGE-
> Hash: SHA1
>
> On 09/10/2010 01:11 PM, Josh wrote:
>> A saw the backup API's but they looked overly complicated for my situation.
>
> How so?
>
> There is sample code at this link (see the seco
runsql("sync database somehow?"); // do I need this?
runsql("BEGIN EXCLUSIVE");
copydatabasefile();
runsql("ROLLBACK");
Thanks!
Josh
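For comparison, the backup-API route is only a few lines in Python's stdlib sqlite3 (Connection.backup wraps the sqlite3_backup_* C API), and it takes a consistent snapshot without holding an EXCLUSIVE lock over a raw file copy:

```python
import sqlite3

def snapshot(src_path, dst_path):
    # Copies a transactionally consistent snapshot of src into dst while
    # other connections keep reading and writing src.
    src = sqlite3.connect(src_path)
    dst = sqlite3.connect(dst_path)
    try:
        src.backup(dst)
    finally:
        src.close()
        dst.close()
```

Unlike BEGIN EXCLUSIVE plus a filesystem copy, the backup restarts automatically if the source changes mid-copy, so writers are never locked out for the duration.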
> On Fri, Sep 10, 2010 at 12:09:58PM -0700, Josh scratched on the wall:
>> Hello all,
>>
>> I think this is a simple qu
't any open transactions by going through my
code, but I was just wondering if there is a way for SQLite to tell this
as well)? Thanks for any thoughts.
Josh
g the transactions if there is a power failure.
I know I can have either one of these, but can I have both!?
Josh
>> I'm new to the list and had a question. I know the default behavior for
>> savepoints (or any transactions) is that if they have not been committed, if
>> t
olled back in case of an
explicit rollback statement, not due to program crash/power failure, etc. Does
anyone know of a way of doing this? Thanks!
Josh
I get a segfault using a particular rtree query. Here's the simplest way I
could find to reproduce it.
$ uname -a
Linux odysseus 2.6.18-6-xen-vserver-amd64 #1 SMP Fri Jun 6 07:07:31 UTC 2008
x86_64 GNU/Linux
$ sqlite3 -version
3.6.7
$ cat >bug.sql
create table foo (id integer primary key);
create
On 9/10/08 11:11 AM, "Dennis Cote" <[EMAIL PROTECTED]> wrote:
> Josh Millstein wrote:
>> On 9/9/08 11:46 AM, "Igor Tandetnik" <[EMAIL PROTECTED]> wrote:
>>> Josh Millstein <[EMAIL PROTECTED]>
>>> wrote:
>>>> Is t
>
> On Tue, Sep 9, 2008 at 1:52 PM, P Kishor <[EMAIL PROTECTED]> wrote:
>> On 9/9/08, Josh Millstein <[EMAIL PROTECTED]> wrote:
>>> Hello,
>>>
>>> Is there any way to perform a trim to everything that is entered into a
>>> table
>
The trim before I put data in was based on using a programming language and
not the db language. I want to do it all in SQL syntax.
On 9/9/08 12:52 PM, "P Kishor" <[EMAIL PROTECTED]> wrote:
> On 9/9/08, Josh Millstein <[EMAIL PROTECTED]> wrote:
>> Hello,
>&
On 9/9/08 11:46 AM, "Igor Tandetnik" <[EMAIL PROTECTED]> wrote:
> Josh Millstein <[EMAIL PROTECTED]>
> wrote:
>> Is there any way to perform a trim to everything that is entered into
>> a table instead of trimming before I put data in?
>
> I'm
Hello,
Is there any way to perform a trim to everything that is entered into a table
instead of trimming before I put data in?
Thanks,
Josh
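A pure-SQL answer is an AFTER INSERT trigger that rewrites the row so the stored value is always trim()'d, with no application code involved. A sketch with a hypothetical notes table (an AFTER UPDATE trigger would be needed as well to also cover updates):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE notes (id INTEGER PRIMARY KEY, body TEXT);

    -- Rewrite each freshly inserted row so the stored text is trimmed.
    CREATE TRIGGER trim_body AFTER INSERT ON notes
    BEGIN
        UPDATE notes SET body = trim(NEW.body) WHERE id = NEW.id;
    END;
""")
```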
--
Luca Olivetti wrote:
> Josh Gibbs wrote:
>
>> Hi there,
>>
>> We are having some problems with DB corruption occurring
>> using 3.5.4. I don't know the source of the corruption, however
>> after extensive testing and updating to 3.5.6 in the
the 3.5.x code track?
Thanks, Josh
With MySQL I would simply do:
SHOW COLUMNS FROM `war3users` LIKE 'playerip';
Can I do something similar in SQLite ?
Thanks!!
Josh
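SQLite's analogue of MySQL's SHOW COLUMNS is PRAGMA table_info, which returns one row per column (cid, name, type, notnull, dflt_value, pk). A sketch using the table name from the question:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE war3users (id INTEGER PRIMARY KEY, playerip TEXT)")

# Each row of PRAGMA table_info is (cid, name, type, notnull, dflt_value, pk);
# index 1 is the column name.
cols = [row[1] for row in con.execute("PRAGMA table_info(war3users)")]
has_playerip = "playerip" in cols
```

There is no LIKE filter as in MySQL, but checking membership in the returned names covers the same use case.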
I have read http://www.sqlite.org/sqlite.html
I want this to work:
[C:\test.js]
WS=new ActiveXObject('WScript.Shell')
WS.Run('C:\\sqlite.exe "C:\\test.db" .read "C:\\query.txt" .output
"C:\\OUT.txt" ')
[C:\query.txt] contains...
select * from sqlite_master;
What's up?
I have searched fo