Thanks to all for suggestions.
> My guesses:
> - Your unix_time values are not successive. In this case your first fast
> results are due to advantages of memory caching. The following slowness is
> the result of the internal fragmentation
All unix_time values should be successive, but in the par
On Thu, Feb 9, 2012 at 2:08 AM, Steinar Midtskogen wrote:
>
> When I build my database from scratch using millions of inserts, one
> table causes problems. Inserts get slower and slower. I have about
> 830,000 inserts for that table. It gets to 300,000 pretty fast, but
> then it gets slower and
On 8 Feb 2012, at 10:22pm, Oliver Peters wrote:
> It's the primary key that you're using, because for every INSERT it is
> checked whether unix_time is already present in a record.
>
> So the question is if you really need unix_time as a PK
If you're batching your INSERTs up into transactions, try doi
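The batching advice above can be sketched as follows. This is a hedged example using Python's stdlib sqlite3 module; the table and column names (`readings`, `unix_time`, `val`) are illustrative, not taken from the thread. The point is that one transaction per batch means one fsync per batch instead of one per row.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (unix_time INTEGER PRIMARY KEY, val REAL)")

# Successive timestamps, 5-minute spacing (illustrative data).
rows = ((1325376000 + 300 * i, float(i)) for i in range(10000))

# "with conn" opens a transaction and commits on success, so all 10,000
# INSERTs share a single commit rather than paying per-row commit cost.
with conn:
    conn.executemany("INSERT INTO readings (unix_time, val) VALUES (?, ?)", rows)

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)  # 10000
```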
On 08.02.2012 23:08, Steinar Midtskogen wrote:
Hello
I'm having trouble with one table in my database.
When I build my database from scratch using millions of inserts, one
table causes problems. Inserts get slower and slower. I have about
830,000 inserts for that table. It gets to 300,000
Hello
I'm having trouble with one table in my database.
When I build my database from scratch using millions of inserts, one
table causes problems. Inserts get slower and slower. I have about
830,000 inserts for that table. It gets to 300,000 pretty fast, but
then it gets slower and slower, an
On 08.02.2012 00:57 CE(S)T, Simon Slavin wrote:
> If most of the time really is spent reading from the file, then it
> may not be worth parallelising your code. All your data is on the
> same disk and you can read only one sector of the disk at a time.
Most of the file should be in the file syste
On 08/02/12 01:47, Steinar Midtskogen wrote:
> OK. But then I won't have an SQL interface for accessing interpolated
> data. It's acceptable. An SQL interface is mostly a "nice to have",
> so I could do some quick queries in SQL instead of having t
On 07.02.2012 23:36 CE(S)T, Joe Mistachkin wrote:
> That issue was fixed prior to release 1.0.77.0, here:
>
> http://system.data.sqlite.org/index.html/info/13a3981ec0
>
> If possible, I recommend using the latest released version, 1.0.79.0.
Thanks, that did help. :-)
--
Yves Goergen "LonelyPix
On 02/09/2012 12:49 AM, George Ionescu wrote:
Hello Dan,
yes, I thought of that. But wouldn't this break the snippet's function?
If the tokenizer will return text without diacritics, wouldn't the snippet
return the same?
Should be OK. The snippet should be based on the original content.
On Wed, Feb 8, 2012 at 10:35 AM, Tim Leland wrote:
> Hey all,
>
>
>
> I'm trying to find a way to connect an AS400 to a SQLite database and import
> the data into DB2/physical files. Does anyone have any experience with
> something similar to this? I found this site and I'm not sure if this will do
>
I will try the check; it may get what I want. Thanks!
-Original Message-
From: sqlite-users-boun...@sqlite.org [mailto:sqlite-users-boun...@sqlite.org]
On Behalf Of Igor Tandetnik
Sent: Wednesday, February 08, 2012 12:39 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Question on
Hello Dan,
yes, I thought of that. But wouldn't this break the snippet's function?
If the tokenizer will return text without diacritics, wouldn't the snippet
return the same?
Thanks,
George.
2012/2/8 Dan Kennedy
> On 02/08/2012 11:34 PM, George Ionescu wrote:
>
>> Hello all,
>> I would like to
On 2/8/2012 12:30 PM, Tilsley, Jerry M. wrote:
I'm creating a new database and would like to use triggers on a couple of the
tables. Below are two of the tables I will be working with in the triggers:
CREATE TABLE accounts (mrn char(8), acct char(12), discharge_dt int, Primary
Key(acct), Foreig
All,
I'm creating a new database and would like to use triggers on a couple of the
tables. Below are two of the tables I will be working with in the triggers:
CREATE TABLE accounts (mrn char(8), acct char(12), discharge_dt int, Primary
Key(acct), Foreign Key(mrn) references Patient_Info(mrn));
C
So you want to FTP into the AS400 directly? That makes sense.
All you need to do is download the amalgamation to compile the shell. Just
sqlite3.c and shell.c
Hopefully you already know how to use the C compiler on your AS400.
Then transfer one of your sqlite databases over and ru
We just want to simplify the process. If the PC goes down, our files won't be
converted, and it's just another piece to troubleshoot. The AS400 can compile C,
so that is an option. I don't think anyone has done something like this, and
there is not much info on it.
-Original Message-
From: sqlite-users-
Do you have a C compiler on the AS400 you can use?
I couldn't find anybody doing this so if you want to avoid the PC side you may
be on your own to build the shell.
I assume there's some logical reason for not doing the sql dump on the PC side?
Michael D. Black
Senior Scientist
Advanc
On 02/08/2012 11:34 PM, George Ionescu wrote:
Hello all,
I would like to know how are diacritics handled in FTS, specifically if I
can index text with diacritics and search for terms without them.
For example, given the queries
CREATE VIRTUAL TABLE fts_pages USING fts4(tokenize=snowball ro_RO);
Hello all,
I would like to know how are diacritics handled in FTS, specifically if I
can index text with diacritics and search for terms without them.
For example, given the queries
CREATE VIRTUAL TABLE fts_pages USING fts4(tokenize=snowball ro_RO);
INSERT INTO fts_pages (docid,content) VALUES (
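The snowball tokenizer in the thread's CREATE VIRTUAL TABLE is a third-party extension, so here is a hedged sketch using the built-in unicode61 tokenizer instead. Note that unicode61 was added in SQLite 3.7.13, newer than the 3.7.10 mentioned elsewhere in this digest; it folds many Latin diacritics at index time by default, so accented content can match unaccented query terms. The table name and sample text are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# unicode61 removes common diacritics by default (remove_diacritics=1),
# so 'câteva' is indexed as 'cateva' and the plain query term matches.
conn.execute(
    "CREATE VIRTUAL TABLE fts_pages USING fts4(content, tokenize=unicode61)")
conn.execute("INSERT INTO fts_pages (content) VALUES ('câteva cuvinte')")

hits = conn.execute(
    "SELECT content FROM fts_pages WHERE fts_pages MATCH 'cateva'").fetchall()
print(len(hits))  # 1
```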
Yeah, I just realized that. It's 'SQLLite' with two 'L's.
-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Dan Kennedy
Sent: Wednesday, February 08, 2012 11:11 AM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] Connect As400/Iseries
We are trying to avoid running anything on the PC side. The files are put in
a folder with FTP, and then we want to import them into the AS400. Attached
is the file format.
-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Black,
On 02/08/2012 10:35 PM, Tim Leland wrote:
Hey all,
I'm trying to find a way to connect an AS400 to a SQLite database and import
the data into DB2/physical files. Does anyone have any experience with
something similar to this? I found this site and I'm not sure if this will do
what I'm looking for. h
On Wed, Feb 8, 2012 at 10:57 AM, Timothy Anderson <
t...@systemstechnologyresearch.com> wrote:
> I'm using sqlite 3.7.10 to collect a message log file--basically one
> table, with some metadata (timestamp and message type) plus a blob
> containing the raw message; we use this to play sessions back
Yes, just use the sqlite shell to .dump the database on whatever other
platform you have and copy the file over.
You need to do this in the shell.
.output sqlout.txt
.dump
.quit
If there's nothing crazy in the database it may well be 100% compatible with
db2.
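The .dump recipe above can also be driven programmatically. This is a hedged sketch using the `iterdump()` method of Python's stdlib sqlite3 connection, which yields the same SQL text the shell's .dump command writes; the table and data are illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO t (name) VALUES ('example')")
conn.commit()

# iterdump() yields schema and data as plain SQL statements, one per line,
# which can then be fed to another database's SQL interpreter.
dump_sql = "\n".join(conn.iterdump())
print("CREATE TABLE t" in dump_sql)  # True
```

Whether the resulting SQL imports cleanly into DB2 depends on the schema, as the message above notes: simple types and plain INSERTs travel well, SQLite-specific constructs may not.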
I have a file system that lacks file sharing, and based on responses from this
group, I've implemented a VFS that has all connections using the same file
share a file descriptor for that file.
It seems to me that this has the same effect as a memory mapped file would.
Does this mean that I could
No, we do not need an ongoing connection. The .dump method would be fine, but
does it have to be run on the PC side first? Has anyone tried compiling sqlite3
to run on an AS400?
-Original Message-
From: sqlite-users-boun...@sqlite.org
[mailto:sqlite-users-boun...@sqlite.org] On Behalf Of Black, Micha
I'm using sqlite 3.7.10 to collect a message log file--basically one table,
with some metadata (timestamp and message type) plus a blob containing the raw
message; we use this to play sessions back.
For one of the message types (video frames), the blob size is ~100K; for
everything else, no more
Any reason you can't just .dump your SQLite database and import it into DB2?
Or do you need an ongoing connection?
There's a guy in this thread who connected his Linux system to an AS400 using PHP,
but the .dump/import would be a lot easier.
http://php.net/manual/en/function.odbc-connect.php
Hey all,
I'm trying to find a way to connect an AS400 to a SQLite database and import
the data into DB2/physical files. Does anyone have any experience with
something similar to this? I found this site and I'm not sure if this will do
what I'm looking for. http://www.rpgiv.com/sqllite.html Any help w
Rob Richardson wrote:
> The example of a CREATE TRIGGER statement from the help page is:
>
> CREATE TRIGGER update_customer_address UPDATE OF address ON customers
> BEGIN
>UPDATE orders SET address = new.address WHERE customer_name = old.name;
> END;
>
> The use of BEGIN and END to wrap th
On Wed, Feb 8, 2012 at 9:12 AM, Rob Richardson wrote:
>
> And is it possible to have a conditional statement?
>
> CREATE TRIGGER record_big_order AFTER INSERT ON orders
> BEGIN
>IF new.value > 100 THEN
> INSERT INTO big_orders (customer_name, salesman_id, value)
>
The example of a CREATE TRIGGER statement from the help page is:
CREATE TRIGGER update_customer_address UPDATE OF address ON customers
BEGIN
UPDATE orders SET address = new.address WHERE customer_name = old.name;
END;
The use of BEGIN and END to wrap the statement leads me to believe that
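On the conditional-statement question quoted above: SQLite triggers do not support IF/THEN inside the body; the equivalent is a WHEN clause that guards the whole trigger. A sketch, reusing the thread's record_big_order example (the column names are taken from the quoted question, the schemas around them are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (customer_name TEXT, salesman_id INTEGER, value REAL);
CREATE TABLE big_orders (customer_name TEXT, salesman_id INTEGER, value REAL);

-- No IF/THEN in SQLite trigger bodies; the WHEN clause plays that role,
-- and the BEGIN...END block only runs when the condition holds.
CREATE TRIGGER record_big_order AFTER INSERT ON orders
WHEN new.value > 100
BEGIN
    INSERT INTO big_orders (customer_name, salesman_id, value)
    VALUES (new.customer_name, new.salesman_id, new.value);
END;
""")

conn.execute("INSERT INTO orders VALUES ('Alice', 1, 250.0)")  # fires trigger
conn.execute("INSERT INTO orders VALUES ('Bob', 2, 50.0)")     # does not
print(conn.execute("SELECT customer_name FROM big_orders").fetchall())  # [('Alice',)]
```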
[Kit]
> SELECT a.temp+(b.temp-a.temp)/(b.time-a.time)*(strftime('%s','2012-02-08
> 11:37:00')-a.time) FROM
> (select time, temp FROM tp
> WHERE strftime('%s','2012-02-08 11:37:00')*1>=time
> ORDER BY time DESC LIMIT 1) AS a,
> (select time, temp FROM tp
> WHERE strftime('%s','2012-
[Kevin Martin]
> For the quick and dirty solution, I think you can use something like this to
> create your view. You would need to index time, and even with the index, I'm
> not too sure about speed.
>
> select x1.time, x1.value, x2.time from x as x1 left join x as x2 on
> x2.time=(select max(
On Tue, Feb 7, 2012 at 4:09 PM, Orit Alul wrote:
> Hi,
>
> I'm using freelist_count pragma to test whether my sqlite db needs
> vacuuming.
> I'm running intensive inserts and deletes to the DB very often.
> I run the freelist_count pragma periodically and I get different values
> every time - som
On Tue, Feb 7, 2012 at 11:42 AM, Scholz Maik (CM-AI/PJ-CF42) <
maik.sch...@de.bosch.com> wrote:
> Hi,
>
> I have a problem with my understanding of "UPDATE OR IGNORE" and the
> sqlite3_changes() function.
> My expectation is that changes() gives the number of changed rows.
> When I use "UPDATE
Hi,
I'm using freelist_count pragma to test whether my sqlite db needs vacuuming.
I'm running intensive inserts and deletes to the DB very often.
I run the freelist_count pragma periodically and I get different values every
time - sometimes I get a large number and sometimes I get 0 or 1.
Does this
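The varying numbers are expected: deleted pages land on the freelist and are reused by later inserts, so the count rises and falls with the workload. A hedged sketch of observing this with Python's stdlib sqlite3 (file path and table are illustrative):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x)")
conn.executemany("INSERT INTO t VALUES (?)", [("x" * 500,) for _ in range(2000)])
conn.commit()

# Deleting rows moves their pages onto the freelist; the file does not shrink.
conn.execute("DELETE FROM t")
conn.commit()
freed = conn.execute("PRAGMA freelist_count").fetchone()[0]

# VACUUM rebuilds the file from scratch, so the freelist drops back to zero.
conn.execute("VACUUM")
after = conn.execute("PRAGMA freelist_count").fetchone()[0]
print(freed, after)
```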
Hi,
I have a problem with my understanding of "UPDATE OR IGNORE" and the
sqlite3_changes() function.
My expectation is that changes() gives the number of changed rows.
When I use "UPDATE OR IGNORE" with the same data, so no data change is needed, I
expect that the number of changes is zero.
I a
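The behavior described above is actually as documented: changes() counts the rows the UPDATE statement wrote, not the rows whose values differed, so setting a column to the value it already holds still counts as one change. A small sketch (table and values illustrative; cursor.rowcount in Python's sqlite3 reflects sqlite3_changes):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, x INTEGER)")
conn.execute("INSERT INTO t VALUES (1, 10)")

# The row already holds x = 10, yet the UPDATE still reports one change:
# changes() counts rows written, not rows whose values actually differed.
cur = conn.execute("UPDATE OR IGNORE t SET x = 10 WHERE id = 1")
print(cur.rowcount)  # 1
```

OR IGNORE only suppresses constraint violations; it does not skip writes whose new values equal the old ones.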
2012/2/8 Steinar Midtskogen :
> 1. I'd like to be able to look up any timestamp between the oldest and
> the newest in the database, and if there is no value stored for that
> timestamp, the value given should be an interpolation of the two
> closest. So, if the table has:
>
> 1325376000 (Jan 1 20
For the quick and dirty solution, I think you can use something like this to
create your view. You would need to index time, and even with the index, I'm
not too sure about speed.
select x1.time, x1.value, x2.time from x as x1 left join x as x2 on
x2.time=(select max(time) from x where time Rel
Related to this thread, I wonder if it's possible to create a view
which can give me a value from the row immediately above. E.g. given
the table:
unix_time  | val
-----------+-----
1325376000 | val1
1325376300 | val2
1325376600 | val3
1325376900 | val4
(the first column is a unix timestamp and unique)
can
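The self-join approach Kevin describes can be sketched like this: a view that pairs each row with its immediate successor via a correlated subquery, on top of which a linear interpolation query becomes straightforward. This is a hedged example with Python's stdlib sqlite3; the table name `tp` and column names follow Kit's quoted query, the data is illustrative.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tp (time INTEGER PRIMARY KEY, temp REAL)")
conn.executemany("INSERT INTO tp VALUES (?, ?)",
                 [(1325376000, 1.0), (1325376300, 2.0), (1325376600, 4.0)])

# Pair each row with the immediately following row: the correlated subquery
# finds the smallest time strictly greater than a.time.
conn.execute("""
CREATE VIEW tp_pairs AS
SELECT a.time AS t0, a.temp AS v0, b.time AS t1, b.temp AS v1
FROM tp AS a JOIN tp AS b
  ON b.time = (SELECT MIN(time) FROM tp WHERE time > a.time)
""")

# Linear interpolation at an arbitrary timestamp between two stored rows.
t = 1325376150
row = conn.execute("""
SELECT v0 + (v1 - v0) * (? - t0) / (t1 - t0) FROM tp_pairs
WHERE t0 <= ? AND ? <= t1
""", (t, t, t)).fetchone()
print(row[0])  # 1.5
```

As Kevin warns, an index on time is essential here; even then the per-row subquery may be slow on large tables.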
[Roger Binns]
> I'd recommend you write code in your application first that knows how to
> calculate the values you want. That way you can ensure the calculations
> are correct, you have something for test harnesses that produces "good"
> values and you have something to port to your final soluti