On Wed, 2006-06-07 at 00:03 -0500, David Wollmann wrote:
> Mark Drago wrote:
> > Hello,
> >
> > I'm writing a web cache and I want to use SQLite to store the log of all
> > of the accesses made through the web cache. The idea is to install this
> > web cache
> I have been in contact with the developer of your flash filesystem
> and we are working on a solution now...
That really is excellent news. Thanks for your interest and effort. I look
forward to the solution.
DISCLAIMER:
This information and any attachments contained in this email message
Here is a reply from the author of EFFS:-
"
Hi Mark - changes to the file are built in mirror chains - when the file is
closed or flushed then the new file replaces the old - i.e. the file is
updated atomically. So I think you do not need this roll back.
We take the view that this is how a fai
that journalling is
important with most OS's to prevent possible database corruption but in our
case this is not possible anyhow.
If anybody can help me I will be most grateful.
Mark
ma holds data about things that are internal to
the web cache (profile*, ad*, etc.).
Thank you very much for any ideas,
Mark.
TABLE SCHEMA:
CREATE TABLE log(
log_no integer primary key,
add_dte datetime,
profile_name varchar(255),
workstation_ip integer,
workstation_ip_txt varchar(20),
verdict integer,
Jay is correct here, only use a blob if you do not want to search on that
field. Your example may not be best suited to BLOB but that is for you to
decide.
We store medical test data as a blob. This is not much more complicated than an
array of C structs. However we have a primary key field, a
on this on the Web site and in the
API header file.
At least this is what I have to do to store a blob. We use binary blobs to
store a binary dump of test data which is basically only an array of data.
Hope this helps, but I'm sure someone else will post something more helpful
than me.
Mark
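To make the approach above concrete, here is a minimal sketch of storing a packed binary record as a BLOB, using Python's stdlib sqlite3 and struct modules in place of an array of C structs. The table and field names are invented for illustration:

```python
import sqlite3
import struct

# Pack an array of (timestamp, value) pairs into one binary blob,
# much like dumping an array of C structs to disk.
samples = [(1, 0.5), (2, 0.75), (3, 1.0)]
blob = b"".join(struct.pack("<id", t, v) for t, v in samples)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE test_data(id INTEGER PRIMARY KEY, payload BLOB)")
con.execute("INSERT INTO test_data(payload) VALUES (?)", (blob,))

# Read the blob back and unpack it. The field is opaque to SQL, so it
# cannot be searched on -- exactly the trade-off noted above.
raw = con.execute("SELECT payload FROM test_data").fetchone()[0]
size = struct.calcsize("<id")
decoded = [struct.unpack_from("<id", raw, i * size)
           for i in range(len(raw) // size)]
```

The primary key column still lets you locate a record quickly; only the contents of the blob itself are unsearchable.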
isting thread on this archived at:-
http://www.mail-archive.com/sqlite-users@sqlite.org/msg10818.html
Thanks
Mark
> -Original Message-
> From: Jay Sprenkle [mailto:[EMAIL PROTECTED]
> Sent: 10 May 2006 15:16
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] accurate progress i
the time.
Mark
> -Original Message-
> From: Jay Sprenkle [mailto:[EMAIL PROTECTED]
> Sent: 10 May 2006 14:10
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] accurate progress indication
>
>
> On 5/10/06, Allan, Mark <[EMAIL PROTECTED]> wrote:
> >
&
representation of progress so user
will not think the unit has locked up.
Currently the method specified by Dr Hipp seems to work well for us.
Thanks for all your help and interest.
Mark
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: 09 May 2
but it appears that the problem is
knowing how many opcodes are required to complete the transaction before it is
run.
Has anyone tried something like this before?
Any help will be gratefully received.
Mark
oper locking this should NOT cause a problem - it should
simply serialise the transactional operations (or so I thought).
As it is, I've actually tried to port this to MySQL (using Mysql5 and InnoDB),
but I'm getting some problems there too - I think I'll have to review my use
of transactions etc.
Regards
Mark
00 ms).
What else can I do to prevent this?
If the answer is "nothing", I'm going straight over to MySQL :)
Mark
> But I wonder :
>
> if I have a db ~ 1GB and I delete all the data in the tables (the db is
> then nearly empty),
> issuing a vacuum command takes a long time (several minutes).
> Why ?
> Is there a way to "vacuum" faster ?
We found that vacuuming the database was also slow. We no longer vacuum
Cool thanks,
Mark
On 1/11/06, Kurt Welgehausen <[EMAIL PROTECTED]> wrote:
>
> You may want
>
> WHERE julianday(date('now')) - julianday(date(arrival_date)) > 7
>
> so that time of day isn't part of the comparison; otherwise,
> you're correct.
>
> Regards
>
'
Thanks,
Mark
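Kurt's suggested predicate can be tried directly; a small sketch with an invented visits table (the 'now' comparison means any date more than a week old matches, regardless of time of day):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE visits(arrival_date TEXT)")
con.executemany("INSERT INTO visits VALUES (?)",
                [("2006-01-01",), ("2006-01-09 23:00:00",)])

# Wrapping both sides in date() strips the time of day, so the
# comparison is in whole days, as suggested above.
old = con.execute(
    "SELECT arrival_date FROM visits "
    "WHERE julianday(date('now')) - julianday(date(arrival_date)) > 7"
).fetchall()
```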
Got it, thank you very much all,
Mark
On 1/4/06, Henry Miller <[EMAIL PROTECTED]> wrote:
>
> On Wednesday 04 January 2006 02:54 pm, Mark Wyszomierski wrote:
> > Hi all,
> >
> > I switched to sqlite from mysql awhile ago, I maintained the field types
&
field, it seems to be accepted ok:
insert into students values('hello');
Does sqlite have any problem regarding setting a field defined as INTEGER
from a text string (any limits etc?), are there any performance gains to be
had with specifying the field type?
Thanks,
Mark
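What is happening here is SQLite's type affinity: declaring a column INTEGER does not enforce the type, it only means numeric-looking text is converted to an integer on insert, while anything else is stored as given. A minimal demonstration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE students(grade INTEGER)")

# A numeric-looking string is coerced to an integer by INTEGER affinity...
con.execute("INSERT INTO students VALUES ('42')")

# ...but arbitrary text is simply stored as text -- no error is raised.
con.execute("INSERT INTO students VALUES ('hello')")

rows = [type(r[0]) for r in con.execute("SELECT grade FROM students")]
```

So the declared type mostly affects how values are stored (and compared), not what the column will accept.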
not supposed to be happening.
Regards
Mark
> -Original Message-
> From: Gerry Snyder [mailto:[EMAIL PROTECTED]
> Sent: 01 November 2005 14:58
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] disabling journalling of the database - side
> affects?
>
&
y we want to be able to use the latest versions of SQLite as they are
released and as such don't want to stay with 3.2.1 especially as we may have
been inadvertently benefiting from what was actually a bug anyhow.
Regards
Mark
> -Original Message-
> From: [EMAIL PROTECTED]
is a little annoying.
If this is a bug in 3.2.7 of SQlite can it be fixed? If I cannot disable the
journal file safely by the mechanism described in my previous email then can
somebody please indicate how I can disable journalling of the database safely.
Regards
Mark
> -Original Mess
sqlite3BtreeFactory. We have done this as we do not want the performance
overhead of doubling the amount of writes we make, as we are using a NOR flash
filing system and this is not particularly quick.
Can anybody help me?
Thanks
Mark
On Sun, 30 Oct 2005, Dan Kennedy wrote:
> When you execute this SQL: "delete from v_items where item='me'",
> SQLite essentially does:
>
> FOR EACH ROW IN "select FROM v_items where item='me'" {
>
> Execute trigger program
>
> }
That makes perfect sense.
he trigger. Assuming and using
'NULL' would not work, so what does sqlite do? Just ignore those parts of
the where clause that it does not have all the values for?
Thnx for your time & Regards,
Mark.
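The per-row execution Dan describes is easy to observe with a small INSTEAD OF trigger on a view; this sketch uses invented table/view names in the spirit of the example above:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE items(item TEXT);
INSERT INTO items VALUES ('me'), ('you'), ('me');
CREATE VIEW v_items AS SELECT item FROM items;

-- The trigger body runs once per row matched by the DELETE's WHERE
-- clause, with OLD bound to that view row, as described above.
CREATE TRIGGER del_items INSTEAD OF DELETE ON v_items
BEGIN
    DELETE FROM items WHERE item = OLD.item;
END;
""")

con.execute("DELETE FROM v_items WHERE item='me'")
remaining = [r[0] for r in con.execute("SELECT item FROM items")]
```

Note the trigger fires for each row the view originally matched, even when an earlier firing has already removed the underlying base-table rows.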
.
So these times are not affected by vacuum.
Thanks
Mark
> -Original Message-
> From: Brett Wilson [mailto:[EMAIL PROTECTED]
> Sent: 26 October 2005 19:22
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] Very Slow delete times on larger
> databases, please help!
There doesn't appear to be any real documentation over what page size to use. I
think it is more of a case of experimenting and determining which is best for
your system/application.
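Experimenting is straightforward with the page_size pragma, as long as it is issued before the first table is created (or followed by a VACUUM); a sketch, with 4096 as an arbitrary trial value:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# page_size only takes effect if set before the database is populated;
# after that point a VACUUM is needed to rebuild with the new size.
con.execute("PRAGMA page_size = 4096")
con.execute("CREATE TABLE t(x)")

page_size = con.execute("PRAGMA page_size").fetchone()[0]
```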
In the archive I found an article stating that for optimum performance on Win32
to match the page size with
but any time
saving is welcome, especially as the test is for a 50% full scenario so at 99%
we can expect it to take 6 minutes.
Thanks again for your help.
If there are any other ideas on how we can optimise this further then please
let me know.
Mark
any configuration of SQLite, our
filesystem code or the hardware to try and get this figure down. Can anyone
give me a reasonably detailed description of what is happening during delete.
The documentation on the website has not helped us diagnose where our problem
lies.
Best Regards
Mark
-Origi
@sqlite.org
Subject: Re: [sqlite] Very Slow delete times on larger databases, please
help!
"Allan, Mark" <[EMAIL PROTECTED]> wrote:
> We are experiencing incredibly slow delete times when deleting a
> large number of rows:-
>
> We are using SQLite on an embedded platform with
it handles sqlite3_open() and sqlite3_close() just
fine. I'm sure I'm
missing something elementary but could anyone point me in the right
direction?
Mark.
he trouble is, it
> > includes the 2 rows from A which match the rows in B. I'd like to get
> > rid of them and see only the non-matching rows.
> >
> > Thanks a lot for your help!
> >
> > Bob Cochran
> > Greenbelt, Maryland, USA
> >
> >
>
Regards,
Mark
bigger than one page (1024 bytes) as we are running on an
embedded system and are a little limited for RAM.
Thanks again for your help
Mark
> -Original Message-
> From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
> Sent: 04 October 2005 11:47
> To: sqlite-users@sqlite.org
&
system and we wish to improve the
performance of writing records with SQlite. It would be helpful to know why
SQlite needs to update so much of the database file on each update.
Thanks in advance
Mark
Fedora Core 3, but may have been
moved in to core for Fedora Core 4. If you need any help getting this
package installed on Fedora, better to ask on fedora-list:
http://www.redhat.com/mailman/listinfo/fedora-list
Mark.
signature.asc
Description: This is a digitally signed message part
Thanks Dennis
On 9/30/05, Dennis Cote <[EMAIL PROTECTED]> wrote:
>
> Mark Wyszomierski wrote:
>
> >Hi all,
> > Does sqlite allow multiple keys? When I created a table I did:
> > CREATE TABLE test (name, address, fav_color, primary key(name, address))
> >
in SQLite
Database Browser
and it complained that the table has multiple primary fields.
Thanks,
Mark
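The distinction is that SQLite supports one composite primary key spanning several columns, not multiple primary keys; the browser tool was objecting to the latter reading. A quick check of the composite form:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# One composite primary key over two columns -- not two separate keys.
con.execute("""CREATE TABLE test(
    name TEXT,
    address TEXT,
    fav_color TEXT,
    PRIMARY KEY (name, address)
)""")

con.execute("INSERT INTO test VALUES ('ann', '1 main st', 'red')")
# The same name with a different address is fine...
con.execute("INSERT INTO test VALUES ('ann', '2 oak ave', 'blue')")
# ...but duplicating both key columns violates the constraint.
try:
    con.execute("INSERT INTO test VALUES ('ann', '1 main st', 'green')")
    duplicate_allowed = True
except sqlite3.IntegrityError:
    duplicate_allowed = False
```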
s on my system that would
both need to be updated when a new sqlite is released. In fact, libgda
will be adding support for this shortly:
http://mail.gnome.org/archives/gnome-db-list/2005-August/msg00048.html
Mark Drago.
Excellent! This is exactly what I am looking for. Thanks
> -Original Message-
> From: Dennis Jenkins [mailto:[EMAIL PROTECTED]
> Sent: 16 September 2005 12:58
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] determining number of 'used' pages?
>
>
> Ma
on
the filesystem?
Thanks in advance for your help.
Mark
() - then there are no memory leaks. But opening and closing the
database every time I have to post a message may be too burdensome,
Thank you!
On 9/15/05, Dennis Jenkins <[EMAIL PROTECTED]> wrote:
>
> Mark Wyszomierski wrote:
>
> > app1:
> >SomeThread()
> >
,
replacing PostMessage() with SendMessage() would be a huge penalty. Any
ideas why there is a problem here?
Thanks,
Mark
On 9/15/05, Reid Thompson <[EMAIL PROTECTED]> wrote:
>
> Jay Sprenkle wrote:
> > The premier analysis tool that I know about is valgrind:
> >
&g
65 20 69 73 20 6C 6F 63 6B
Is there anyway to tell how this is happening? Otherwise cleanup is working
fine. I don't have the message anywhere in my app, so I
guess this must be from sqlite.
Thanks,
Mark
On Thu, 2005-09-08 at 14:36 -0400, D. Richard Hipp wrote:
> On Thu, 2005-09-08 at 10:48 -0400, Mark Drago wrote:
> > However, it seems that for every rollback that I do there is a file left
> > in the directory with the databases. I have 30-something files named
> &g
of this is that the database could possibly store the
data in a more compact form, correct? If that's not a concern, leaving it
all as text would not make a difference?
Thanks,
Mark
Ah excellent, thanks Jay,
Mark
On 9/13/05, Jay Sprenkle <[EMAIL PROTECTED]> wrote:
>
> See the wiki section of the documentation on the web site. There's a page
> devoted to this.
>
> On 9/13/05, Mark Wyszomierski < [EMAIL PROTECTED]> wrote:
> >
&g
Hi all,
Moving from a mysql database to sqlite. I had some date/time fields in my
mysql database. I would just populate them using the now() function. How
could I achieve the same in my new sqlite database?
Thanks!
Mark
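SQLite's counterpart to MySQL's now() is datetime('now'); CURRENT_TIMESTAMP also works as a column default. A small sketch with an invented table:

```python
import sqlite3

con = sqlite3.connect(":memory:")

# CURRENT_TIMESTAMP as a default fills the column automatically;
# datetime('now') does the same thing explicitly in an INSERT.
con.execute("""CREATE TABLE log(
    msg TEXT,
    created TEXT DEFAULT CURRENT_TIMESTAMP
)""")
con.execute("INSERT INTO log(msg) VALUES ('hello')")
con.execute("INSERT INTO log(msg, created) VALUES ('explicit', datetime('now'))")

rows = con.execute("SELECT msg, created FROM log").fetchall()
```

Both forms produce 'YYYY-MM-DD HH:MM:SS' strings in UTC, which sort and compare correctly as text.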
I would like to second this opinion. I think a pragma to tell SQLite
whether or not 'I really know what I'm doing' and want help or not is the
preferred method.
I don't like the idea of overloading a syntax to have add-on non-obvious
implications/meaning.
Regards,
Mark.
> For example, some database
the ALL THE
REASON you need to just follow the standard!
Rgds,
Mark.
> -Original Message-
> From: Puneet Kishor [mailto:[EMAIL PROTECTED]
> Sent: Thursday, September 08, 2005 6:50 PM
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] SUM and NULL values
>
>
returns the following. So,
if you have some means of getting the md5sum of the file, make sure it
matches this:
9c79b461ff30240a6f9d70dd67f8faea sqlite-2.8.16.tar.gz
If you can't get the md5sum, at the very least make sure that the file
size is exactly 981834 bytes.
Mark.
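If a standalone md5sum tool isn't handy, a few lines of Python's stdlib hashlib can compute the digest to compare against the value quoted above (the helper name here is my own):

```python
import hashlib

def md5_of_file(path, chunk_size=65536):
    """Return the hex md5 digest of a file, reading it in chunks
    so large downloads don't need to fit in memory."""
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            digest.update(chunk)
    return digest.hexdigest()

# Sanity check against a well-known digest (md5 of the empty input).
empty_md5 = hashlib.md5(b"").hexdigest()
```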
On Thu, 2005-09-08 at 13
On Wed, 2005-09-07 at 17:20 -0400, Mark Drago wrote:
> On Tue, 2005-09-06 at 16:07 -0400, D. Richard Hipp wrote:
> > On Tue, 2005-09-06 at 15:49 -0400, Mark Drago wrote:
> >
> > > 2. I could continue to write to the database in the single thread, but
> > >
On Tue, 2005-09-06 at 16:07 -0400, D. Richard Hipp wrote:
> On Tue, 2005-09-06 at 15:49 -0400, Mark Drago wrote:
>
> > 2. I could continue to write to the database in the single thread, but
> > if the write fails, add the data to a queue and continue. Then, when
> > a
SHARED lock after a read?
>
> And a general survey to everyone... in your applications, what is the
> 'standard' practice to handle a SELECT statement that may return more than a
> few rows? Can temporary tables be used without still holding the
> database-level lock?
>
> Sorry Mar
like that.
Like I said, I'm interested to know how other people have handled such
situations. Any ideas are greatly appreciated.
Thanks,
Mark Drago
is. But if
the common case is that the database will be opened anyway, why not do it
at a time and in a way that can be put to good use by the application
(programmer)?
Just my 0.02
Regards,
Mark.
> - Original Message -
> From: "D. Richard Hipp" <[EMAIL PROTECTED]>
&g
On Sun, 21 Aug 2005, Mark de Vries wrote:
> > > I have tried two versions of the trigger:
> > >
> > > CREATE TRIGGER task_list_1
> > > AFTER INSERT ON task_list
> > > BEGIN
> > > UPDATE task_list
> >
ame question I had. And I realized that this is just a
limitation of sqlite. No problem, I will just have to do things a little
different than I'm used to. I my case there is no need to be absolutely
'secure' about the value in the these fields. And the pros of using sqlite
for the project I'm working on outweigh these cons.
Thnx to all who responded to my version of this question.
Regards,
Mark
level errors/bug and saves (duplicate)
code in the apps accessing the database.
Regards,
Mark
to the above
must be possible in plain SQL also?
Regards,
Mark
; From: Christian Smith [mailto:[EMAIL PROTECTED]
> Sent: 27 July 2005 17:35
> To: sqlite-users@sqlite.org
> Subject: Re: [sqlite] UPDATE - crash when many columns
>
>
> On Wed, 27 Jul 2005, Mark Allan wrote:
>
> >
> >Hi,
> >
> >I am using the SQL command:
this happens. Has anybody had the same or a similar
problem? Can anyone help me or help me confirm what the actual problem may be?
Thanks in advance for any help
Mark
nux) check. Older versions of configure had a check for
PowerPC Linux to cover a situation that occurred WAY back in 1996 or
so... a lot of folks are still running into this issue.)
Fix the configure script, or aclocal files.. and that should correct the
issue.
--Mark
Robert P. J. Day wrote
to figure out "where".
--Mark
[EMAIL PROTECTED] wrote:
Mark Hatle <[EMAIL PROTECTED]> writes:
Does anyone have any suggestions on how to solve this? Inform sqlite of
the root portion of the path? Temporarily disable the journal when the
chroot operation will be performed
There is no copyright statement or license stated in the article
or in the download.
So it isn't clear what the legal status of CppSQLite is.
-mda
On Fri, 23 Apr 2004 14:56:45 +0100, "Rob Groves"
<[EMAIL PROTECTED]> said:
> For those that are interested, a new version of CppSQLite and
>
These disk access issues are why no database I know of actually
stores large objects inline. It would be crazy to do so.
mysql, postgres, and oracle all have support for blobs, and
none of them store them inline.
(btw, if you care about disk io performance for blobs,
you can tune the fs
On Thu, 15 Apr 2004 20:16:32 -0400, "Doug Currie" <[EMAIL PROTECTED]> said:
> I used this design in a proprietary database in the late 1980s. The
> only reason I didn't consider modifying SQLite this way up until now
> is that I was anticipating BTree changes for 3.0, so I confined my
> efforts
On Wed, 14 Apr 2004 08:13:39 -0400, "D. Richard Hipp" <[EMAIL PROTECTED]>
said:
>* Support for atomic commits of multi-database transactions,
> which gives you a limited kind of table-level locking,
> assuming you are willing to put each table in a separate
> database.
and
Wednesday, April 14, 2004, 1:16:54 AM, Andrew Piskorski wrote:
> as far as I can tell, it seems to be describing a system with
> the usual Oracle/PostgreSQL MVCC semantics, EXCEPT of course that
> Currie proposes that each Write transaction must take a lock on the
> database as a whole.
Well, i
On Wed, 31 Mar 2004 12:15:36 +1000, [EMAIL PROTECTED] said:
> G'day,
> [snip of Ben's pseudo-code]
Just to check my understanding: the suggestion here is to reduce
reader-writer conflict windows by buffering of writes.
The writer acquires a read lock at the start of the transaction,
and
-mark (BOM).
Furthermore, it is not a fixed width encoding.
(Java, and Windows prior to win2k, behaved as it if it was
fixed length, by neglecting surrogate pairs, but they are
just broken.)
UTF-8 is not a fixed width encoding either, but it does
not have embedded nulls, and it is supported "nat
ven more vulnerable to NFS
than it is already.
I would not suggest mmap as the only solution; as with web servers,
I would suggest the strategy as a configurable option.
Also, even I would hesitate over suggesting mmap for writers, without
a lot of experimentation.
On Mon, 22 Mar 2004 14:56:35 -0500,
Ok I had no problem compiling the program, and the test program it
creates works fine.. but I can't seem to get the libraries to function..
I've tried moving them to various directories, but have accomplished
nothing of great substance. The closest I come is having it successfully
include