be too
hard to apply this when building temp tables from subqueries, as you must
have parsed the join criteria to be able to join the tables.
Thanks for the reply,
S
On Tue, Dec 2, 2008 at 3:57 PM, P Kishor <[EMAIL PROTECTED]> wrote:
> On 12/2/08, Da Martian <[EMAIL PROTECTED]> wrote:
Hi
I have continual issues with subquery performance when subqueries are used
for joins. It crops up all the time in my daily work.
If you create a derived table using a subquery and use it in a join, SQLite
performance is abysmal. However, if you make a temp table from said subquery
and index this
Tandetnik <[EMAIL PROTECTED]> wrote:
> Da Martian <[EMAIL PROTECTED]> wrote:
> > I have two tables as follows:
> >
> > CREATE TABLE ROT_DIM_CONTRACT_LINE (ContractNo Integer,ContractLine
> > Integer,StartDate Integer,EndDate Integer,NCRAmt Float,Item
> > Inte
detnik <[EMAIL PROTECTED]> wrote:
> "Da Martian" <[EMAIL PROTECTED]> wrote
> in message
> news:[EMAIL PROTECTED]
> > Does anyone have any ideas on how to optimise this type of process in
> > SQLite?
>
Hi
I have two queries. Each returns a full result set in around 1 minute when
run on its own. Make each of these queries a subquery and join them
with an inner join, and the query takes more than 10 minutes; it had not
finished after 10 minutes, at which point I cancelled it.
Does anyone have any ideas on how to optimise
Hi
I suspect most of you know this, but since it helped me I decided to post
it. I have around 5 million records in an SQLite file of about 400MB. Running
joins over this was taking hours! And the disk went mental. Tweaking the
following two pragmas solved the problem though. Hours to seconds!
, it was all your suggestions which led
me to the solution.
Much appreciated.
Have a good weekend.
S
On 12/8/06, Trevor Talbot <[EMAIL PROTECTED]> wrote:
On 12/7/06, Da Martian <[EMAIL PROTECTED]> wrote:
> Yeah, I am currently using VirtualTree from Mike's Delphi Gems. It's fully
It looks fine to me. To help check it, one thing you can try is
writing the result of FieldAsString directly to a file as raw bytes,
then opening that in Notepad with "Encoding" set to "Unicode". E.g.
something logically equivalent to:
size := Length(field) * 2;
SetLength(buffer, size);
Hi
You seem really confused about the whole encoding issue.
Yes, definitely confused. I had always hoped Unicode would simplify the world,
but my experience has shown no such luck :-) Codepages haunted my past and
encodings haunt my future :-)
Ok, that does answer one of my questions I think.
> I am still having issues trying to get my characters standardized. I spent
> all of yesterday playing with ideas but I'm still in the dark.
Whatever you were doing the first time was fine:
I have been having that very thought!
So if I look at a name with umlauts in the database via
I think standard functions for conversions would be very helpful.
I am still having issues trying to get my characters standardized. I spent
all of yesterday playing with ideas but I'm still in the dark.
Part of my problem is I don't have a clue what my source data is encoded as.
Does anyone know of a
Hi
I still can't seem to locate any, but I used the main icon from the homepage
and it looks pretty good in my about box.
Thanks,
On 11/29/06, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
"Da Martian" <[EMAIL PROTECTED]> wrote:
> Hi
>
> Are there any powered by
Hi
I have a system up and working using sqlite3, but I think I am having
unicode issues and I am not sure how I should go about coding the solution.
I was hoping someone could share the approach needed. Here is my situation:
I have German characters with umlauts which I would like to get
Hi
Are there any powered by images for SQLite3?
Google seems at a loss, which is unusual.
S
probably won't) you can do it in another thread
and update the interface as appropriate.
I'm not seeing a downside here.
Isaac
On 10/26/06, Da Martian <[EMAIL PROTECTED]> wrote:
> No, there isn't, but RDBM systems are a generalised data retrieval
> mechanism. As such they suffer from that g
No, there isn't, but RDBM systems are a generalised data retrieval mechanism.
As such they suffer from that generality.
Don't get me wrong: RDBM systems are appropriate for 95% of all data
requirements I have had to deal with, and I would never dream of trying to
write one from scratch, nor can I
Indeed,
Thanks to all for the responses. Currently I use the separate count(*), but I
think I will change to the "prepare, step(n), reset" option, only because the
query is a little slow.
Ultimately SQLite is brilliant; hands down the best embeddable and standalone
DB I have yet encountered.
candour, discussing is what it's all about.
Stephen
On 10/25/06, A. Pagaltzis <[EMAIL PROTECTED]> wrote:
* Da Martian <[EMAIL PROTECTED]> [2006-10-25 15:05]:
> It was not meant as an insult; however, you did set the tone
> with your post (a la: Either you want the data from the que
It was not meant as an insult; however, you did set the tone with your post
(a la: "Either you want the data
from the query, or you don't."). I merely responded in kind. If you live in
glass houses, don't throw stones, and all that. I mean, it's not hard to see that
loading 20 million records into memory
Your extremely simplistic view on this is a result of never dealing with
voluminous data and result sets and quick-running queries. Once you put
volumes into your thinking cap you will begin to see why you don't just read
everything into memory for the hell of it.
Just to complete the above thought.
It's quite simple:
1) If there are too many results, the user will have to wait a long time
before they see anything, because we will be buffering away results. The
application will appear sluggish. The user will get bored. I can point you
to dozens of studies which show the correlation between
But yes, paging using LIMIT and OFFSET is also a solution. Again, not as
efficient though, because of all the repeated queries :-)
On 10/25/06, Martin Jenkins <[EMAIL PROTECTED]> wrote:
Da Martian wrote:
> But to return all the rows just to count them requires N calls to
> step. If
Hi
Thanks for your response.
In the end it's not important, as you point out, since many options are available.
I guess I allowed myself to indulge in an "idealistic" fantasy for a moment :-)
S
On 10/24/06, Dennis Cote <[EMAIL PROTECTED]> wrote:
Da Martian wrote:
> Hi
>
>
What prevents you from doing the same yourself? Just collect the
data in a memory buffer as you get it, before processing it.
Nothing but efficiency, as discussed. Basically, as Mr Cote has said, it's
either a count(*) query or O(n) step calls. The former means two statements
have to be run, if
, [EMAIL PROTECTED] <[EMAIL PROTECTED]> wrote:
"Da Martian" <[EMAIL PROTECTED]> wrote:
> Hi
>
>
> >The optimal way is that you prepare the statement, fetch and
> > count the results with sqlite3_step.
>
> How would I "fetch and count" the resu
Hi
The optimal way is that you prepare the statement, fetch and
count the results with sqlite3_step.
How would I "fetch and count" the results via sqlite3_step?
Do you mean fetch all the records first? What if my result set is huge, and
I would only like to show the first few records but
Hi
I have found that using an IN clause with a subquery can be twice as fast as
a straight join. Can anyone explain the logic of this to me? I am curious to
understand it so I can optimise other queries for better performance.
I have included the queries below:
OT_TARGETS has 20 rows for
Hi
I have 3 million rows in a table which takes up about 3.1GB on disk. The
count(*) is slow.
I have run ANALYZE, but apart from creating the stats table it does
nothing.
Any reason why this is? Can it be improved?