ng a modulo but could not manage to
get it to work.
Could anybody shed a ray of light on a different approach? This looks like a
recurring problem; surely there is an experienced SQL programmer here who
has tackled this issue a couple of times?
Thanks for any help,
Vincent
--
Sent via pgsql-sql mailing list
ly in
PostgreSQL.
The error is:

    ERROR: invalid reference to FROM-clause entry for table "c"
    Hint: There is an entry for table "c", but it cannot be referenced
    from this part of the query.
    Character: 475

It's the "on ctots.cid = c.customer_id" pa
2004 3:12 PM
To: [EMAIL PROTECTED]
Subject: Re: [SQL] staggered query?
Hi, try this:

SELECT col1, col2
FROM yourtable
WHERE to_number(to_char(col1, 'SS'), '99') IN (0, 10, 20, 30, 40, 50);
HTH
Denis
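
The modulo approach the original poster mentioned can also be written
directly with extract() -- a sketch, assuming the timestamp column is col1
and that fractional seconds can be truncated away:

```sql
SELECT col1, col2
FROM yourtable
-- keep only rows whose seconds component is a multiple of 10
WHERE CAST(extract(second FROM col1) AS integer) % 10 = 0;
```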
> - Original Message -
> From: Vincent Ladlad &l
hi! I'm new to SQL, and I need to find a solution
to this problem:
I have a table with two columns; the first column
is of type timestamp.
The table contains hundreds of thousands of records.
I need to get all the entries/records at every 10-second
interval. Example, given a table:
hh/mm/ss |
I have a table that has a field called ID.
I will be inserting data into that field that will
either be a unique number or blank. Is there a way to
do this either at table creation time or by using some
check() voodoo?
Thanks.
--
Vincent Stoessel
Linux Systems Developer
vincent xaymaca.com
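
One way to get "unique number or blank", assuming "blank" means NULL: a
plain UNIQUE constraint already does this in PostgreSQL, because NULLs
never conflict with each other, so no check() voodoo is needed. A sketch
with a hypothetical table name:

```sql
CREATE TABLE things (
    id integer UNIQUE  -- duplicate non-NULL values are rejected;
                       -- any number of NULLs is allowed
);
```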
date), so that I can
make a query / index set like :
CREATE INDEX INDEX1 ON TABLE1 (INVERT(X), Y ASC);
SELECT * FROM TABLE1 ORDER BY INVERT(X) ASC, Y ASC;
Wouldn't it be great to have a MySQL/SAP DB-like syntax of the sort:
CREATE INDEX INDEX1 ON TABLE1 (X DESC, Y ASC);
Thanks,
vi
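
For what it's worth, later PostgreSQL releases (8.3 and up) accept exactly
the syntax wished for here:

```sql
CREATE INDEX index1 ON table1 (x DESC, y ASC);
-- a matching ORDER BY can then walk the index in order:
SELECT * FROM table1 ORDER BY x DESC, y ASC;
```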
Hello.
I wonder if someone could give me a tip on how I should dump a db with LOs. I use pg_dump
and pg_dumpall and everything is dumped but not the LOs. What should I do about that?
I will be very grateful for an answer.
P.S.
Sorry for my english ;)
Mateusz Mazur
[EMAIL PROTECTED]
POLAND
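
A sketch of one common answer, assuming a pg_dump new enough to support
it: the -b/--blobs switch together with a non-plain-text archive format
includes large objects in the dump:

```shell
# dump in custom format (-Fc) including large objects (-b)
pg_dump -Fc -b -f mydb.dump mydb
# restore with pg_restore
pg_restore -d mydb mydb.dump
```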
Use the unlimited-length PostgreSQL type "text" (in 7.1 it's unlimited;
before that there were limits).
-Mitch
- Original Message -
From: "Maurizio Ortolan" <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Sunday, April 15, 2001 1:18 PM
Subject: How to simulate MEMO d
Aren't OIDs just integers? Isn't this limit just the limit of the value an
int4 can hold?
2,147,483,647 is the max for an int4 (I think), so at 500 million a day
you're looking at more like 4.29 (and change) days.
If I'm correct in all the above, there wouldn't be any way to increase the
limit wi
Aren't there some pretty big concerns when using OIDs as IDs to relate records
in different tables to each other? Wouldn't the OIDs be totally re-assigned
if you had to dump/restore your database?
Just a question to satisfy my own curiosity, thanks!
-Mitch
> Folks,
>
> Because it's a very elegant
> Hello all,
>
> How would I prevent a user from submitting information to a table once
they
> have already done so for that day.
The best you could probably do is to go back and delete undesired records
at the end of the day because if it is as you said, they've already put the
information into
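
If preventing the insert up front is an option, a unique index over the
user and the day would reject the second submission -- a sketch with
hypothetical table and column names, assuming a PostgreSQL new enough to
allow expressions in multi-column indexes:

```sql
-- one row per user per calendar day; a second INSERT that day fails
CREATE UNIQUE INDEX one_submission_per_day
    ON submissions (user_id, date(submitted_at));
```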
I'm curious, I know PG doesn't have support for 'full' text indexing so I'm
wondering at what point does indexing become ineffective with text type
fields?
-Mitch
- Original Message -
From: "Stephan Szabo" <[EMAIL PROTECTED]>
To: "User Lenzi" <[EMAIL PROTECTED]>
Cc: "pgsql-sql" <[EMAIL P
I emailed the list a while back about doing some weighted searching, asking
if anyone had implemented any kind of weighted search in PostgreSQL.. I'm
still wondering the same thing and if anyone has, I would greatly appreciate
a private email, I'd like to discuss it in detail.. I have several idea
Removing indexes will speed up the INSERT portion but slow down the SELECT
portion.
Just an FYI, you can INSERT into table (select whatever from another
table) -- you could probably do what you need in a single query (but would
also probably still have the speed problem).
Have you EXPLAINed the
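
The INSERT-from-SELECT form mentioned above looks like this, with
hypothetical table and column names:

```sql
INSERT INTO target_table (col_a, col_b)
SELECT col_a, col_b
FROM source_table
WHERE col_a IS NOT NULL;  -- any filtering can happen in the SELECT
```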
As I've found through countless trial and error and many emails to this
list, performance is mostly in how you structure queries and how you use the
backend (indexes, proper VACUUMing etc etc)..
Increasing the size passed as -S and -B options will help -- there is
probably much more that can be d
ive... I'll look for some other
ways to speed up the query a bit..
Thanks!
-Mitch
- Original Message -
From: "Tom Lane" <[EMAIL PROTECTED]>
To: "Mitch Vincent" <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Friday, July 21, 2000 1:26 PM
S
SELECT * FROM applicants AS a
WHERE (a.created::date > '05-01-2000'
    OR a.resubmitted::date > '05-01-2000')
ORDER BY (CASE WHEN a.resubmitted > a.created
          THEN a.resubmitted ELSE a.created END) DESC
LIMIT 10 OFFSET 0;
There is one of the queries.. I just remembered that the order by was added
since
A while back I as told (by Tom Lane I *think*) that timestamp (previously
datetime) fields couldn't be indexed as such and that I should index them
using this method :
CREATE INDEX "applicants_resubmitted" on "applicants" using btree ( date
("resubmitted") "date_ops" );
Since almost all the que
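
Queries that constrain on the same expression as the index can then use
it, e.g.:

```sql
SELECT * FROM applicants
WHERE date("resubmitted") > '2000-05-01';
```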
SELECT whatever FROM wherever WHERE lower(yourfield) = 'this';
You can do it with a case-insensitive regex search, but those can't use
indexes and can become very slow on large tables:
SELECT whatever FROM wherever WHERE yourfield ~* 'this';
lower() does leak a bit of memory from what I've hear
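
If the exact-match form is common, an index on the expression itself lets
the lower() comparison use an index -- a sketch reusing the names from the
example above:

```sql
-- index the lowercased value, then the first query above can use it
CREATE INDEX wherever_yourfield_lower_idx ON wherever (lower(yourfield));
SELECT whatever FROM wherever WHERE lower(yourfield) = 'this';
```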
anything that could be correctable?
Thanks!!
-Mitch
- Original Message -
From: Bruce Momjian <[EMAIL PROTECTED]>
To: Mitch Vincent <[EMAIL PROTECTED]>
Cc: <[EMAIL PROTECTED]>
Sent: Saturday, June 24, 2000 2:33 PM
Subject: Re: [SQL] More full text index..
> I wou
I hadn't concentrated on the INSERT/UPDATE/DELETE speed of this until today
and I find that it's amazingly slow. Of course the time it takes is relative
to the size of the text but still, almost a minute to delete one record on a
Dual celeron 600 with 256 Megs of RAM and an ATA/66 7200 RPM 30 GIG
I just noticed this in some testing..
When I use my PHP application to insert text into the field that's used in
the full text index it takes 9 full seconds, when investigating resource
usage using 'top' I see this :
Development from a completely idle start up :
PID USERNAME PRI NICE SIZE
> vacuum;
> vacuum analyze;
> select f1.id from app_fti f1, app_fti f2 where f1.string~*'visual' and
> f2.string~*'basic' and f1.id=f2.id;
Use ~* '^basic'
It will use the indexes, I believe. Also, enable likeplanning (look in
contrib/likeplanning) -- it will speed things up too. If that doesn't h
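
Applying that suggestion to both patterns in the quoted query gives the
following sketch (anchored patterns may be able to use an index, depending
on the locale; unanchored ones cannot):

```sql
SELECT f1.id
FROM app_fti f1, app_fti f2
WHERE f1.string ~* '^visual'  -- anchored at the start of the string
  AND f2.string ~* '^basic'
  AND f1.id = f2.id;
```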
I ran into a problem today that I hope someone can help me with...
I have a database (and application) that is used to track 'applicants'..
These applicants have two timestamp fields associated with their records and
both have relevance as to how long the applicant has been available..
The resub
Let's see the queries you're running and their plans; I'd bet there are ways
to speed them up (that's always been the case with mine!)..
- Mitch
"The only real failure is quitting."
- Original Message -
From: Alessandro Rossi <[EMAIL PROTECTED]>
To: <[EMAIL PROTECTED]>
Sent: Fri
> > Does anyone know why when I am in a particular DB as user postgres and
use
> > the following statement, why I get this error?"
> >
> > This is the statement;
> > SELECT * FROM some_file where ID = 1;
> >
>
> --
--
> > Erro
It can't be any larger than 8k (minus a bit of overhead). You can increase
this now to 32k (again, minus the same overhead) by changing BLCKSZ to 32k in
the config.h header..
I'm successfully doing this in my database (which is pretty high-traffic and
pretty large).
Good luck!
-Mitch
- Origi
Using PostgreSQL 7.0 I'm Doing ...
ipa2=# CREATE INDEX "app_stat_month" on applicant_stats(date(month));
ERROR: SQL-language function not supported in this context.
ipa2=# \d applicant_stats
Table "applicant_stats"
 Attribute | Type | Modifier
-----------+------+----------
 app_i