[SQL] Given 02-01-2006 to 02-28-2006, output all days.
Is there a real quick way to do a query that will show me all the dates given a startdate and an end date?

Given: 02-01-2006 and 02-28-2006, it should give me:

02-01-2006
02-02-2006
..
02-27-2006
02-28-2006

Can this be done by a built-in function, perhaps?
Re: [SQL] Given 02-01-2006 to 02-28-2006, output all days.
Henry Ortega wrote:
(question about set of all days between two dates)
I don't know of a builtin way to do it off the top of my head, but it's a
pretty simple function to write:
create function days(start date, finish date) returns setof date as $$
declare
    curdate date;
begin
    curdate := start;
    while curdate <= finish loop
        return next curdate;       -- emit the current date into the result set
        curdate := curdate + 1;    -- date + integer advances one day
    end loop;
    return;
end;
$$ language plpgsql;
# select * from days('2006-02-01', '2006-02-07');
    days
------------
 2006-02-01
 2006-02-02
 2006-02-03
 2006-02-04
 2006-02-05
 2006-02-06
 2006-02-07
(7 rows)
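For readers following along outside Postgres, the loop in the plpgsql function above can be sketched as a plain Python generator (a stand-in for illustration only; names here are mine, not from the post):

```python
from datetime import date, timedelta

def days(start, finish):
    """Yield each date from start to finish inclusive, mirroring the
    plpgsql loop above: curdate advances one day per iteration."""
    curdate = start
    while curdate <= finish:
        yield curdate
        curdate += timedelta(days=1)

# The same seven days the psql session above returned.
week = list(days(date(2006, 2, 1), date(2006, 2, 7)))
```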
---(end of broadcast)---
TIP 5: don't forget to increase your free space map settings
Re: [SQL] Given 02-01-2006 to 02-28-2006, output all days.
On Fri, Feb 17, 2006 at 04:07:28PM -0500, Henry Ortega wrote:
> Is there a real quick way to do a query that will show me all the dates
> given a startdate and an end date?

You could use generate_series(), which is built-in since 8.0 and easily written in earlier versions.

SELECT date '2006-02-01' + x
FROM generate_series(0, date '2006-02-28' - date '2006-02-01') AS g(x);

--
Michael Fuhr
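The trick above is set-based: subtracting two dates gives a day count, and adding each offset 0..N back to the start date yields the series. generate_series() is PostgreSQL-specific, but the same shape can be demonstrated with SQLite's recursive CTEs via Python's sqlite3 module (a stand-in sketch, not the Postgres syntax):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Emulate generate_series(0, finish - start) with a recursive CTE,
# then add each offset to the start date, as in Michael Fuhr's query.
rows = conn.execute("""
    WITH RECURSIVE g(x) AS (
        SELECT 0
        UNION ALL
        SELECT x + 1 FROM g
        WHERE x + 1 <= julianday('2006-02-28') - julianday('2006-02-01')
    )
    SELECT date('2006-02-01', '+' || x || ' days') FROM g
""").fetchall()
```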
Re: [SQL] Given 02-01-2006 to 02-28-2006, output all days.
That usually indicates that, for whatever reason, plpgsql.so is from a
different version of PostgreSQL than the database server. If you installed
PostgreSQL from source, make sure you configured the server to look in the same
lib dir as its libs were installed to; if you've installed from package
management of some kind (RPM?) make sure you have the same versions of all
postgres-related packages.
You should also upgrade, if possible. 7.3 is effectively obsolete (37 releases
old); there are a number of bugfixes and performance improvements in more
recent versions.
-Owen
-----Original Message-----
From: Henry Ortega [mailto:[EMAIL PROTECTED]
Sent: Friday, February 17, 2006 2:06 PM
To: Owen Jacobson
Subject: Re: [SQL] Given 02-01-2006 to 02-28-2006, output all days.
This sounds good. I don't have plpgsql loaded though.
I am trying to load plpgsql and it's giving me:
ERROR: Load of file /usr/lib/pgsql/plpgsql.so failed:
/usr/lib/pgsql/plpgsql.so: undefined symbol: xlateSqlType
createlang: language installation failed
I have pgsql 7.3.2
I am googling and can't seem to find the answer. Any help would be appreciated.
On 2/17/06, Owen Jacobson <[EMAIL PROTECTED]> wrote:
[quoted reply trimmed: identical to Owen Jacobson's message above, with the plpgsql days() function and its sample output]
[SQL] two count columns?
Hello all,

I have written a small firewall and sshd logging system which uses PostgreSQL for storing the logs. The table that stores the firewall logs has, among other columns, from_ip, port, and a timestamp. I want to construct a query which returns the columns:

from_ip | port 22 entries | total entries | last_access

I have managed to put this together:

select from_ip, count(from_ip) as entries,
       max(ts)::timestamp(0) as last_access
from log
where to_port=22 and direction='in'
group by from_ip

So the only thing I'm missing is the total number of log entries matching the from_ip, but that's where I'm stuck. My instinct is to try to use subqueries:

select from_ip, count(from_ip) as entries,
       count(select * from log where ...) as tot_entries,
       max(ts)::timestamp(0) as last_access
from log
where to_port=22 and direction='in'
group by from_ip

..but count() doesn't take a subquery afaik, and how would I match the from_ip in the inner select with the outer one? So I assume that subqueries aren't the way to go. Is there even a query to return those columns in that configuration?

Thankful for any hints or tips.

--
Kind Regards,
Jan Danielsson

Te audire non possum. Musa sapientum fixa est in aure.
Re: [SQL] two count columns?
Jan Danielsson <[EMAIL PROTECTED]> writes:
> So the only thing I'm missing is the total number of log entries
> matching the from_ip, but that's where I'm stuck. My instinct is to try
> to use subqueries:
> select from_ip, count(from_ip) as entries, count(select * from log where
> ...) as tot_entries, max(ts)::timestamp(0) as last_access from log where
> to_port=22 and direction='in' group by from_ip
> ...but count() doesn't take a subquery afaik, and how would I match the
> from_ip in the inner select with the outer one?

I think you want something like

select ..., (select count(*) from log loginner where from_ip = log.from_ip) ...
from log ...

You need to attach an alias to either the inner or the outer use of "log" so that you can refer to the outer one from the inner SELECT. Here I chose to alias the inner one, but it might be clearer to alias the outer:

select ..., (select count(*) from log where from_ip = logouter.from_ip) ...
from log logouter ...

Note that anything like this is going to be pretty expensive if your log table is large. You might want to think about something involving another layer of GROUP BY instead.

regards, tom lane
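The correlated-subquery shape Tom describes can be exercised end to end on a toy table. The sketch below uses SQLite through Python's sqlite3 as a stand-in for the poster's Postgres log table (schema and sample rows are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (from_ip TEXT, to_port INT, direction TEXT, ts TEXT)")
conn.executemany("INSERT INTO log VALUES (?,?,?,?)", [
    ("10.0.0.1", 22, "in", "2006-02-01 10:00:00"),
    ("10.0.0.1", 80, "in", "2006-02-01 10:05:00"),
    ("10.0.0.1", 22, "in", "2006-02-01 10:10:00"),
    ("10.0.0.2", 22, "in", "2006-02-01 11:00:00"),
])
# Alias the outer table so the inner SELECT can correlate on it,
# exactly as in Tom Lane's second variant.
result = conn.execute("""
    SELECT from_ip,
           count(from_ip) AS entries,
           (SELECT count(*) FROM log WHERE from_ip = logouter.from_ip) AS tot_entries,
           max(ts) AS last_access
    FROM log AS logouter
    WHERE to_port = 22 AND direction = 'in'
    GROUP BY from_ip
    ORDER BY from_ip
""").fetchall()
```

Note the correlated subquery re-scans the table once per group, which is the expense Tom warns about.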
Re: [SQL] two count columns?
Jan Danielsson <[EMAIL PROTECTED]> writes:
> select from_ip, count(from_ip) as entries, count(select * from log where
> ...) as tot_entries, max(ts)::timestamp(0) as last_access from log where
> to_port=22 and direction='in' group by from_ip

select from_ip,
       count(from_ip) as entries,
       (select count(*)
        from log as l
        where l.from_ip = log.from_ip) as tot_entries,
       max(ts) as last_access
from log
where to_port=22 and direction='in'
group by from_ip

Expect it to be pretty slow though. For every from_ip it has to look up every other entry with that from_ip.

> Thankful for any hints or tips.

There is a trick you could use to make it faster, but it gets cumbersome and pretty tricky to use when you're doing more than one thing at a time:

select from_ip,
       sum(case when to_port=22 and direction='in' then 1 else 0 end) as entries,
       count(*) as tot_entries,
       max(case when to_port=22 and direction='in' then ts::timestamp(0) else null end) as last_access
from log
group by from_ip
having sum(case when to_port=22 and direction='in' then 1 else 0 end) > 0

(The HAVING clause has to repeat the aggregate expression; it can't reference the "entries" output alias.)

Note that in either case you might want to look at EXPLAIN ANALYZE results for the query and try raising work_mem for this query using SET until you see the plan using a hash aggregate. If it can use a hash aggregate for your query (more likely for the first query than the second) without swapping, it'll be faster than sorting.

--
greg
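Greg's single-pass trick is worth seeing run: the conditional aggregates filter for port 22 inside each group while count(*) still sees every row. A sketch on invented toy data, using SQLite via Python's sqlite3 as a stand-in:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (from_ip TEXT, to_port INT, direction TEXT, ts TEXT)")
conn.executemany("INSERT INTO log VALUES (?,?,?,?)", [
    ("10.0.0.1", 22, "in", "2006-02-01 10:00:00"),
    ("10.0.0.1", 80, "in", "2006-02-01 10:05:00"),
    ("10.0.0.2", 22, "in", "2006-02-01 11:00:00"),
])
# One pass over the table: CASE picks out port-22 rows per group,
# count(*) counts all rows in the group regardless.
rows = conn.execute("""
    SELECT from_ip,
           sum(CASE WHEN to_port = 22 AND direction = 'in' THEN 1 ELSE 0 END) AS entries,
           count(*) AS tot_entries,
           max(CASE WHEN to_port = 22 AND direction = 'in' THEN ts END) AS last_access
    FROM log
    GROUP BY from_ip
    HAVING sum(CASE WHEN to_port = 22 AND direction = 'in' THEN 1 ELSE 0 END) > 0
    ORDER BY from_ip
""").fetchall()
```

Unlike the correlated subquery, this scans the log exactly once, which is why Greg expects it to be faster on a large table.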
[SQL] Tough Problem -- Record Checkouts
Imagine a library of books. Each book is a record. The library is the table. A user may check out a book, and the shelf space it once occupied will store the minute that the user checked the book out. Every 5 minutes, a magical librarian walks through the library, and when a book has been checked out longer than 15 minutes, she has the power to zap it back out of the user's hands and put it back on the shelf for someone else.

How do you efficiently achieve this in a WHERE clause in SQL? For instance, here's a table of several minute columns. CO, in this case, is the checked-out minute. N, in this case, is the current minute. This translates to, "If CO = x, and N is within this range, then clear the CO column."

CO | N
---+-------------
 0 | 15-59
 1 | 0, 16-59
 2 | 0-1, 17-59
15 | 0-14, 30-59
16 | 0-15, 31-59
30 | 0-29, 45-59
31 | 0-30, 46-59
45 | 0-44
46 | 1-45
59 | 14-58

This becomes some kind of UPDATE statement with a complex WHERE clause.
[SQL] Btrieve to PostgreSQL
Hey,

I've been looking around to do this for a while and haven't gotten concrete information. I'm interested in taking data from Peachtree Accounting 2003 (which is stored in Btrieve DAT files) and importing it into a PostgreSQL database. I have looked around on the net about this, but haven't gotten any concrete methods for doing it. Figured with Pervasive's involvement in PostgreSQL, I might have some luck on this list.

Peachtree also stores the DDF (data dictionary) files for the Btrieve database in the same directory. I'm not sure what version of Btrieve the files are. Does anyone have any experience in doing this? I read somewhere about Pervasive's SDK for their DB server, which can convert Btrieve files to SQL (or something like that), but looking through the SDK and documentation, I haven't found any real information.

This is just an experimental project, so any commercial solution to do this really doesn't apply.

Regards,
--
Adam Alkins
http://www.rasadam.com
[SQL] ORDER BY with LTREE
Hello!

I'm working with the ltree [1] datatype and have labels that cannot be used directly in ltree, and I want to get a tree-like structure ordered by the labels. I.e., I've got a table

CREATE TABLE t (
    tree  LTREE,
    label TEXT
);

and data like

tree      label
--------  -----
root.1    z
root.2    c
root.2.1  a
root.2.2  b
root.3    i
root.4    f
root.4.1  k
root.4.2  c

I need a VIEW that orders "by tree, but by label within the same level of the hierarchy", so the output should be

root.2    c
root.2.1  a
root.2.2  b
root.4    f
root.4.2  c
root.4.1  k
root.3    i
root.1    z

Any idea?

Thanks,
AXEL.

[1] http://www.sai.msu.su/~megera/postgres/gist/ltree/
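The ordering being asked for can be made concrete in plain Python: build each row's sort key from the labels of all its ancestors, so siblings compare by label while children still sort under their parent. This is only an illustration of the desired order on the post's toy data, not an ltree solution:

```python
# (path, label) rows from the post
rows = [
    ("root.1", "z"), ("root.2", "c"), ("root.2.1", "a"), ("root.2.2", "b"),
    ("root.3", "i"), ("root.4", "f"), ("root.4.1", "k"), ("root.4.2", "c"),
]
label_of = dict(rows)

def key(row):
    tree, _ = row
    parts = tree.split(".")
    # Sort key = tuple of labels along the path, skipping the bare "root"
    # prefix; tuple comparison keeps parents before their children.
    return tuple(label_of[".".join(parts[:i])] for i in range(2, len(parts) + 1))

ordered = sorted(rows, key=key)
```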
[SQL] Postgres for Dummies - a new request
Hi!

I have Postgres installed on my machine and have a simple task to do:

Step 1) Fetch all the tables in the database with names starting with "AA".
Step 2) On each table you get, fire a simple SQL query: select * from <tablename>;
Step 3) Write the results into a flat file.

I have to carry out all these steps using a procedure/function. The function would be called by a trigger. Now I need to know how to write a Postgres function for that.

My problem: my experience with databases is almost 5 days, and I can't seem to understand the documentation, e.g. how to catch the return value, or how to run the procedure for testing without firing the trigger. So could anyone help me with that?

Regards,
Nalin.
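The three steps can be sketched outside a stored procedure. The example below uses SQLite via Python's sqlite3 as a stand-in (Postgres would query pg_tables or information_schema.tables instead of sqlite_master; table names and data are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE AA_one (v TEXT)")
conn.execute("CREATE TABLE AA_two (v TEXT)")
conn.execute("CREATE TABLE other (v TEXT)")
conn.execute("INSERT INTO AA_one VALUES ('hello')")
conn.execute("INSERT INTO AA_two VALUES ('world')")

# Step 1: find table names starting with "AA" in the catalog.
tables = [r[0] for r in conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table' AND name LIKE 'AA%' ORDER BY name")]

# Steps 2 and 3: select * from each table and collect flat-file lines
# (written to a string here; a real script would write to a file).
lines = []
for t in tables:
    # t comes from the catalog query above, not from user input.
    for row in conn.execute(f"SELECT * FROM {t}"):
        lines.append(t + "\t" + "\t".join(str(v) for v in row))
flat = "\n".join(lines)
```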
Re: [SQL] Trigger/Sequence headache
Stephen,
You don't need to use a separate batch to clean up the table. As
Stephan pointed out, you can call nextval after you determine that the
new row isn't a duplicate.
In case you misunderstood what Stephan had suggested let me try to
explain what is happening.
When PostgreSQL receives an INSERT request it aggregates field values
for the new row from several sources.
1) The statement itself (INSERT INTO ... VALUES ...)
2) The fields' DEFAULT values (provided by the CREATE TABLE statement)
3) Any CONSTRAINTs are taken into consideration
4) BEFORE triggers are called in alphabetical order
5) The new row is inserted
6) AFTER triggers are called in alphabetical order
The sequence is incremented when NEXTVAL is evaluated. NEXTVAL
can be evaluated in steps 1, 2 or 4 depending on how your application
is written. Since you want to be able to cancel the operation in step 4
without NEXTVAL having been evaluated, this is where you should call
NEXTVAL (instead of in steps 1 or 2). So in your trigger you want to do
something like this:
-- Check for duplicates here; RETURN NULL if one is found;
NEW.id := NEXTVAL('seq_mytable_id');
RETURN NEW;
Also, are you sure you want to return NULL rather than raise an
exception? Either way will work depending on what the desired behaviour
is. The thing is that silently dropping rows might be the source of a
debugging nightmare later on, whereas raising an exception is a bit
easier to detect. Just my $0.02. To raise an exception see RAISE
EXCEPTION in the PostgreSQL manual. It's really easy, but it cancels the
current transaction, so it may not be desirable for your uses.
-Robert
[SQL] passing array (Java) to PostgreSQL function
Hi All,
In PostgreSQL, I have written a function which accepts a string array
and inserts the contents into the DB.
When I tested that function by passing values like:
SELECT TEST_ARRAY('{xxx,,,ww}');
where TEST_ARRAY is my function name,
the records get inserted correctly. But now the problem is: how do I pass
the values from Java to this function?
I have implemented the java.sql.Array interface and provided the actual
implementation of the getArray() method, like this:
public Object getArray() {
    ArrayList temp = new ArrayList();
    temp.add("");
    temp.add("");
    temp.add("T");
    temp.add("Q");
    return temp.toArray();
}
After this I am passing the this.getArray() value to the database,
but at that point I get the following exception:
org.postgresql.util.PSQLException: Unsupported Types value: 2,003
    at org.postgresql.jdbc2.AbstractJdbc2Statement.setObject(AbstractJdbc2Statement.java:1630)
    at org.postgresql.jdbc3.AbstractJdbc3Statement.setObject(AbstractJdbc3Statement.java:1436)
    at org.postgresql.jdbc2.AbstractJdbc2Statement.setObject(AbstractJdbc2Statement.java:1636)
Is this a problem with the driver that I am using?
Can anyone please help me to overcome this?
Waiting for your response,
[SQL] SELECT on a to-be-determined table
Hi, I'm new at this, so please forgive me if I mess up. Also, if there is already a reference/tutorial somewhere, feel free to point me to it. Here's my situation:

db=# select * from projects;
 projid | projname
--------+----------
      1 | cars
      2 | houses
      3 | pets
(3 rows)

db=# select * from cars;
 carid | carname
-------+---------
     1 | ford
     2 | mazda
(2 rows)

db=# select * from houses;
 houseid | housename
---------+-----------
       1 | semi
       2 | trailer
       3 | mansion
(3 rows)

db=# select * from pets;
 petid | petname
-------+---------
     1 | dog
     2 | cat
     3 | bird
(3 rows)

Is it possible to do this: give me all the rows of the table whose project id is 2 (or whatever projid)?

Thanks
[SQL] view of weekly data
Hi, friends,

Suppose there is a table of daily transaction data with 5 fields:

time (date), open (float8), high (float8), low (float8), close (float8)

Is it possible to create a view of weekly data, i.e. open is the first day's open, high is the highest of the week, low is the lowest of the week, and close is the last day's close?

Thank you.
Re: [SQL] view of weekly data
"Ding Xiangguang" <[EMAIL PROTECTED]> writes:
> Hi, friend,
>
> Suppose there is table of daily transaction data with 5 fields,
>
> time(date), open(float8), high(float8), low(float8), close(float8)
>
> Is it possible to create a view of weekly data, i.e. open is the first
> day'open, high is the highest of the week, low is the lowest of the week,
> close
> is the last day's close.
low and high are easy, they're just min() and max(). so you would get
something like:
select date_trunc('week', time) as startofweek,
min(low) as weeklylow,
max(high) as weeklyhigh
from dailydata
group by date_trunc('week', time)
Unfortunately, showing the open and close is much, much harder. To get them
efficiently requires the OLAP (window function) feature set, which Postgres
doesn't have and isn't likely to get soon.
In Postgres 8.1 (and 8.0?) you could actually write some custom aggregate
functions using RECORD data type to store the earliest and latest time found
so far and the corresponding open and close to get them efficiently. Maybe
someone else would be able to show how to do that, I haven't tried it yet.
The only way to do it in standardish SQL would be with terribly inefficient
subqueries:
select date_trunc('week', time) as startofweek,
min(low) as weeklylow,
max(high) as weeklyhigh,
(select d.open
from dailydata as d
where date_trunc('week', d.time) = date_trunc('week', dailydata.time)
order by d.time asc
limit 1
) as weeklyopen,
(select d.close
from dailydata as d
where date_trunc('week', d.time) = date_trunc('week', dailydata.time)
order by d.time desc
limit 1
) as weeklyclose
from dailydata
group by date_trunc('week', time)
--
greg
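The aggregation the view is after can be stated compactly in plain Python, which also makes the first-open/last-close rule explicit: group daily bars by week, take the first bar's open, the max high, the min low, and the last bar's close. Toy numbers below are invented; the field layout follows the post:

```python
from datetime import date, timedelta

daily = [  # (time, open, high, low, close)
    (date(2006, 2, 6),  10.0, 12.0,  9.5, 11.0),   # Monday
    (date(2006, 2, 7),  11.0, 13.0, 10.5, 12.5),
    (date(2006, 2, 8),  12.5, 12.8, 11.0, 11.5),
    (date(2006, 2, 13), 11.5, 14.0, 11.2, 13.9),   # next Monday
]

def week_start(d):
    # Monday of d's week, analogous to date_trunc('week', ...)
    return d - timedelta(days=d.weekday())

weekly = {}
for t, o, h, l, c in sorted(daily):          # sorted => first/last bar order
    w = week_start(t)
    if w not in weekly:
        weekly[w] = [o, h, l, c]             # first bar seeds open/high/low/close
    else:
        bar = weekly[w]
        bar[1] = max(bar[1], h)              # weekly high
        bar[2] = min(bar[2], l)              # weekly low
        bar[3] = c                           # last bar's close wins
```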
Re: [SQL] Tough Problem -- Record Checkouts
"Alfred" <[EMAIL PROTECTED]> writes: > CO| N > --+--- > 0 | 15-59 > 1 | 0, 16-59 > 2 | 0-1, 17-59 > 15| 0-14, 30-59 > 16| 0-15, 31-59 > 30| 0-29, 45-59 > 31| 0-30, 46-59 > 45| 0-44 > 46| 1-45 > 59| 14-58 > > This becomes some kind of UPDATE statement with a complex WHERE clause. That's usually a sign you're storing the data in the wrong form. Why are you storing just the minutes and not the absolute timestamp of the checkout? In any case the where clause isn't all that complex. I think you've confused yourself by drawing the inverse table of what you need. You need the range of CO to expire for a given N, not the range of N in which you should expire a given CO. WHERE co BETWEEN 0 AND n-15 OR co BETWEEN n+1 AND n+45 -- greg ---(end of broadcast)--- TIP 4: Have you searched our list archives? http://archives.postgresql.org
