Re: [SQL] update syntax error?
The query should have a comma instead of the word AND...
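For example, a sketch of the corrected statement (assuming actual NULLs are wanted -- note that 'NULL' in quotes is the four-character string, not a NULL value; the two spellings rawalternaternumber/rawalternatenumber are kept as posted):

  UPDATE av_parts
     SET rawalternaternumber = NULL,
         alternatepartnumber = NULL
   WHERE rawalternatenumber = 'IVA81A';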
HTH
mikeo
At 04:27 PM 8/20/00 -0300, The Hermit Hacker wrote:
>
>UPDATE SET field1=,field2=
>
>On Tue, 15 Aug 2000, k.c. hemelstrand wrote:
>
>> Can anybody help with why I am receiving the error below?
>>
>> Thanks
>> K.C.
>>
>>
>> parts=# UPDATE av_parts
>> parts-# SET rawalternaternumber = 'NULL' AND alternatepartnumber = 'NULL'
>> parts-# WHERE rawalternatenumber = 'IVA81A';
>> ERROR: left-hand side of AND is type 'unknown', not 'bool'
>>
>
>Marc G. Fournier                   ICQ#7615664               IRC Nick: Scrappy
>Systems Administrator @ hub.org
>primary: [EMAIL PROTECTED]         secondary: scrappy@{freebsd|postgresql}.org
>
[SQL] Re: Beginner problems with functions
On Thu, 17 Aug 2000, Stephan Szabo wrote:
> What you may need to do is declare a variable of type record
> and do SELECT INTO * From ... rather than just
> the SELECT.
Thanks, that worked.
> Yeah, setof seems fairly weird. SETOF basetype if
> you do a SELECT FROM table seems to work though.
> I sort of expected that the ones in the regression test would
> either do something understandable or at least error if they
> are testing for brokenness.
Is there any kind of documentation on how to cope with this problem?
Let me explain my problem once more:
My servlets contain code like:
rs = stmt.executeQuery("stored_procedure arg1, arg2");
while ( rs.next() )
do_something(rs.getString("col1"), rs.getString("col2"),
rs.getString("col3"), rs.getString("col4") );
I have to decide:
1) Can I use PostgreSQL for such queries?
a) If yes, could someone give me a pointer to docs/examples/something else?
b) If no, please tell me so. That would let me switch to 2) or 3).
2) Backport the MS SQL server functions into plain SQL text inside
my servlets just to get them working. Not very clever, but should
work, hopefully.
3) Use another database server, could be Interbase but I would prefer
PostgreSQL.
Could somebody please help me decide? The choice strongly depends on the
SQL procedure problem.
Kind regards
Andreas.
Re: [SQL] 8K Limit, whats the best strategy?
Jan Wieck wrote:
> Poul L. Christiansen wrote:
> > I've just split the text up (in 8 pieces), so it fits into 8K rows. But
> > that's only a viable solution if your text is less than a couple of 100K's.
> >
> > You could try to be a daredevil and use the TOAST code, even if it's beta.
> > But I don't know how far the TOAST project has come.
>
> TOAST is finished and will be shipped with 7.1. It's not a
> solution for huge items, but medium sized text up to some
> hundred K works fine.

What do you mean by "..not a solution for huge items"? Does TOAST have a size
limit?

> Jan
>
> > Keith Wong wrote:
> > > Hi everyone,
> > >
> > > Just wondering what strategies people have used to get around the 8K row
> > > limit in Postgres.
> > > If anyone has been troubled by this limitation before and has a nice
> > > solution around it, I would love to hear it.
> > >
> > > My application is a web-based system which needs to store large amounts
> > > of text.
> > > The text stored needs to be searchable as well.
> > >
> > > Cheers,
> > > Keith.
>
> --
> #==#
> # It's easier to get forgiveness for being wrong than for being right. #
> # Let's break this rule - forgive me. #
> #== [EMAIL PROTECTED] #
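For reference, the row-splitting approach Poul describes above might look
something like this (an illustrative sketch only -- table and column names are
invented):

  create table doc_chunk (
      doc_id   integer,
      chunk_no integer,   -- 0..n, ordering of the pieces
      body     text,      -- each piece kept safely under the 8K row limit
      primary key (doc_id, chunk_no)
  );

  -- the client splits the text before INSERT and reassembles it with:
  -- SELECT body FROM doc_chunk WHERE doc_id = ... ORDER BY chunk_no;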
[SQL] Best way to create DML/DDL log?
I would like to create a log of all successful DML/DDL statements.

Has anyone accomplished this before? No need to reinvent the wheel ... If not,
I'm wondering what the best approach might be.

I have looked at the tcop/postgres.c source, thinking I might start by
modifying some of the -dX routines for generating debugging output.

I've also tried working a little with the -E option to the postgres backend
(passing it through postmaster as -o -E), but can't seem to redirect its
output.

Finally, another option might be to create a trigger, but I haven't really
explored this option.

One of the main requirements is that any function which returns a value
determined by the system environment must be written to the log as the
evaluated statement, not as the function itself. E.g. now() returns a text
string of the current datetime, which should be written to the log in place of
the literal character string 'now()'.

As far as modifying postgres.c goes, I'm fairly confident I can create a log of
the query buffer, but I don't yet have enough exposure to the source to find
where I should check whether or not a statement was successful, or from where
I might be able to pull the text of evaluated functions.

Any commentary on this would be much appreciated.

Thanks,
Bryan Ingram
Re: [SQL] Best way to create DML/DDL log?
I think there is either a switch to the psql client or a "\" command in the
session that allows you to log to a file. Even if you have to do some
redirection, it seems a lot better than modifying postgres.c (though I am
quite impressed that you would think about it...)

--- "Ingram, Bryan" <[EMAIL PROTECTED]> wrote:
> I would like to create a log of all successful DML/DDL statements.
>
> Has anyone accomplished this before? No need to reinvent the wheel ...
>
> If not, I'm wondering what the best approach might be.
>
> I have looked at the tcop/postgres.c source, thinking I might start by
> modifying some of the -dX routines for generating debugging output.
>
> I've also tried working a little with the -E option to the postgres backend
> (passing it through postmaster as -o -E), but can't seem to redirect its
> output.
>
> Finally, another option might be to create a trigger, but I haven't really
> explored this option.
>
> One of the main requirements is that any function which returns a value
> determined by the system environment must be written to the log as the
> evaluated statement, not as the function itself. E.g. now() returns a text
> string of the current datetime, which should be written to the log in place
> of the literal character string 'now()'.
>
> As far as modifying postgres.c goes, I'm fairly confident I can create a log
> of the query buffer, but I don't yet have enough exposure to the source to
> find where I should check whether or not a statement was successful, or from
> where I might be able to pull the text of evaluated functions.
>
> Any commentary on this would be much appreciated.
>
> Thanks,
> Bryan Ingram
Re: [SQL] Re: Beginner problems with functions
On Mon, 21 Aug 2000, Andreas Tille wrote:
> On Thu, 17 Aug 2000, Stephan Szabo wrote:
>
> > What you may need to do is declare a variable of type record
> > and do SELECT INTO * From ... rather than just
> > the SELECT.
> Thanks, that worked.
>
> > Yeah, setof seems fairly weird. SETOF basetype if
> > you do a SELECT FROM table seems to work though.
> > I sort of expected that the ones in the regression test would
> > either do something understandable or at least error if they
> > are testing for brokenness.
> Is there any kind of documentation how to cope with that problem?
>
> I try to explain my problem once more:
>
> My servlets contain code like:
>
> rs = stmt.executeQuery("stored_procedure arg1, arg2");
> while ( rs.next() )
> do_something(rs.getString("col1"), rs.getString("col2"),
> rs.getString("col3"), rs.getString("col4") );
>
I haven't thought of an elegant way to do it, although you could
fake some of it with a table of the appropriate structure plus a sequence.
It's really ugly, but the plpgsql (tcl/perl/etc...) function gets the
next value of the sequence, inserts the results into a table along with
the sequence number, and returns the number to you.
So, it'd be something like
select stored_procedure(arg1, arg2);
(get the value into variable)
select * from table_sp_ where intval=
while (...)
do_something(...)
delete from table_sp_ where intval=
That might be safe insofar as the sequence number should stop
concurrent transactions from clobbering each other, but it requires
that you do the deletes manually, and that table will probably need to be
vacuumed fairly often.
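A rough, untested sketch of that idea (every name below is invented, and the
function body is just a placeholder for whatever the real procedure computes):

  create sequence sp_result_seq;

  create table table_sp_ (
      intval integer,   -- ties the result rows to one particular call
      col1   text,
      col2   text,
      col3   text,
      col4   text
  );

  create function stored_procedure (text, text) returns integer as '
  declare
      id integer;
  begin
      id := nextval(''sp_result_seq'');
      -- compute the result rows here; this insert only stands in for that
      insert into table_sp_ (intval, col1, col2, col3, col4)
          values (id, $1, $2, ''something'', ''something else'');
      return id;
  end;
  ' language 'plpgsql';

The servlet would then run select stored_procedure('arg1', 'arg2'), read the
returned number, select * from table_sp_ where intval = that number, and
delete those rows when done.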
[SQL] NULL function arguments?
It seems that when I provide a NULL argument to a PL/pgSQL function
it makes the rest of the arguments NULL, too!
Consider this function:
CREATE FUNCTION callme(text, text) RETURNS boolean AS
'
BEGIN
RAISE NOTICE ''$1: %'', $1;
RAISE NOTICE ''$2: %'', $2;
RETURN TRUE;
END;
' LANGUAGE 'plpgsql';
So that when I try SELECT callme('hello', 'world');
I get back:
NOTICE: $1: hello
NOTICE: $2: world
But when I do SELECT callme('hello', NULL);
I get back:
NOTICE: $1:
NOTICE: $2:
I'm using Postgres 7.0. Possible bug?
Mark
[SQL] Speed or configuration
The Hermit Hacker wrote: "use cut-n-paste please, and send us the results of
the EXPLAIN ... stuff like the cost estimates and whatnot tell us *so* much
..."

This is it:

exis=# \d pvdprcod
 NUMART | integer     | not null
 NUMDEP | smallint    | not null
 NUMPRO | smallint    | not null
 MODELO | varchar(20) | not null
 TALLA  | varchar(4)  | not null
 COLOR  | varchar(3)  | not null

exis=# \d venart
 cvetda   | smallint   | not null
 numdep   | smallint   | not null
 numart   | integer    | not null
 mes      | smallint   | not null
 anio     | integer    | not null
 tipotr   | varchar(2) |
 importe  | float8     |
 cantidad | float8     |

exis=# explain select cvetda,anio,mes,sum(importe),sum(cantidad)
exis-# from venart where numart in
exis-# (select "NUMART" from pvdprcod where "NUMDEP"=7 and "NUMPRO"=108)
exis-# group by cvetda,numdep,anio,mes;
NOTICE:  QUERY PLAN:

Aggregate  (cost=79015875401357.48..79015875413208.91 rows=79010 width=26)
  ->  Group  (cost=79015875401357.48..79015875409258.44 rows=790095 width=26)
        ->  Sort  (cost=79015875401357.48..79015875401357.48 rows=790095 width=26)
              ->  Seq Scan on venart  (cost=1.00..79015875283591.09 rows=790095 width=26)
              SubPlan
                ->  Materialize  (cost=17942.42..17942.42 rows=34 width=4)
                      ->  Seq Scan on pvdprcod  (cost=1.00..17942.42 rows=34 width=4)

EXPLAIN

Why Seq Scan if indexes have been created on the columns used for access?

Best regards,
Franz J Fortuny
[SQL] Upper case or lower case....
This statement:

  create table tabname (
      field1 integer,
      field2 integer
  );

creates a table with table name "tabname" -- lower case, or is case
irrelevant? Also, is field1 "field1" (all lower case), or is case irrelevant
there too?

The problem we are having is that the same queries, run against our commercial
database, create a table named "TABNAME" with fields "FIELD1" and "FIELD2".
If you use C++ Builder from Borland/Inprise, you will notice that the column
names come back in lower case when the origin is a PostgreSQL database, and in
UPPER CASE when the database is the commercial one we are using.

Is there a way to configure the behavior of PostgreSQL so that it will not
consider case relevant for anything that goes without quotes? All of the
commercial products return UPPER CASE (with the Borland/Inprise C++ 5 tool).
Only PostgreSQL returns lower case names for the columns.

This is a serious drawback for programmers. This code,

  qQuery1COLUMNSO->AsInteger...

is automatically created by the builder when the column names are retrieved
from the database, if the database is a commercial one. If it is PostgreSQL,
it returns this:

  qQuery1columnso->AsInteger...

As can easily be seen, this doesn't help when one intends to develop
applications that should run against any SQL server.

I have been trying to find a way to make PostgreSQL apply a policy of always
giving out upper case names, but I have not been able to find one. It is a
hassle to have to transform all the queries from

  select col,col2,col3 from tabname

to

  select "COL","COL2","COL3" from "TABNAME"

An unthinkable job. Any suggestions?

Franz J Fortuny
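A small, untested illustration of the folding rule being described (all names
here are arbitrary):

  create table TabName (Field1 integer);      -- stored as tabname / field1
  select field1 from tabname;                 -- works: unquoted names fold to lower case

  create table "TABNAME" ("FIELD1" integer);  -- case preserved exactly as written
  select "FIELD1" from "TABNAME";             -- quoting is then required everywhere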
Re: [SQL] Best way to create DML/DDL log?
Thanks Webb and Yury ...

Those are good ideas, but as it stands, I believe the debugging facility is
inadequate for my needs. I have several databases set up on one machine
(postgres calls this a "cluster") and I need the log to specify which
database, and ideally which user, each entry is for.

Another thing I will need is for the log to record some kind of session
identifier. I wouldn't want atomic transactions from another session to be
inserted into the middle of a begin/commit/rollback group without a way to
separate them.

By now you've probably figured out that I am hoping to create a redo log, for
the purpose of recovering a database in a crash scenario, and possibly for
creating a read-only standby database.

I think my best bet from here is to print out the postgres.c source and begin
to rummage through it. Several things I'm looking for, in case anyone can help
me out:

1) Build a query buffer which includes the results of functions such as now()
   and random() rather than the function name itself. (This might be the most
   complex part of the whole thing .. any comments?)

2) Stamp log entries with a session id and a unique sequential number.
   (Session ID information would later be used by the process which applies
   the logs to fork new client sessions. This would ensure data consistency by
   enabling the creation of a log-applier process which could create separate
   sessions and maintain the correct sequential order of statement execution.)

3) Determine whether the above two steps are sufficient to accurately recreate
   the state of a database if the logs are re-applied to a backup.

Thanks,
Bryan Ingram

-----Original Message-----
From: Yury Don [mailto:[EMAIL PROTECTED]]
Sent: Monday, August 21, 2000 11:29 AM
To: Ingram, Bryan
Subject: Re: [SQL] Best way to create DML/DDL log?

"Ingram, Bryan" wrote:
>
> I would like to create a log of all successful DML/DDL statements.
>
> Has anyone accomplished this before? No need to reinvent the wheel ...
> [...]
> Any commentary on this would be much appreciated.
>
> Thanks,
> Bryan Ingram

Hello Bryan,

You can make a log of all SQL queries by setting pgecho=on and pgdebug=2 (or
greater) in the postmaster init. You will get all queries, and much other
information, in postgresql.log; you can then select only the DDL/DML
statements using shell scripts (I have examples of such scripts and if you are
interested, I'll send them).

Unfortunately, it looks like that is not a solution for your main task,
because the queries will be logged in their original form, and functions like
now() will appear as functions, not as their values.

--
Sincerely yours, Yury
Re: [SQL] Creating sequences
This is what I wanted to do. When I can have a primary key that is meaningful
AND won't change, I would like to try to use it; otherwise I'd probably use an
OID.

So... I'm developing a database that will have a primary key like a "case
number" (like when you call for tech support). Say your call center has 3
products: word processor, spreadsheet, database. When you open a case for a
product, that product's code will be the beginning of the case number. Say,
word processor will be WP1, spreadsheet will be SS1, etc. (Each product would
be in its own table.) Each time a case is opened, it would increment to the
next number.

Granted, I could probably just code something where I set up a sequence and,
when I do an insert, concatenate the prefix to the number and then insert the
value, but I was curious whether there is a way to set the initial value to
something alphanumeric and have it increment from there.

Adam Lang
Systems Engineer
Rutgers Casualty Insurance Company

----- Original Message -----
From: "Gannon, Terry" <[EMAIL PROTECTED]>
To: "'Adam Lang'" <[EMAIL PROTECTED]>
Sent: Friday, August 18, 2000 4:15 PM
Subject: RE: [SQL] Creating sequences

> Adam -- I suggest that a way to do this would be to use a stored procedure
> that would read the sequence number associated with the table and build the
> variable as you require. You would then set up a trigger that would launch
> the stored procedure every time you insert a new record into the table. If
> that's a little foggy, then I would suggest that if you can part with the
> definition of the table, I could put together a little example to
> illustrate... regards...
>
> Terry Gannon
> [EMAIL PROTECTED]
>
> > -----Original Message-----
> > From: Adam Lang [SMTP:[EMAIL PROTECTED]]
> > Sent: Friday, August 18, 2000 12:25 PM
> > To: [EMAIL PROTECTED]
> > Subject: [SQL] Creating sequences
> >
> > Is it possible to have a sequence (or something like it) where it
> > increments alpha-numerically?
> >
> > Say the first value is set at A01 and it will increment to A02, A03 by
> > default. Also, it would be good if it could be made the primary key.
> >
> > Adam Lang
> > Systems Engineer
> > Rutgers Casualty Insurance Company
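A rough, untested sketch of the "set up a sequence and concatenate the prefix"
idea Adam mentions above (all names are invented; older versions may want an
explicit cast of the sequence value to text):

  create sequence wp_case_seq;

  create table wp_case (
      case_no varchar(16) primary key,   -- e.g. 'WP1', 'WP2', ...
      summary text
  );

  insert into wp_case (case_no, summary)
      values ('WP' || nextval('wp_case_seq'), 'sample word-processor case');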
Re: [SQL] 8K Limit, whats the best strategy?
Poul L. Christiansen wrote:
> Jan Wieck wrote:
> > Poul L. Christiansen wrote:
> > > I've just split the text up (in 8 pieces), so it fits into 8K rows. But
> > > that's only a viable solution if your text is less than a couple of
> > > 100K's.
> > >
> > > You could try to be a daredevil and use the TOAST code, even if it's
> > > beta. But I don't know how far the TOAST project has come.
> >
> > TOAST is finished and will be shipped with 7.1. It's not a
> > solution for huge items, but medium sized text up to some
> > hundred K works fine.
>
> What do you mean by "..not a solution for huge items"? Does TOAST have a
> size limit?

Not an explicit one. But imagine you really want to store an MP3 of, let's
say, 9MB in the database.

1. Your client application must quote it somehow to put it into an INSERT
   query string. The quoting makes it a 10MB thing (think positive).
2. The query is sent to the backend. Now you have this string a second time
   in memory.
3. The query is parsed and a 10MB text datum is built for the query tree's
   Const node.
4. The query is executed, building a 10MB tuple to store.
5. TOAST jumps in and moves it out of the tuple again.

So if the client and DB are running on the same system, you'll have the MP3
four times in memory. Now do it concurrently with 20 backends and you'll need
800MB+ ... you see why I said it's "not a solution for huge"?

Jan

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#== [EMAIL PROTECTED] #
[SQL] Copying data with triggers
Hi all,

In the database I'm designing, I want to have audit tables that keep a log of
all inserts, updates and deletes that occur on any table.

E.g. if I had a table Info,

create table Info (
    info_id   SERIAL,
    some_data text
);

I would also have a corresponding audit table:

create table AudInfo (
    aud_key_id         SERIAL,
    info_id            int4,
    some_data          text,
    aud_operation_type char(1),
    aud_log_time       timestamp default now()
);

Now I tried creating a trigger on Info, so that whenever an insert occurs, the
record is copied to the audit table:

create function tp_info () returns opaque as '
begin
    -- insert into audit table
    insert into AudInfo (info_id, some_data, aud_operation_type)
        values (new.info_id, new.some_data, ''i'');
    return new;
end;
' language 'plpgsql';

create trigger tp_info before insert on Info
    for each row execute procedure tp_info();

This doesn't work, however. A parse error occurs in the first line. I suspect
that I cannot insert a record into another table from within a trigger. I'm
not sure why, though.

Has anybody else done similar operations within a trigger procedure? Or does
anyone know of a workaround?

Cheers,
Keith.
[SQL] copy DELETES to audit table
I'd like to have a trigger that does something like serialize any row to a
text string, then copies it into an audit table before allowing the delete.
Having a restore function that can deserialize would be nice as well, but not
critical. The goal would be to allow a couple of people who need the ability
to remove rows to do so, but not to lose the information for a week or so, so
that the choice can be double-checked by someone else.

jason watkins
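A minimal per-table sketch of that idea (not the generic serialize-any-row
version; table and column names are invented, and it's untested):

  create table parts_deleted (
      deleted_at timestamp default now(),
      deleted_by text      default current_user,
      part_id    integer,          -- same columns as the source table
      descr      text
  );

  create function tp_parts_delete () returns opaque as '
  begin
      insert into parts_deleted (part_id, descr)
          values (old.part_id, old.descr);
      return old;   -- returning OLD allows the DELETE to go ahead
  end;
  ' language 'plpgsql';

  create trigger tp_parts_delete before delete on parts
      for each row execute procedure tp_parts_delete();

The audit rows could then be purged once the week-long review window has
passed.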
Re: [SQL] Copying data with triggers
Keith Wong <[EMAIL PROTECTED]> writes:
> create function tp_info () returns opaque as '
> begin
>     -- insert into audit table
>     insert into AudInfo (info_id, some_data, aud_operation_type)
>         values (new.info_id, new.some_data, ''i'');
>     return new;
> end;
> ' language 'plpgsql';

> create trigger tp_info before insert on Info
>     for each row execute procedure tp_info();

> This doesn't work however. A parse error occurs in the first line.
> I suspect that I cannot insert a record into another table in a trigger.

You certainly can --- I have apps that do it all the time. I suspect maybe
the plpgsql parser gets confused by SQL-style comments? Your example seems to
work fine if I leave out the comment.

regards, tom lane
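That is, something like this (the same function, with the SQL-style comment
simply removed) appears to go through:

  create function tp_info () returns opaque as '
  begin
      insert into AudInfo (info_id, some_data, aud_operation_type)
          values (new.info_id, new.some_data, ''i'');
      return new;
  end;
  ' language 'plpgsql';

  create trigger tp_info before insert on Info
      for each row execute procedure tp_info();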
Re: [SQL] 8K Limit, whats the best strategy?
Poul L. Christiansen wrote:
> I've just split the text up (in 8 pieces), so it fits into 8K rows. But
> that's only a viable solution if your text is less than a couple of 100K's.
>
> You could try to be a daredevil and use the TOAST code, even if it's beta.
> But I don't know how far the TOAST project has come.

TOAST is finished and will be shipped with 7.1. It's not a solution for huge
items, but medium sized text up to some hundred K works fine.

Jan

> Keith Wong wrote:
> > Hi everyone,
> >
> > Just wondering what strategies people have used to get around the 8K row
> > limit in Postgres.
> > If anyone has been troubled by this limitation before and has a nice
> > solution around it, I would love to hear it.
> >
> > My application is a web-based system which needs to store large amounts
> > of text.
> > The text stored needs to be searchable as well.
> >
> > Cheers,
> > Keith.

--
#==#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me. #
#== [EMAIL PROTECTED] #
RE: [GENERAL] Re: [SQL] database design and diagraming book recommendations..
> "Francisco Hernandez" <[EMAIL PROTECTED]> > > > anyone know of a good book or books on database modeling? > > like for entity relationship diagrams and such.. > > > To get a good life, go with Chris Date's rather difficult two > volume set. > What are the names of these two books? I just looked on Amazon.com, but didn't see anything that looked like a set. Thanks! >-dlj. > Cheers, Craig
