The PGCluster website states:
"...running one Cluster DB node in a different geographical location is
not what PGCluster was built for..."
Does anyone have an idea? We cannot possibly be the only company needing
to do this.
Thanks,
Matthew
---(end of broadcast)---
TIP 1: if posting/reading through Usenet, please send an appropriate
subscribe-nomail command to [EMAIL PROTECTED] so that your
message can get through to the mailing list cleanly
alize this and we do identify our columns during select
statements, but when you look at a table using a tool like phpPGAdmin or
pgAdmin3, the columns are displayed in some defined order. It's much
easier to see your data/columns in some logical order (ie: all the cost
columns next to each ot
no WARNINGs.
A point in the right direction would be appreciated.
Thanks,
Matthew
Tom Lane wrote:
Although ... it's true that there seem to be very few apps relying on
case sensitivity per se, ie, expecting "Foo" and "foo" to be different.
The complaints that I can remember were about programs that expected
"FOO" and FOO (not quoted) to be the same. So always-smash-to-lower-
Try this link to OpenOffice and its postgres-sdbc-driver.
http://dba.openoffice.org/drivers/postgresql/
I have installed OO on Red Hat and tested that I can connect to Postgres -
which it does without any dramas, but I haven't done much more than that.
So I am not sure as to what the limit is on the
I see that Postgres supports win32, but all I see mentioned is NT. Does
Postgres run on Win 9x? I don't want this as a production environment,
but need it for development.
Matt
I wouldn't recommend pg_upgrade as 6.3 is quite old, pg_dump should be a
better option.
Since you are having trouble with pg_dump as well, we would probably need some
actual errors to be able to help. One suggestion is to dump everything out
with proper inserts (use the -d or -D option for this).
Yes, it sounds like a persistent connection problem with PHP. I know they
help by avoiding Postgres connection overhead, but we have had problems with them.
> -Original Message-
> From: Gunnar R|nning [SMTP:[EMAIL PROTECTED]]
> Sent: Thursday, October 05, 2000 8:35 AM
> To: Nikolay Mijaylov
> Cc:
When are the 7.0.3 RPMs expected to be released? Most of my production
servers use the RPM install and I would like to keep it that way, however I
would also like to move up to 7.0.3 some time soon.
Just wondering...
We run RH6.2 for production, but I have a few RH7.0 boxes that I would like
to upgrade soon also
> -Original Message-
> From: Lamar Owen [SMTP:[EMAIL PROTECTED]]
> Sent: Monday, November 13, 2000 11:57 AM
> To: Matthew
> Cc: pgsql-general
> Subject: Re: [G
I just checked the FTP site and found the 7.0.3 RPMs. Thanks much... I
will try them out tonight.
> -Original Message-
> From: Matthew [SMTP:[EMAIL PROTECTED]]
> Sent: Monday, November 13, 2000 1:08 PM
> To: 'Lamar Owen'
> Cc: pgsql-general
> Subject:
If you do a:
/usr/sbin/lsof |wc
What is the output? First number from wc is the number of lines, which in
this case translates to the number of open file descriptors. If this is
close to your limit, then bump your limit.
> -Original Message-
> From: Warren Vanichuk [SMTP:[EMAIL PR
a compatibility module for mSQL.
>
[Matthew] Would it make sense for Postgres to have a MySQL
compatibility module? An add-on package (perhaps in contrib) that would add
many of the functions that MySQL has that Postgres does not. I know this
couldn't cover everything, but it could probably ma
> "Brett W. McCoy" <[EMAIL PROTECTED]> writes:
> > On Sun, 10 Dec 2000, Matthew wrote:
> >> [Matthew] Would it make sense for postgre to have a mysql
> >> compatibility module? An add on package (perhaps in contrib) that
> >> would add many o
Depends on your needs. Typically I would always break out the addresses
into another table, since it's much more flexible. Creating four separate
address fields in the user table will reduce the need to perform joins and
thus perhaps make your schema a little simpler, and your queries a little
I have seen this before myself, and I agree it is less than optimal. I
would think that it's a place where referential integrity should be used:
either don't allow the user to be dropped until all permissions are removed,
or automatically remove all permissions (a cascade-delete type of thing).
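For what it's worth, later releases grew something close to the cascade behaviour described above; a hedged sketch (role name hypothetical, PostgreSQL 8.2+):

```sql
-- DROP OWNED drops objects the role owns and revokes any privileges
-- granted to it, clearing the way for DROP USER.
DROP OWNED BY someuser;
DROP USER someuser;
```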
Has anyone submitted anything?
> -Original Message-
> From: Bruce Momjian [SMTP:[EMAIL PROTECTED]]
> Sent: Tuesday, February 13, 2001 2:37 PM
> To: PostgreSQL-announce; PostgreSQL-general
> Subject: [GENERAL] O'Reilly Conference Proposals
>
> There is less than one week left to su
Pardon my ignorance on this topic, but if you compile using MS VC++ do you
need Cygwin at all? Does this allow you to run Postgres without Cygwin
installed?
> -Original Message-
> From: Brent R. Matzelle [SMTP:[EMAIL PROTECTED]]
> Sent: Monday, March 19, 2001 1:08 PM
> To: [EMAIL PROTE
They are hoping to put out Release Candidate (RC) 1 in the next few days. We
will see after that; hopefully we will see a final 7.1 soon, but my guess
at this point is not till the first week of April.
> -Original Message-
> From: [EMAIL PROTECTED] [SMTP:[EMAIL PROTECTED]]
> Sent: Tuesday,
> I receive this error when trying to pg_dump -s
>
> failed sanity check, table answers was not found
>
> the failed table name varies. All the problematic tables
> seem to work. vacuumdb (-z) also works without any comment.
>
I had some similar problems with pg_dump on 7.0.3 rec
Hello, I am new to the world of PostgreSQL. I recently installed the
PostgreSQL database and tried to compile one of the example C programs
from the web site. I fixed a few typos and then it compiled, but when it
goes to do the linking I get an error complaining about an undefined
reference to crypt.
I am using RH5.2 / Postgres 6.3.2. I need a query that has a HAVING
clause. In 6.3.2 it says that HAVING is not supported yet. I looked
at the changes in 6.4 and it appears that 6.4.2 supports this. Is this
true? Also, I have found 6.4.2 RPMs but no data RPM, so now when I try to
use psql it say
I may be missing something simple here, but I have an attribute in a
table that was created as NOT NULL. I now need to remove this
constraint. I don't see a proper way to do this. I think I can update
the pg_attribute table and change attnotnull from true to false. Is
this an acceptable soluti
You can set log_min_duration_statement = 0 to log every statement.
On Fri, Mar 13, 2015 at 3:38 AM, Sreerama Manoj wrote:
>
> Hi,
> As we know that "pg_stat_statements" will monitor the queries after
> normalizing the queries(Removes the values present in query). I want to
> know is there
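A minimal sketch of the suggestion above (superuser required; ALTER SYSTEM is 9.4+):

```sql
-- Log every statement, regardless of how long it runs
ALTER SYSTEM SET log_min_duration_statement = 0;
SELECT pg_reload_conf();

-- Or just for the current session:
SET log_min_duration_statement = 0;
```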
I just ran into monitoring this and came up with the following one-liner for
monitoring this in releases < 9.6 through the SQL layer. Once you are at 9.6,
Jeff Janes' solution is correct.
It does make the assumption that the queue size is 8GB. It can misjudge the
queue usage by up to one file seg
without restarting the instance? (I am not yet sure when
that can happen but it would have to be scheduled [in advance] to occur during
an after-hours maintenance window.)
Thanks in advance,
Matt
--
Matthew Musgrove
Senior Software Engineer
Assurant Mortgage Solutions
817.788.4482
mmusgr...@emor
On 07/25/2016 03:20 PM, Tom Lane wrote:
Matthew Musgrove
<mailto:mmusgr...@emortgagelogic.com> writes:
One of our instances has been behaving -- oddly. Most queries are blazing fast.
It appears to just be some of the stat views that are slow.
It sounds like requests for stats updat
I would like a trigger on a materialized view. Is this possible? I have tried
both an INSTEAD OF and BEFORE trigger with no luck. It responds with:
"project_milestone_mv" is not a table or view.
It is absolutely present and spelled correctly. It is the same with or without
the schema qualificat
I was afraid of that. Any idea if it is a technical or resource limitation?
In other words, is such functionality impossible, undesirable, anticipated, or
in the works?
From: David G. Johnston [mailto:david.g.johns...@gmail.com]
Sent: Thursday, March 31, 2016 10:38 AM
To: Matthew Syphus
Cc
Tomasz Rakowski wrote:
I see the autovacuum process starting on average every 5 minutes
(the table contains 30,000 records, and the update rate is about 1000 records/min).
But what is strange is that the number of pages allocated to the table
is constant (at least for this hour: 500)
and the number of pa
Does Postgres have any native support for hierarchical data storage?
I'm familiar with the Adjacency List technique, but am trying to
determine whether or not Nested Sets would make sense for our
application or not. I understand that Nested Sets might be better
for high read applications,
On Jul 17, 2007, at 3:13 AM, Vince wrote:
When this fails, I get a useful error print:
$dbconn = pg_connect($connection_str);
Output:
Warning: pg_connect() [function.pg-connect]: Unable to connect to
PostgreSQL server: FATAL: password authentication failed for user
"postgres1" in ...
How do
I have just listened to Josh Berkus on FLOSS regarding postgresql. It was
suggested that postgresql has advantages over other databases when it comes
to reporting. Why is this?
List,
One of the reasons why I use Postgres is because you can insert data and
it will work or give you an error instead of converting, truncating,
etc. Well, I found a place where Postgres makes an erroneous
assumption, and I'm not sure this is by design.
When inserting a float such as 4.1232234
Maybe I'm just missing something but I can't seem to get pg_dump to output
copy statements. Regardless of the -d / --inserts flag it always outputs
insert statements. The doc says that pg_dump will output copy statements by
default and will only output insert statements with the -d / --inserts fl
All of them
On 9/26/07, Tom Lane <[EMAIL PROTECTED]> wrote:
>
> Jan de Visser <[EMAIL PROTECTED]> writes:
> > In my world two identical pilot errors within a short timeframe are
> indicat=
> > ive=20
> > of a bad interface.
>
> Yeah, it's inconsistent. How many people's dump scripts do you want t
I have been using postgresql for my web application for a while now and
it has been great.
I want to set up a separate emergency failover server and database in a
different data center.
In the event that my primary data center becomes inaccessible, I want to
update a DNS record and then redirect
With 8.2.4 is it possible to get Postgres to log incoming SQL
statements the same as they look when written? Instead of:
DEBUG: insert into foo (name) values ($1);
DETAIL: parameters: $1 = 'stan'
I'd like to see:
DEBUG: insert into foo (name) values ('stan');
This would be extremely helpful
I have a lot of code -- millions of lines at this point, written
over the last 5 years. Everything is in a bunch of nested folders.
At least once a week, I want to find some code that uses a few modules,
so I have to launch a find + grep at the top of the tree and then wait
for it to finish.
I w
The release notes seem to be in two places, with slightly different
information.
The page Google sends back for most 8.3 queries
http://www.postgresql.org/docs/8.3/static/release-8-3.html
and the one you get from the PostgreSQL beta program link
http://developer.postgresql.org/pgdocs/postgres/rel
I want to create an aggregate that will give the average velocity (sum of
distance traveled / sum of elapsed time) from position and timestamps.
example:
create table data(position integer, pos_time timestamp, trip_id integer);
insert into data values(1, 'time x', 1);
insert into data values(2,
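If position and pos_time both increase monotonically within a trip, the average velocity falls out of a plain GROUP BY with no custom aggregate; a sketch against the example table (units assumed to be position units per second):

```sql
-- Average velocity per trip = total distance / total elapsed seconds,
-- assuming position and pos_time only increase within a trip.
SELECT trip_id,
       (max(position) - min(position))::numeric
       / EXTRACT(EPOCH FROM (max(pos_time) - min(pos_time))) AS avg_velocity
FROM data
GROUP BY trip_id;
```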
before returning Thu Dec 06 21:04:39.952846
2007 CET
Total query runtime: 114250 ms.
2 rows retrieved.
Hope this information can help you give me a better idea on how I can
improve this query.
Thanks and regards
Matthew
On 12/11/07, Richard Broersma Jr <[EMAIL PROTECTED]> wrote:
>
> Is it possible to create aggregate functions using pl/pgsql?
Yes, the sfunc and ffunc can be functions written in plpgsql.
> If not possible in plpgsql, is there any other way to create these types of
> functions?
Yes, but I don't k
Those who make peaceful revolution impossible will make violent
> revolution inevitable.
> > -- John F Kennedy
>
What are good settings for shared_buffers and max_connections with
the above software combo? Out of the box I'm getting the "Cannot
allocate memory" message upon doing a 'make check'.
Thanks,
-M@
This may be a tad off topic, but thought a PG enthusiast might have
some insight on this. Apple Mail sits on top of Sqlite. I was
wondering if it would be possible to substitute Postgres as the
underlying database. I do know how to vacuum Sqlite to speed up
Mail, but with the massive amo
in 8.3beta3
create table t0(c1 int);
create table t1(c1 int);
insert into t0 values (1);
insert into t0 values (2);
insert into t0 values (3);
If I execute "delete from t0 returning *" it deletes the rows and returns
the deleted rows. I could insert all those rows into t1 by doing "insert
into
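This wasn't possible as a single statement in 8.3, but for readers on 9.1 or later a writable CTE does exactly this move; a sketch using the tables above:

```sql
-- Atomically move all rows from t0 to t1 (PostgreSQL 9.1+)
WITH moved AS (
    DELETE FROM t0
    RETURNING *
)
INSERT INTO t1
SELECT * FROM moved;
```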
I have a table MESSAGE with a text column and a timestamp column. I
want to make sure that I have no rows with duplicates for both values.
I have two ideas right now for how to do this.
IDEA 1:
CREATE UNIQUE INDEX nodups on MESSAGE (my_timestamp_col, my_text_col);
IDEA 2:
CREATE UNIQUE INDEX n
On Wed 02 Jan 2008 04:23:46 AM EST, Pavel Stehule wrote:
> Hello
>
> IDEA 3:
>
> Use two hash functions:
>
> CREATE UNIQUE INDEX uidx ON TEST((decode(md5(a),'hex')),(hashtext(a)));
>
> removing spaces helps
> CREATE UNIQUE INDEX uidx ON test((decode(md5(lower(replace(a,'
> ',''))),'hex')));
>
> Re
I'm in need of an aggregate hash function. Something like "select
md5_agg(someTextColumn) from (select someTextColumn from someTable order by
someOrderingColumn)". I know that there is an existing MD5 function, but it
is not an aggregate. I have thought about writing a "concat" aggregate
functio
On Jan 30, 2008 4:40 PM, Vyacheslav Kalinin <[EMAIL PROTECTED]> wrote:
> Most implementations of md5 internally consist of 3 functions: md5_init -
> which initializes internal context, md5_update - which accepts portions of
> data and processes them and md5_final - which finalizes the hash and
> r
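For readers on 9.0 or later, string_agg with an ORDER BY inside the aggregate call sidesteps writing a custom concat aggregate entirely; a hedged sketch reusing the names from the question:

```sql
-- MD5 over the ordered concatenation of a text column (PostgreSQL 9.0+).
-- Caveat: with an empty separator, ('ab','c') and ('a','bc') hash identically.
SELECT md5(string_agg(someTextColumn, '' ORDER BY someOrderingColumn))
FROM someTable;
```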
Even though Movable Type 5 has dropped official support for Postgres it still
comes with the drivers in order for it to work. I've gotten it setup and
working with Postgres, but there is one problem that keeps me from being able
to use it. That issue I've explained over on the MT forums:
http
I've posted a short guide on how to get Movable Type 5, Postgres 9, and Tomcat
7.0 working together.
http://www.greenskagitvalley.com/blog/movable-type-5-with-tomcat-70x-and-postgres-90.html
-M@
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subsc
Dave,
Does your application use temporary tables? This may explain high
autovacuum activity in the catalog tables.
With regards to your tables, I wrote this very script to give me an
indication of the amount of required activity on the user tables. I
deliberately keep this one simple and it doe
Dan,
It depends on your application. There is no point in creating an index with
the same 3 columns as the primary key (in the same order).
If you have an index on COL1, COL2 and COL3 (in that order), then if you have
a query such as SELECT COL1, COL2, COL3 FROM T1, the index will be
consider
"Apparently (after reading the documentation link provided by Tom) there can
be value to indexes when accessed with leading columns missing".
That is a new one on me too - interesting.
I suppose it comes down to testing at the end of the day - if you "set
enable_seqscan to false" and "EXPLAIN ANA
You can create a trigger on the table to store the old/new values in an
audit table if that is what you mean?
On Wed, Nov 17, 2010 at 2:45 PM, Gauthier, Dave wrote:
> How does one set pager off in interactive sql ?
>
> I tried \set pager off, doesn't seem to work.
>
>
>
> Thanks in Advance !
>
\pset pager off
Vangelis,
I don't believe you can do file level copying of single databases
(especially as they are different versions).
Take a look at pg_dump in the documentation. This will do what you need I
think but at a logical level rather than physical.
On Fri, Nov 19, 2010 at 11:52 AM, Vangelis Katsik
/19/2010 03:12 PM, Matthew Walden wrote:
>
>> Vangelis,
>>
>> I don't believe you can do file level copying of single databases
>> (especially as they are different versions).
>>
>
> Hi Matthew, thanks for your answer.
>
> If the different versions i
Jen,
Regarding the first point, is postgres actually running? You can check this
by typing the following at a terminal -
ps -ef|grep postgres
Reload configuration is used to tell PostgreSQL to read in the configuration
file for any changes since the database started. If you want to start the
d
Deepak,
Does your backup script exclude the pg_xlog directory? There is no point
backing up the WAL files but you will want to make sure the archived WAL
files are added to the backup set afterwards.
How many files are in pg_xlog at the point where you stop the backup? It
may actually be that i
I thought that when I first read about WAL archiving, but the documentation
explains it quite well. Basically it is to stop a successful result being
returned in the event that the file already exists in the archive
destination (causing an error in the event it tries to overwrite a file).
On Thu, No
> So the in-place update is a vital part of the program, because a range of
values
> of column col should be stored together on the disk. Is there any way to
do such
> a in-place update without generating much extra overhead?
>
Although in-place update is not possible, can he not use partitioning
I have a table like this:
create table event(
destination_id integer not null references destination
(destination_id),
starts timestamp,
ends timestamp
);
I want to make sure that no two rows **with the same destination_id**
overlap in time.
I'm not sure how to write this exclu
The following is a real critical problem that we ran into here at TripAdvisor
but have not yet figured out a clear way to mitigate.
TL;DR:
Streaming replicas—and by extension, base backups—can become dangerously broken
when the source and target machines run slightly different versions of glibc.
We are currently running with the en_US.UTF-8 collation. It was a decision
made long ago, and seeing as we never actually rely on the sort order of
internationalized strings (other than for stability, apparently), we have never
had any motivation to change this practice.
Some way of versioning
I’m in the process of installing mediawiki, and ran into this error while it
was setting up the database:
• Query:
CREATE FUNCTION page_deleted() RETURNS TRIGGER LANGUAGE plpgsql AS $mw$ BEGIN
DELETE FROM recentchanges WHERE rc_namespace = OLD.page_namespace AND rc_title
= OLD.page_tit
Would something like this work best, or is it better to use pgdump?
CREATE DATABASE newDatabase TEMPLATE oldDatabase ENCODING 'UTF8'
Does using a template do a file system copy or just SQL copy everything
over?
-Matt
Thanks,
How long did it take to complete?
On 15/10/14 06:39, Adrian Klaver wrote:
On 10/14/2014 10:31 AM, Matthew Chambers wrote:
Would something like this work best, or is it better to use pgdump?
CREATE DATABASE newDatabase TEMPLATE oldDatabase ENCODING 'UTF8'
I actually jus
The way I’ve solved the problem before 9.4 is to use a command called 'pv'
(pipe view). Normally this command is useful for seeing the rate of data flow
in a pipe, but it also does have a rate limiting capacity. The trick for me
was running the output of pg_basebackup through pv (emulates havi
Hi,
I can get postgres to log unsuccessful queries, including the user who
wrote them, but I'm missing how to get postgres to log the successful
queries too (I don't need a store of the answer, just the query
itself). How do I do this?
Thanks,
Matthew
--
Matthew Vernon
Quantitative
The configuration
parameter 'cursor_tuple_fraction' is having no observable effect.
This is being seen on Postgres 9.1 (Ubuntu x64), on a server with fast disks
and large amount of memory. Basic memory tuning has also been performed.
Thanks in advance, I appreciate any insights.
Kind
Thanks Tom,
That's very enlightening and I really appreciate you taking time to respond.
I've tried cursor_tuple_fraction values as low as 0.01 which by my
reckoning should be low enough and also 0 but the planner is still selecting
seqscan-and-sort.
Kind regards,
Matthe
Are there any plans to support \u0000 in JSONB and, relatedly, UTF code
point 0 in TEXT? To the best of my knowledge \u0000 is valid in JSON and
code point 0 is valid in UTF-8, but Postgres rejects both, which severely
limits its usefulness in many cases.
I am currently working around the issue by
arrays to the implementations of those types and keep backward
compatibility by leaving TEXT and JSONB alone.
Matt
On Wed, Jul 19, 2017 at 7:30 PM, Tom Lane wrote:
> Matthew Byrne writes:
> > Are there any plans to support \u in JSONB and, relatedly, UTF code
> > point 0
I see. Thanks for the quick responses!
On Wed, Jul 19, 2017 at 11:32 PM, Tom Lane wrote:
> Matthew Byrne writes:
> > Would a more feasible approach be to introduce new types (say, TEXT2 and
> > JSONB2 - or something better-sounding) which are the same as the old ones
> >
I have an employees table and one column in the employees table is
"supervisor_id" which is an FK to the id column.
I have employees from numerous companies all in the same table. I have
a column called company_id that indicates the company.
I want to make sure that an employee chooses a super
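One common way to enforce this is a composite foreign key, so the supervisor row must carry the same company_id; a sketch (column names taken from the description, constraint names hypothetical):

```sql
-- Make (id, company_id) a referencable key, then tie the supervisor to it
ALTER TABLE employees
  ADD CONSTRAINT employees_id_company_uniq UNIQUE (id, company_id);

ALTER TABLE employees
  ADD CONSTRAINT employees_supervisor_same_company
  FOREIGN KEY (supervisor_id, company_id)
  REFERENCES employees (id, company_id);
```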
Do SQL statements inside of plpgsql functions get planned upon every
execution, only when the function is first executed/defined, or something
else entirely?
For example, suppose I have a table foo and a function bar. Function bar
executes some SQL statements (select/insert/update) against table
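For context: in older releases plpgsql caches the plan of each static statement for the life of the session (since 9.2 the server may replan when parameter values warrant it), while dynamic SQL via EXECUTE is planned on every call. A sketch with hypothetical names:

```sql
CREATE OR REPLACE FUNCTION bar() RETURNS bigint AS $$
DECLARE
    n bigint;
BEGIN
    -- Static SQL: planned once and cached for the session (pre-9.2 behaviour)
    SELECT count(*) INTO n FROM foo;

    -- Dynamic SQL: parsed and planned on every execution
    EXECUTE 'SELECT count(*) FROM foo' INTO n;

    RETURN n;
END;
$$ LANGUAGE plpgsql;
```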
In PostgreSQL 8.3 lets say I have a table:
create table t(c1 int, c2 int, ts0 timestamp with time zone, ts1 timestamp
with time zone, data varchar);
an index:
create index t_c1_c2_ts0_idx on t using btree(c1, c2, ts0 desc);
and a function:
create function f(_ts timestamp(0) with time zone, _c1
ump into a new database. RTFM on pg_dump, especially the "-t" and "-T"
options.
Matthew
--
All of this sounds mildly turgid and messy and confusing... but what the
heck. That's what programming's all about, really
-- Comp
t;, and "COLUMN_VALUE". Perform joins as you see fit to build up the
structure at times. Use arrays if you'd like or perform multiple joins,
whichever.
Matthew Hartman
Programmer/Analyst
Information Management
Kingston General Hospital, ICP
(613) 544-2631 x4294
__
Is there a recommended approach when trying to use EXPLAIN on a
function? Specifically, a function that is more than the typical SELECT
statement or tiny loop. The one in question that I'm hoping to optimize
is around 250 lines.
Thanks,
Matthew Hartman
Programmer/Analyst
Information Manag
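One option (8.4+) is the contrib module auto_explain, which can log plans for statements executed inside functions; a hedged sketch:

```sql
-- Log the plan of every statement, including those nested inside functions
LOAD 'auto_explain';
SET auto_explain.log_min_duration = 0;
SET auto_explain.log_nested_statements = on;

SELECT my_250_line_function();  -- hypothetical; plans appear in the server log
```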
Hey, the RAISE NOTICE suggestion is a good idea; I use RAISE NOTICE
already for other uses, so I may as well go with it. Thanks.
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613) 549- x4294
-Original Message-
From: Merlin Moncure
Thanks! That'll reduce the amount of copy/pasting I have to do to figure
out the differences in times.
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613) 549- x4294
-Original Message-
From: Chris Spotts [mailto:rfu...@gmail.com]
Good morning.
On occasion I'll perform a full vacuum on a database but will still
receive a suggestion to vacuum an individual table immediately after.
Does the full database vacuum not handle each individual table?
Thanks,
Matthew Hartman
Programmer/Analyst
Information Management
Windows XP, PostgreSQL 8.3.5, using pgAdmin III 1.8.4. It's the typical
"Running vacuum on this table is recommended" dialog box.
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613) 549- x4294
-Original Message-
From: Grze
But it'll do so immediately after I run a full vacuum on the entire database?
Nothing has changed. This is a development box.
You know, I bet it doesn't refresh the view of the database after having run
the maintenance script..
Matthew Hartman
Programmer/Analyst
Information Manag
I do have autovacuum on (as of yesterday). This was discovered when I
ran vacuum on a whim.
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613) 549- x4294
-Original Message-
From: Merlin Moncure [mailto:mmonc...@gmail.com]
Sent
Good morning.
I am itching to upgrade my 8.3 development database to 8.4 before I move
to production. Pg_migrator is listed as beta so I'd like to avoid that.
Has anyone made the leap yet?
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613
Just create a unique constraint on all of the columns.
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613) 549- x4294
> -Original Message-
> From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
> ow...@postgresq
If it's static (i.e. the planets don't move too much, hah), calculate
and store. No sense in re-calculating it each and every time.
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
(613) 549- x4294
> -Original Message-
> Fro
Try this.
select idn,
       array_to_string(array(select code from tbl t2 where
                             t2.idn = t1.idn order by code), ', ') as codes
from tbl t1
group by idn
order by idn
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hos
s. If no such lock exists, one is created and returned to the
application. The lock is released at the end of the current task. The
advantage is that if anything crashes, there is a page in the
application that an administrator can delete any lock from, or see who
holds a lock on what from
ype t ON
> t.machine_type_id=h.machine_type_id
> WHERE h.part_id=379 AND h.machine_type_id=1
> WHERE t.machine_type_id=1
> GROUP BY t.name,j.workorder
> ORDER BY avgtime
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospita
nner to the WHERE clause. And unless you have duplicate rows
to eliminate, use UNION ALL rather than UNION for a speed increase.
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
TableAndCriteria
) as allTables
Group by part_number
Matthew Hartman
Programmer/Analyst
Information Management, ICP
Kingston General Hospital
Given the situation where a user connecting to the database needs access to two
separate schemas: the primary schema, which contains the data they will be
updating, and a second schema, which contains read-only reference data, used by
many users, that will be used in joins on queries.
I don't wan