On Jun 13, 3:13 pm, Andrus [EMAIL PROTECTED] wrote:
(...)
As I understand it, the only way to optimize the statement
delete from firma1.rid where dokumnr not in (select dokumnr from
firma1.dok);
assuming that firma1.dok.dokumnr does not contain null values is to change
it to
CREATE TEMP TABLE
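An alternative worth benchmarking is an anti-join via NOT EXISTS, which the planner usually handles better than NOT IN; a minimal sketch, assuming dokumnr is indexed in both tables:

DELETE FROM firma1.rid AS r
WHERE NOT EXISTS (
    SELECT 1 FROM firma1.dok AS d WHERE d.dokumnr = r.dokumnr
);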
tom wrote:
I found a problem with my application which only occurs under
high loads (isn't that always the case?).
insert into tokens (token)
select [...]
This works 99% of the time.
But every once in a long while it seems that I hit simultaneous
execute() statements that
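A sketch of the usual guard for this era of PostgreSQL (no INSERT ... ON CONFLICT yet); the tokens/token names come from the post, the literal is hypothetical, and note that without a UNIQUE constraint plus a retry on error, two sessions can still race between the check and the insert:

INSERT INTO tokens (token)
SELECT 'sometoken'
WHERE NOT EXISTS (SELECT 1 FROM tokens WHERE token = 'sometoken');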
Hi,
Is there any way to do tree traversal with only one SQL query (no
procedure/function)?
CREATE TABLE foo (
node INTEGER PRIMARY KEY,
parent INTEGER REFERENCES foo(node)
);
Ideally the output would be a depth-first tree traversal starting at root
(marked by parent == node, for example.)
Obviously,
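On PostgreSQL 8.4 and later a recursive CTE does this in a single statement; a sketch against the foo table above, where sorting by the accumulated path array yields a depth-first (preorder) walk:

WITH RECURSIVE tree AS (
    -- roots are marked by parent = node
    SELECT node, parent, ARRAY[node] AS path
    FROM foo
    WHERE node = parent
  UNION ALL
    -- attach each child, extending its ancestor path
    SELECT f.node, f.parent, t.path || f.node
    FROM foo AS f
    JOIN tree AS t ON f.parent = t.node
    WHERE f.node <> f.parent
)
SELECT node, parent FROM tree ORDER BY path;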
Hi,
I want to tighten down my db schema as much as possible against accidental
corruption. For this, I'd like to have fields that can only be inserted and
not later changed (think some sort of id, account names, etc., which are
often not only stored in the database but also in external places
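One way to get such insert-only fields is a BEFORE UPDATE trigger that rejects any change to the protected column; a minimal sketch, with accounts/account_name as hypothetical names:

CREATE OR REPLACE FUNCTION reject_account_name_change() RETURNS trigger AS $$
BEGIN
    IF NEW.account_name IS DISTINCT FROM OLD.account_name THEN
        RAISE EXCEPTION 'account_name may not be changed';
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER accounts_name_immutable
BEFORE UPDATE ON accounts
FOR EACH ROW EXECUTE PROCEDURE reject_account_name_change();

IS DISTINCT FROM is used so NULLs compare sanely.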
On Fri, 15.06.2007 at 10:24:26 +0200, Adrian von Bidder wrote the following:
Hi,
I want to tighten down my db schema as much as possible against accidental
corruption. For this, I'd like to have fields that can only be inserted and
not later changed (think some sort of id, account names,
Adrian von Bidder wrote:
Hi,
Is there any way to do tree traversal with only one SQL query (no
procedure/function)?
CREATE TABLE foo (
node INTEGER PRIMARY KEY,
parent INTEGER REFERENCES foo(node)
);
Ideally the output would be a depth-first tree traversal starting at root
(marked by
Francisco Reyes [EMAIL PROTECTED] writes:
There is no ulimit -a in csh, which is what I use.
I guessed this may be a bash setting, so I tried that.
The output of ulimit -a is:
The csh equivalent is just limit.
core file size (blocks, -c) unlimited
data seg size
The insert is deadlocking against the update/delete.
The problem is that the insert has to lock the records to be sure they aren't
deleted. This prevents the update from updating them. But the update has
already updated some other records which the insert hasn't referred to yet.
When the insert
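The usual way to break that interleaving is to make every transaction acquire its row locks in the same key order before modifying anything; a hedged sketch reusing the poster's tokens table (the literals are hypothetical):

BEGIN;
-- lock the rows in a deterministic order, so two sessions can
-- never hold them in opposite orders
SELECT 1 FROM tokens
WHERE token IN ('a', 'b', 'c')
ORDER BY token
FOR UPDATE;
-- ... do the inserts/updates here ...
COMMIT;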
On Thu, 14 Jun 2007, Lza wrote:
Does anyone have any suggestions on how to store historical
information in databases?
-- I have tables:
create table history_columns (
column_id smallint primary key,
column_name varchar(63) not null,
table_name varchar(63) not null,
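To go with that metadata, the usual companion is a row-level trigger that copies the old row into a history table; a minimal sketch, with policy/policy_history as hypothetical names:

CREATE TABLE policy_history (
    LIKE policy,
    changed_at timestamp NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION policy_audit() RETURNS trigger AS $$
BEGIN
    -- archive the row as it looked before the change
    INSERT INTO policy_history SELECT OLD.*, now();
    IF TG_OP = 'DELETE' THEN
        RETURN OLD;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER policy_audit_trg
BEFORE UPDATE OR DELETE ON policy
FOR EACH ROW EXECUTE PROCEDURE policy_audit();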
Gregory Stark wrote:
The insert is deadlocking against the update/delete.
The problem is that the insert has to lock the records to be sure they aren't
deleted. This prevents the update from updating them. But the update has
already updated some other records which the insert hasn't referred to
Tom Allison [EMAIL PROTECTED] writes:
The other approach would be to use an external file to queue these updates and
run them from a crontab. Something like:
...
and then run a job daily to read all these in to a hash (to make them unique
values) and then run one SQL statement at the end of
Tom Lane writes:
The memory dump only comes out on stderr (I think because of paranoia
about running out of memory while trying to report we're out of memory).
Can you get the postmaster's stderr output?
From stderr
pg_restore: restoring data for table message_attachments
pg_restore:
Gregory Stark writes:
You're right that your limit is above 128M but the error just means it tried
to allocate 128M and failed; it may already have allocated 400M and been
trying to go over the 524M mark.
My limit should be 1.6GB as per /boot/loader.conf
I ran a ps every 20 seconds to try
Francisco Reyes [EMAIL PROTECTED] writes:
Gregory Stark writes:
You're right that your limit is above 128M but the error just means it tried
to allocate 128M and failed; it may already have allocated 400M and been
trying to go over the 524M mark.
My limit should be 1.6GB as per
Lza writes:
I have a table in my database that holds information on policies and
this table is linked to a number of other tables. I need to be able to
keep a history of all changes to a policy over time. The other tables
There is a postgresql project which I just saw last night.
Don't recall
Gregory Stark writes:
I'm skeptical that you can trust ps's VSZ reporting at this level of detail.
On some platforms VSZ includes a proportionate share of its shared memory or
might not include memory allocated but not actually written to yet (due to
copy-on-write).
Understood. But at least
Francisco Reyes wrote:
Tom Lane writes:
The memory dump only comes out on stderr (I think because of paranoia
about running out of memory while trying to report we're out of memory).
Can you get the postmaster's stderr output?
From stderr
pg_restore: restoring data for table
On Fri, 15.06.2007 at 8:21:45 -0400, Francisco Reyes wrote the following:
Lza writes:
I have a table in my database that holds information on policies and
this table is linked to a number of other tables. I need to be able to
keep a history of all changes to a policy over time. The other
Francisco Reyes [EMAIL PROTECTED] writes:
If it is of any help, here is some information about the data.
It is hex64 encoded data. The original data getting encoded can be up to
100MB. Not sure how much bigger the hex encoding could be making it.
Hmm. I'm wondering about a memory leak in the input
Tom Lane writes:
Hmm. I'm wondering about a memory leak in the input converter. What
datatype exactly are the wide column(s)?
Text.
Also, do you have any ON INSERT triggers on this table?
No.
Alvaro Herrera writes:
This is pg_restore's stderr. What Tom wants to see is postmaster's. It
is probably redirected (hopefully to a file, but regrettably it is
common to see it go to /dev/null) in the init script that starts the
service.
How would I run it manually?
When I do pg_ctl start,
While trying to debug an out of memory error I am trying to restore only
certain files.
pg_restore is trying to restore everything even though I am telling it,
via a file, which tables to restore.
The parameters I am trying are:
pg_restore -U pgsql -L Load-list.txt -v -d backaway_copy
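For what it's worth, -L expects an edited copy of the archive's own table of contents rather than a bare list of table names; a sketch of the workflow, with backup.dump as a hypothetical archive name:

pg_restore -l backup.dump > Load-list.txt
(edit Load-list.txt, commenting out unwanted entries with a leading ';')
pg_restore -U pgsql -L Load-list.txt -v -d backaway_copy backup.dump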
Mike Gould writes:
After a user is authenticated on a connection, if this setting is set,
then the database automatically triggers this procedure to be run.
Don't know about triggers, but you can do SET commands on a per-user
basis.
ALTER USER name SET configuration_parameter { TO | =
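A minimal example of that per-user form (app_user and the setting are hypothetical):

ALTER USER app_user SET search_path = myschema, public;
-- takes effect at the start of each new session for app_user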
Tom Allison wrote:
Terry Fielder wrote:
My 2 cents:
I used to get a lot of these sharelock problems.
Users using different records, but same tables in different order.
(apparently 7.x was not as good as 8.x at row level locking)
I was advised to upgrade from 7.x to 8.x
I did, and all
All,
As the subject says, we are converting from Sybase's SQL Anywhere to PostgreSQL
8.2.4. One of the features in ASA is a SET OPTION
PUBLIC.login_procedure='DBA.login_check'.
After a user is authenticated on a connection, if this setting is set, then the
database automatically triggers this
Alvaro Herrera writes:
This is pg_restore's stderr. What Tom wants to see is postmaster's. It
is probably redirected (hopefully to a file, but regrettably it is
common to see it go to /dev/null) in the init script that starts the
pg_ctl -l file didn't work.
Trying now with changes to
I promised that I will get back to the group with the reason. Well, of course
it was a query :). I use a file-system-based search engine (Lucene) that will
take any desired entity saved into the database, find the primary keys, and
then do a select * from entity where id in (:ids). If I get
Francisco Reyes [EMAIL PROTECTED] writes:
Mike Gould writes:
After a user is authenticated on a connection, if this setting is set,
then the database automatically triggers this procedure to be run.
Don't know about triggers, but you can do SET commands on a per-user
basis.
psql's
Tom Lane writes:
Can we see the context-sizes dump that should've come out right before
that in the log?
Hope this is what you are looking for.
Included a few lines before the error in case that is of any help.
These lines are from the postgresql log. Redirected stderr to a file.
I am not even sure this is a problem.
INFO: data_import_customer2: scanned 3000 of 5033 pages, containing 84905
live rows and 0 dead rows; 3000 rows in sample, 142442 estimated total rows
Do I have some setting wrong?
Walter Vaughan writes:
I am not even sure this is a problem.
INFO: data_import_customer2: scanned 3000 of 5033 pages, containing 84905
live rows and 0 dead rows; 3000 rows in sample, 142442 estimated total rows
I don't believe it is a problem.
I think it is just normal output of the vacuum.
Francisco Reyes [EMAIL PROTECTED] writes:
Tom Lane writes:
Can we see the context-sizes dump that should've come out right before
that in the log?
Hope this is what you are looking for.
Included a few lines before the error in case that is of any help.
Well, that's the last few lines of
I promised that I will get back to the group with the reason. Well, of
course it was a query :). I use a file-system-based search engine
(Lucene) that will take any desired entity saved into the database,
find the primary keys, and then do a select * from entity where id
in (:ids). If
Francisco Reyes wrote:
Walter Vaughan writes:
I am not even sure this is a problem.
INFO: data_import_customer2: scanned 3000 of 5033 pages, containing
84905 live rows and 0 dead rows; 3000 rows in sample, 142442 estimated
total rows
I don't believe it is a problem.
I think it is just
I'd like to dynamically generate a SQL script that pulls together complete
DDL (CREATE, ALTER, etc.) for specific objects (especially constraints) in
my PostgreSQL 8 database. I want to use this for our development project's
schema migration process, which involves dumping the data, making schema
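For the constraints in particular, the system catalogs can emit the DDL directly; a sketch using the built-in pg_get_constraintdef, with mytable as a placeholder:

SELECT conname,
       pg_get_constraintdef(oid) AS ddl
FROM pg_constraint
WHERE conrelid = 'mytable'::regclass;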
Walter Vaughan writes:
Well, why did it only scan 3000 pages out of 5033? It decided it didn't need to?
It took a sample and decided it was gonna be okay?
I don't know. Hopefully others with more knowledge can answer.
I think it could be because the pages did not need to be looked at.
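For context: ANALYZE deliberately reads only a bounded random sample of pages and extrapolates the total row count from it; the 3000-row sample here matches 300 times the default statistics target of 10. A sketch of making it sample more, if the estimates seem off:

SET default_statistics_target = 100;
ANALYZE VERBOSE data_import_customer2;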
Walter Vaughan wrote:
Francisco Reyes wrote:
Walter Vaughan writes:
I am not even sure this is a problem.
INFO: data_import_customer2: scanned 3000 of 5033 pages, containing
84905 live rows and 0 dead rows; 3000 rows in sample, 142442 estimated
total rows
I don't believe it is a
Thanks for replying! I will try to upgrade to the latest on my prod box. Any
easy way to do that? Can I have two releases at the same time, 7.4.7 and 8.2.3?
This is a phased approach so it may take a little while to do it, and I want
to keep 7.4.7 until I'd be able to switch to the 8.2.3 (on a
I accidentally formatted a string for tsearch before trying to cast it
to a date, and it worked!
select 'June152007'::date
date
2007-06-15
(1 row)
Is this a happy accident, or is it OK to count on it continuing to
work this way?
Thanks,
Ian
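The loose input parser happens to accept that string, but an explicit format via to_date is the dependable route; a small example:

SELECT to_date('June 15 2007', 'Month DD YYYY');
-- spells out the expected layout instead of relying on the
-- general datetime parser's heuristics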
Tom Lane writes:
Well, that's the last few lines of what I wanted, but not the part that
was interesting :-(. Please show all the lines that are like the n
total in m blocks format. Or at least look for the one(s) that contain
large numbers...
Since you know best what you are looking for I
Hello, people, I have a question for you: is it possible to return more
than one recordset or cursor from a function? I used to do it in MS SQL
Server, but it is a mystery for me in Postgres. Thanks
On Jun 15, 1:56 pm, [EMAIL PROTECTED] (Ian Harding) wrote:
I accidentally formatted a string for tsearch before trying to cast it
to a date, and it worked!
select 'June152007'::date
date
2007-06-15
(1 row)
Is this a happy accident, or is it OK to count on it continuing
guillermo arias wrote:
Hello, people, I have a question for you:
is it possible to return more than one recordset or cursor from a function?
I used to do it in MS SQL Server, but it is a mystery for me in Postgres.
Start here:
http://www.postgresql.org/docs/8.2/static/xfunc-sql.html#AEN36437
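For the multiple-resultset case specifically, a PL/pgSQL function can hand back several refcursors; a minimal sketch, with t1/t2 as hypothetical tables:

CREATE FUNCTION two_sets() RETURNS SETOF refcursor AS $$
DECLARE
    c1 refcursor := 'c1';
    c2 refcursor := 'c2';
BEGIN
    OPEN c1 FOR SELECT * FROM t1;
    RETURN NEXT c1;
    OPEN c2 FOR SELECT * FROM t2;
    RETURN NEXT c2;
END;
$$ LANGUAGE plpgsql;

-- fetch both sets inside one transaction:
BEGIN;
SELECT two_sets();
FETCH ALL FROM c1;
FETCH ALL FROM c2;
COMMIT;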
Gary Fu wrote:
hello,
I am trying to allocate a chunk of ids from a sequence with the following
proc. However, if I don't use the 'lock lock_table', the proc may not
work when it is run at the same time by different psql sessions. Is
there a better way without using the 'lock lock_table'?
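One lock-free approach is to give the sequence an increment equal to the chunk size, so a single nextval() atomically reserves a whole block; a sketch assuming chunks of 100:

CREATE SEQUENCE id_block_seq INCREMENT BY 100 START WITH 100;

-- each call reserves the ids (result - 99) .. result; nextval is
-- atomic, so concurrent sessions never get overlapping blocks
SELECT nextval('id_block_seq');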
Francisco Reyes [EMAIL PROTECTED] writes:
Tom Lane writes:
Well, that's the last few lines of what I wanted, but not the part that
was interesting :-(. Please show all the lines that are like the n
total in m blocks format. Or at least look for the one(s) that contain
large numbers...
On Jun 15, 2007, at 2:31 PM, Mark Soper wrote:
I’d like to dynamically generate a SQL script that pulls together
complete DDL (CREATE, ALTER, etc.) for specific objects (especially
constraints) in my PostgreSQL 8 database. I want to use this for
our development project’s schema migration
Tom Lane writes:
thought it was all one long line and tried to read it all in at once.
What's the history of your datafile --- has it maybe gotten passed
through a Windows newline conversion?
The database has emails.
The table in question holds the attachments. Each row represents one
Gregory Stark wrote:
Tom Allison [EMAIL PROTECTED] writes:
The other approach would be to use an external file to queue these updates and
run them from a crontab. Something like:
and then run a job daily to read all these in to a hash (to make them unique
values) and then run one SQL