Clemens Schwaighofer wrote:
I sometimes have a problem with conversion of encodings, e.g. from UTF-8
to ShiftJIS:
ERROR: character 0xf0a0aeb7 of encoding UTF8 has no
equivalent in SJIS
I have no idea what character this is, I cannot view it in my
browser, etc.
It translates to Unicode
On 04/22/2008 05:37 PM, Albe Laurenz wrote:
Clemens Schwaighofer wrote:
I sometimes have a problem with conversion of encodings, e.g. from UTF-8
to ShiftJIS:
ERROR: character 0xf0a0aeb7 of encoding UTF8 has no
equivalent in SJIS
I have no idea what character this is, I cannot view it in my
All,
In the past I have used foreign keys to lookup tables for small lists of
values that I now think ENUM could do the job of. I was hoping that by
using ENUM, I would avoid having to do joins in my queries, and that I'd
be making the data more robust and faster.
I used to have a table
Hi,
in my office we were discussing the various ways statement level read
consistency is implemented in different databases, namely Oracle and Postgres.
I am interested in the technical details on how PG determines that a block needs to be
read from some other place than the data block
On Tue, Apr 22, 2008 at 4:11 PM, Thomas Kellerer [EMAIL PROTECTED] wrote:
I am interested in the technical details on how PG determines that a block
needs to be read from some other place than the data block because
another transaction has updated the data block.
Postgres uses MVCC
On Tue, Apr 22, 2008 at 12:41:08PM +0200, Thomas Kellerer wrote:
I'm referring to the scenario that Tom Kyte describes here:
www.oracle.com/technology/oramag/oracle/05-nov/o65asktom.html
and how PG would detect that row 342,023 has been modified by a different
transaction (and thus reads the
Pavan Deolasee, 22.04.2008 12:57:
On Tue, Apr 22, 2008 at 4:11 PM, Thomas Kellerer [EMAIL PROTECTED] wrote:
I am interested in the technical details on how PG determines that a block
needs to be read from some other place than the data block because
another transaction has updated the
Christophe -
We use LiquiBase: http://www.liquibase.org/home
We don't use it quite the same as what you're proposing, but I think you
could use it that way. When we found it, we did 'Generate Changelog' to
start, then made new changes to the ChangeLog, and applied it to each
database as needed.
Thanks to all who responded. I now know why execute will help this problem,
but then it causes a new problem. The example I sent you was trivial,
trying to get to the bottom of the issue. What I'm really trying to do is get
past the restriction of execute to do SELECT INTO. That's why I created a
Pavan Deolasee, 22.04.2008 12:57:
On Tue, Apr 22, 2008 at 4:11 PM, Thomas Kellerer
[EMAIL PROTECTED] wrote:
I am interested in the technical details on how PG determines that a block
needs to be read from some other place than the data block because
another transaction has
D. Dante Lorenso wrote:
The problem is that once I create a column in my account table that
uses this 'account_type' datatype, I can't seem to change or add to it
any more. I want to add a new value or edit/delete an existing one.
How do you make changes to an ENUM datatype that is already
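At the time of this thread PostgreSQL had no built-in way to modify an existing ENUM. A common workaround (a sketch only; the type name 'account_type', its values, and the column 'acct_type' are invented for illustration) was to create a replacement type and swap the column over:

```sql
-- Build a new enum containing the extra value, then migrate the column to it.
CREATE TYPE account_type_new AS ENUM ('basic', 'premium', 'admin', 'trial');
ALTER TABLE account
    ALTER COLUMN acct_type TYPE account_type_new
    USING acct_type::text::account_type_new;  -- re-cast each value via text
DROP TYPE account_type;
ALTER TYPE account_type_new RENAME TO account_type;
```

On PostgreSQL 9.1 and later, ALTER TYPE account_type ADD VALUE 'trial' adds a value directly, though deleting or renaming values still requires a swap like the one above.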
Roberts, Jon, 22.04.2008 14:56:
As far as I can tell (from the PDF and your quote) Postgres uses a very
similar concept to Oracle.
Each transaction has a unique number and each tuple contains the
information for which transaction number it is visible.
Oracle moves the old row(s) to the
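The per-tuple transaction information described here can be inspected through PostgreSQL's hidden system columns (a sketch; 'some_table' is a placeholder for any real table):

```sql
-- xmin is the id of the inserting transaction; xmax the deleting/locking one
-- (0 while the row is live). ctid is the row's physical location in the heap.
SELECT xmin, xmax, ctid, *
FROM some_table
LIMIT 5;
```

Roughly, a tuple is visible to a snapshot when its xmin committed before the snapshot was taken and its xmax is either 0 or belongs to a transaction the snapshot does not yet see as committed.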
Roberts, Jon wrote:
Back to PostgreSQL, when a vacuum is issued, the old deleted rows are
removed. This can be done with the autovacuum feature or manually.
-Does vacuum make sure there are no long running queries referencing the
deleted rows before it attempts to remove the old rows?
Hello
On 22/04/2008, Kerri Reno [EMAIL PROTECTED] wrote:
Thanks to all who responded. I now know why execute will help this problem,
but then it causes a new problem. The example I sent you was trivial,
trying to get to the bottom of the issue. What I'm really trying to do is get
past the
Can you explain what you mean by the restriction to do SELECT INTO?
Why are you using a temp table to begin with?
Jon
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Kerri Reno
Sent: Tuesday, April 22, 2008 7:55 AM
To:
Back to PostgreSQL, when a vacuum is issued, the old deleted rows are
removed. This can be done with the autovacuum feature or manually.
-Does vacuum make sure there are no long running queries referencing the
deleted rows before it attempts to remove the old rows?
It does.
-With
http://www.postgresql.org/docs/8.2/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
says
SELECT INTO is not currently supported within EXECUTE.
I was using a temp table to get around the above problem.
On 4/22/08, Roberts, Jon [EMAIL PROTECTED] wrote:
Can you explain what you
Roberts, Jon wrote:
-With autovacuum, does it skip these rows still being referenced
in a transaction or does it wait?
It skips them, the idea being that a future vacuum will remove them.
Awesome. In a large data warehouse, the snapshot too old error is very
annoying and I'm glad
I tried that, too, and got a different error.
NOTICE: query: select salary_schedule, pay_column, step from saltab07 where
cp_id =32
ERROR: syntax error at or near $2
SQL state: 42601
Context: PL/pgSQL function get_salary_schedule line 15 at execute
statement
In the PG log file, it says:
On Tuesday 22 April 2008 6:26 am, Kerri Reno wrote:
http://www.postgresql.org/docs/8.2/static/plpgsql-statements.html#PLPGSQL-STATEMENTS-EXECUTING-DYN
says
SELECT INTO is not currently supported within EXECUTE.
In 8.2, EXECUTE ... INTO is supported.
The INTO clause specifies where the results of
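On 8.2 and later, the dynamic-SQL pattern that replaces SELECT INTO looks roughly like this (a sketch; the variable and table names are illustrative, not taken from the thread):

```sql
-- plpgsql fragment: use EXECUTE ... INTO instead of SELECT INTO
DECLARE
    v_schedule text;
BEGIN
    EXECUTE 'SELECT salary_schedule FROM ' || quote_ident(v_table)
            || ' WHERE cp_id = ' || v_cp_id
        INTO v_schedule;
END;
```

quote_ident guards the interpolated table name against quoting problems.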
I'll ask again in a different way.
What is the purpose of your dynamic sql and/or temp table? Don't tell me
anything about using select into.
What is the business purpose of the function? An appropriate answer would be
I'm trying to calculate x or I'm trying to determine y by looking
So the reason I'm getting the error is that I'm running it in 8.0. Thanks
so much for your help!
Kerri
On 4/22/08, Adrian Klaver [EMAIL PROTECTED] wrote:
On Tuesday 22 April 2008 6:26 am, Kerri Reno wrote:
http://www.postgresql.org/docs/8.2/static/plpgsql-statements.html#PLPGSQL-S
David Wilson wrote:
On Mon, Apr 21, 2008 at 7:55 PM, Joris Dobbelsteen
[EMAIL PROTECTED] wrote:
If you want to clean up the staging table I have some concerns about
the advisory lock. I think you mean exclusive table lock.
Either works, really. An advisory lock is really just a lock over
if you have a .sql file with COPY table FROM stdin;
and the table doesn't exist, psql prints a lot of errors from the lines
which contain only data for the copy.
to reproduce:
= echo -ne 'COPY non_existing_table FROM stdin;\n1\n\\N\n2\n\\.\n' | psql
ERROR: relation non_existing_table does not exist
invalid
On Apr 22, 2008, at 8:35 AM, Alvaro Herrera wrote:
Roberts, Jon wrote:
-With autovacuum, does it skip these rows still being referenced
in a transaction or does it wait?
It skips them, the idea being that a future vacuum will remove them.
Awesome. In a large data warehouse, the snapshot
On Tue, Apr 22, 2008 at 9:52 AM, Joris Dobbelsteen
[EMAIL PROTECTED] wrote:
Describe the mechanism, because I don't really believe it yet. I think you
need to do an advisory lock around every commit of every transaction that
writes to the log table.
Consider some number of reader processes
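The advisory-lock scheme being debated can be sketched as follows (the lock key 12345 and the table/values are arbitrary placeholders; both sides must agree on the key). Note that advisory locks (available since 8.2) are session-scoped, so they survive the COMMIT:

```sql
-- Writer: hold the advisory lock across the commit of each log write.
SELECT pg_advisory_lock(12345);
INSERT INTO log_table VALUES (1);  -- illustrative write
COMMIT;
SELECT pg_advisory_unlock(12345);

-- Reader: acquiring and immediately releasing the same lock ensures no
-- writer is between its INSERT and its COMMIT at that instant.
SELECT pg_advisory_lock(12345);
SELECT pg_advisory_unlock(12345);
```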
On Tue, 22 Apr 2008 20:58:30 +0800 Craig Ringer wrote:
D. Dante Lorenso wrote:
The problem is that once I create a column in my account table that
uses this 'account_type' datatype, I can't seem to change or add to it
any more. I want to add a new value or edit/delete an existing one.
I have a table with a mileage column that is a character varying (please
don't ask why :).
I need to do a query where mileage > 500
select * from cars where mileage > 500
So I need to cast it but everything I try throws an error such as :
ERROR: invalid input syntax for integer: +
How can I cast
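One way to make the comparison survive non-numeric junk such as the stray '+' (a sketch using regexp_replace, available since 8.1; table and column names follow the question):

```sql
-- Keep only digits before casting; NULLIF turns an all-junk value into NULL
-- instead of failing the integer cast.
SELECT *
FROM cars
WHERE NULLIF(regexp_replace(mileage, '[^0-9]', '', 'g'), '')::integer > 500;
```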
On Apr 22, 2008, at 10:34 AM, blackwater dev wrote:
I have a table with a mileage column that is a character varying
(please don't ask why :).
I need to do a query where mileage > 500
select * from cars where mileage > 500
So I need to cast it but everything I try throws an error such as :
Yeah, it was my being stupid, I got it going now.
Thanks!
On Tue, Apr 22, 2008 at 11:42 AM, Erik Jones [EMAIL PROTECTED] wrote:
On Apr 22, 2008, at 10:34 AM, blackwater dev wrote:
I have a table with a mileage column that is a character varying (please
don't ask why :).
I need to do
blackwater dev wrote:
I have a table with a mileage column that is a character varying (please
don't ask why :).
Why? :-)
I need to do a query where mileage > 500
select * from cars where mileage > 500
So I need to cast it but everything I try throws an error such as :
ERROR: invalid input
Hi,
I am running postgres-8.1 on a Centos5 system.
This morning when I came in there was no more disk space on the
partition that holds pgsql: /var/lib/pgsql.
I believe postgres had crashed at this point. I stupidly removed all of
the files under pg_xlog to create disk space. Not knowing that
Some time ago I had a similar problem with Postgresql 7.3.
If I remember correctly running pg_resetxlog helped; check out
http://www.postgresql.org/docs/8.3/static/app-pgresetxlog.html
Michael.
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of Mary Ellen
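For reference, pg_resetxlog is typically checked with a dry run first (the data directory path here is illustrative; the tool was renamed pg_resetwal in PostgreSQL 10):

```
# -n prints the values pg_resetxlog would use without changing anything
pg_resetxlog -n /var/lib/pgsql/data
# only after reviewing that output, and with the server stopped:
pg_resetxlog /var/lib/pgsql/data
```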
David Wilson wrote:
On Tue, Apr 22, 2008 at 9:52 AM, Joris Dobbelsteen
[EMAIL PROTECTED] wrote:
Describe the mechanism, because I don't really believe it yet. I think you
need to do an advisory lock around every commit of every transaction that
writes to the log table.
Consider some number of
On Tue, Apr 22, 2008 at 2:48 PM, Joris Dobbelsteen
[EMAIL PROTECTED] wrote:
Ah, yes, all visible rows...
My point is that, unless you use a transaction with serializable isolation,
the set of "all visible rows" for the second statement might be different from those
that you copied into the log
I ran the pg_resetxlog -n to test and it found the correct values. Ran
it for real and it worked.
Thank you for the info/help
Mary Ellen
BRUSSER Michael wrote:
Some time ago I had a similar problem with Postgresql 7.3.
If I remember correctly running pg_resetxlog helped; check out
On Wed, Apr 23, 2008 at 12:29 AM, David Wilson [EMAIL PROTECTED]
wrote:
On Tue, Apr 22, 2008 at 2:48 PM, Joris Dobbelsteen
[EMAIL PROTECTED] wrote:
Ah, yes, all visible rows...
My point is that, unless you use a transaction with serializable
isolation,
the set of "all visible rows" for the
On Tue, Apr 22, 2008 at 1:00 PM, Mary Ellen Fitzpatrick [EMAIL PROTECTED]
wrote:
I ran the pg_resetxlog -n to test and it found the correct values. Ran it
for real and it worked.
Note that on most unix systems, some percentage of disk storage is
reserved for use by root only. On linux with
Gurjeet Singh wrote:
On Wed, Apr 23, 2008 at 12:29 AM, David Wilson [EMAIL PROTECTED] wrote:
On Tue, Apr 22, 2008 at 2:48 PM, Joris Dobbelsteen
[EMAIL PROTECTED] wrote:
Ah, yes, all visible rows...
My point is
On Apr 21, 2008, at 10:44 PM, Christophe wrote:
I'd like a tool that would automatically create these scripts, and I
wondered if anything like this existed. The theory would be that it
would consider two databases a and b, and produce the appropriate
script to change b's schema to match
I have a fairly simple table (a dozen real/integer columns, a few
indexes, one foreign key reference) with ~120m rows. Periodically the
table is truncated or dropped and recreated and the data is
regenerated (slightly different data, of course, or the exercise would
be rather pointless). The
On Tue, Apr 22, 2008 at 2:31 PM, David Wilson [EMAIL PROTECTED] wrote:
I have a fairly simple table (a dozen real/integer columns, a few
indexes, one foreign key reference) with ~120m rows. Periodically the
table is truncated or dropped and recreated and the data is
regenerated (slightly
If you're looking for a PHP solution, there's:
http://ezcomponents.org/docs/api/latest/introduction_DatabaseSchema.html
It's very limited but works for simple schemas. I've begun extending it to
support SQL-92 schema information more rigorously (it will be released under a BSD
license at openmv.com).
Andreas 'ads' Scherbaum wrote:
On Tue, 22 Apr 2008 20:58:30 +0800 Craig Ringer wrote:
D. Dante Lorenso wrote:
The problem is that once I create a column in my account table that
uses this 'account_type' datatype, I can't seem to change or add to it
any more. I want to add a new value or
On Apr 22, 2008, at 1:45 PM, D. Dante Lorenso wrote:
So, the advice here is don't use ENUM?
I think it's more "Don't use ENUM for a type that you are planning to
extend."
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
On Apr 22, 2008, at 3:09 PM, John DeSoi wrote:
It would be nice if PostgreSQL had some kind of unique reference for
the column, but I think columns are just numbered sequentially as
they are added. It would also be neat to have a built-in way to log
the schema changes.
It does:
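One built-in way to get a log of schema changes (an assumption on my part about which feature is being referred to) is DDL statement logging:

```
# postgresql.conf: record every DDL statement in the server log
log_statement = 'ddl'
```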
On Tue, Apr 22, 2008 at 4:38 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
The best bet is to issue an analyze table (with your table name in
there, of course) and see if that helps. Quite often the real issue
is that pgsql is using a method to insert rows when you have 10million
of them
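The suggestion boils down to refreshing the planner statistics after the bulk load (the table name is a placeholder):

```sql
-- Re-gather statistics so the planner stops assuming the table is small
ANALYZE my_big_table;
```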
On Tue, Apr 22, 2008 at 2:59 PM, David Wilson [EMAIL PROTECTED] wrote:
On Tue, Apr 22, 2008 at 4:38 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
The best bet is to issue an analyze table (with your table name in
there, of course) and see if that helps. Quite often the real issue
is
On Tue, Apr 22, 2008 at 3:31 PM, D. Dante Lorenso [EMAIL PROTECTED] wrote:
All,
In the past I have used foreign keys to lookup tables for small lists of
values that I now think ENUM could do the job of. I was hoping that by
using ENUM, I would avoid having to do joins in my queries, and
On Tue, Apr 22, 2008 at 2:45 PM, D. Dante Lorenso [EMAIL PROTECTED] wrote:
So, the advice here is don't use ENUM?
I was really hoping that it would be more efficient to not have to do all
the foreign keys and joins for tables that may have 4-5 enum types.
Just being able to:
SELECT *
On Tue, Apr 22, 2008 at 5:04 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
Normally, after the first 50,000 or so the plan won't likely change
due to a new analyze, so you could probably just analyze after 50k or
so and get the same performance. If the problem is a bad plan for the
inserts /
On Tue, Apr 22, 2008 at 3:15 PM, David Wilson [EMAIL PROTECTED] wrote:
On Tue, Apr 22, 2008 at 5:04 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
Normally, after the first 50,000 or so the plan won't likely change
due to a new analyze, so you could probably just analyze after 50k or
so
On Tue, 22 Apr 2008 15:45:39 -0500
D. Dante Lorenso [EMAIL PROTECTED] wrote:
I was really hoping that it would be more efficient to not have to do
all the foreign keys and joins for tables that may have 4-5 enum
types.
Just being able to:
SELECT *
FROM tablename
would be nice
On Apr 22, 2008, at 4:53 PM, Erik Jones wrote:
It would be nice if PostgreSQL had some kind of unique reference
for the column, but I think columns are just numbered sequentially
as they are added. It would also be neat to have a built-in way to
log the schema changes.
It does:
Scott Marlowe wrote:
On Tue, Apr 22, 2008 at 2:45 PM, D. Dante Lorenso [EMAIL PROTECTED] wrote:
So, the advice here is don't use ENUM?
I was really hoping that it would be more efficient to not have to do all
the foreign keys and joins for tables that may have 4-5 enum types.
Just being able
On Tue, Apr 22, 2008 at 5:18 PM, Scott Marlowe [EMAIL PROTECTED] wrote:
Try upping your checkpoint segments. Some folks find fairly large
numbers like 50 to 100 to be helpful. Each segment = 16Megs, so be
sure not to run your system out of drive space while increasing it.
Ahh, much more
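The setting under discussion lives in postgresql.conf; a sketch of the suggested range for 8.x (checkpoint_segments was replaced by max_wal_size in 9.5):

```
# postgresql.conf: each WAL segment is 16 MB on disk
checkpoint_segments = 64
# 8.3+: spread checkpoint writes out to smooth the IO spikes
checkpoint_completion_target = 0.9
```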
On Apr 22, 2008, at 4:33 PM, John DeSoi wrote:
On Apr 22, 2008, at 4:53 PM, Erik Jones wrote:
It would be nice if PostgreSQL had some kind of unique reference
for the column, but I think columns are just numbered sequentially
as they are added. It would also be neat to have a built-in
On Apr 22, 2008, at 4:46 PM, David Wilson wrote:
On Tue, Apr 22, 2008 at 5:18 PM, Scott Marlowe [EMAIL PROTECTED]
wrote:
Try upping your checkpoint segments. Some folks find fairly large
numbers like 50 to 100 to be helpful. Each segment = 16Megs, so be
sure not to run your system out of
David Wilson [EMAIL PROTECTED] writes:
Foreign keys are definitely indexed (actually referencing a set of
columns that the foreign table is UNIQUE on).
Are you loading any tables that are the targets of foreign key
references from other tables being loaded? If so, I'd bet on
Scott's theory
On Tue, 2008-04-22 at 16:34 -0500, D. Dante Lorenso wrote:
I see this might be a
problem with storage since you will need to store the TEXT value for
every row in the 'mystuff' table instead of just storing the reference
to the lookup table as an INTEGER. Over millions of rows, perhaps
On Tue, Apr 22, 2008 at 6:10 PM, Tom Lane [EMAIL PROTECTED] wrote:
David Wilson [EMAIL PROTECTED] writes:
Are you loading any tables that are the targets of foreign key
references from other tables being loaded? If so, I'd bet on
Scott's theory being correct with respect to the plans for
David Wilson [EMAIL PROTECTED] writes:
My guess at this point is that I'm just running into index update
times and checkpoint IO. The only thing that still seems strange is
the highly variable nature of the COPY times- anywhere from 1.0
seconds to 20 seconds, with an average probably around
Kerri Reno wrote:
So the reason I'm getting the error is that I'm running it in 8.0.
Thanks so much for your help!
Kerri
use FOR instead
CREATE OR REPLACE FUNCTION private.get_salary_schedule(integer, text)
RETURNS record AS
$BODY$
declare
tcp_id alias for $1;
tfy alias for $2;
tbl
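The "use FOR instead" suggestion for 8.0, which lacks EXECUTE ... INTO, might look like this (a sketch; the record variable r and the use of the truncated tbl variable are my assumptions, not code from the thread):

```sql
-- 8.0-compatible plpgsql: iterate a dynamic query with FOR ... IN EXECUTE
FOR r IN EXECUTE 'SELECT salary_schedule, pay_column, step FROM '
                 || quote_ident(tbl) || ' WHERE cp_id = ' || tcp_id
LOOP
    RETURN NEXT r;  -- r.salary_schedule etc. are available here
END LOOP;
```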
On 04/22/2008 07:30 PM, Albe Laurenz wrote:
Clemens Schwaighofer wrote:
I sometimes have a problem with conversion of encodings, e.g. from UTF-8
to ShiftJIS:
ERROR: character 0xf0a0aeb7 of encoding UTF8 has no
equivalent in SJIS
I have no idea what character this is, I cannot view it in my
On Tue, Apr 22, 2008 at 7:33 PM, Tom Lane [EMAIL PROTECTED] wrote:
What have you got shared_buffers set to? If it's not enough to cover
the working set for your indexes, that might be the (other) problem.
shared_buffers = 1536MB
Is there a way to get the size of a specific index, on that
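Index sizes can be read out with the built-in size functions (available since 8.1; the names are placeholders):

```sql
-- Size of one index
SELECT pg_size_pretty(pg_relation_size('my_index_name'));
-- Rough total index footprint of a table (total minus heap; includes TOAST)
SELECT pg_size_pretty(pg_total_relation_size('my_table')
                      - pg_relation_size('my_table'));
```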
On Tue, Apr 22, 2008 at 6:11 PM, Jeff Davis [EMAIL PROTECTED] wrote:
If you store an integer reference instead, joins are not necessarily
expensive. If the number of distinct values is small (which is the
normal use case for ENUM), I would expect the joins to be quite cheap.
Beware of
On Tue, 22 Apr 2008, David Wilson wrote:
My guess at this point is that I'm just running into index update
times and checkpoint IO. The only thing that still seems strange is
the highly variable nature of the COPY times- anywhere from 1.0
seconds to 20 seconds, with an average probably around