Juan Pablo L wrote:
Hi, I have a function that receives a parameter which represents days:
FUNCTION aaa_recharge_account(expdays integer)
I want to add those days to CURRENT_DATE, but I do not know how to do it;
I have tried several
ways to use that in an expression like:
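In PostgreSQL, adding an integer to a date yields a date that many days later, so a minimal sketch of such a function (the body is assumed, not taken from the post) could be:

```sql
-- Sketch: date + integer arithmetic adds whole days.
CREATE FUNCTION aaa_recharge_account(expdays integer) RETURNS date
    LANGUAGE sql
    AS $$SELECT CURRENT_DATE + expdays$$;

-- Usage: a date "expdays" days from today.
SELECT aaa_recharge_account(30);
```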
Restoring a plain format dump and a custom format dump of
the same database can lead to different results:
pg_dump organizes the SQL statements it creates in TOC entries.
If a custom format dump is restored with pg_restore, all
SQL statements in a TOC entry will be executed as a single command
salah jubeh wrote:
ERROR: could not convert table b to a view because it has triggers
HINT: In particular, the table cannot be involved in any foreign key
relationships.
** Error **
ERROR: could not convert table b to a view because it has triggers
SQL state: 55000
salah jubeh wrote:
http://www.postgresql.org/docs/current/static/catalog-pg-class.html
relhastriggers | bool | True if table has (or once had) triggers
This is what is queried when you try to convert the table into a view.
So there is no way to convert your table to a view unless you are
John R Pierce wrote:
if the PostgreSQL server was running when that file backup was made(*),
it's pretty much worthless; it will have data corruption and errors
throughout.
Well, it would be better than nothing.
You can pg_resetxlog and manually clean up the inconsistencies.
That's better than
misspa...@tiscali.it wrote:
I am using Sybase ASE as DBMS and I would like to migrate to PostgreSQL, but the
absence of a built-in DB
audit functionality is a show stopper for me.
So I would like to know if there is a way to get information about DB events like:
server boots
login logout
table
Sameer Kumar wrote:
I am trying to do a custom build (and generate binary and source RPM) package
for PostgreSQL.
I know the community already has an RPM package, but I am trying to do a custom
build.
I am using the attached SPEC file, but I am not able to get a binary RPM. rpmbuild
always gives
Shuwn Yuan Tee wrote:
We recently experienced a crash on our Postgres production server. Here's our
server environment:
- Postgres 9.3
- in OpenVZ container
- total memory: 64GB
Here's the error snippet from postgres log:
ERROR: could not read block 356121 in file
Tim Kane wrote:
Is anyone aware of cases where psql will occasionally return an exit status
of 1, despite there being
no obvious error condition?
I have a regular (daily) cron that executes a fairly simple script of the
form:
psql --set ON_ERROR_STOP=1 -h $DB_HOST -U $DB_USER -a
Tim Kane wrote:
I’ve enabled shell debugging to be 100% sure that I’m not munging the return
code anywhere. It’s
entirely possible there is something going on at the shell side of things,
though I fail to see how
just at the minute :)
The output of the script is as follows, bearing in mind
Tobadao wrote:
Hello everyone.
I'm using PostgreSQL 9.2 on Windows XP.
In recovery.conf I use:
*standby_mode = 'on'
primary_conninfo = 'host=10.0.10.2 port=5432 user=postgres password =
password'
restore_command = 'copy 10.0.10.2\\archiver\\%f %p'
restore_command =
N wrote:
Are there documents specifying the query optimisations in PostgreSQL,
like SQLite's (http://www.sqlite.org/optoverview.html)?
From the web, I can see there are index and join optimisations, but
is there anything else, like subquery flattening?
There is subquery flattening:
John R Pierce wrote:
On 11/26/2013 9:24 AM, Joey Quinn wrote:
When I ran that command (select * from pg_stat_activity), it returned
the first six lines of the scripts. I'm fairly sure it has gotten a
bit beyond that (been running over 24 hours now, and the size has
increased about 300 GB). Am
Janek Sendrowski wrote:
Is there documentation of PostgreSQL's C functions, like SET_VARSIZE for
example?
For things like this consult the source code.
In src/include/postgres.h you'll find:
/*
* VARDATA, VARSIZE, and SET_VARSIZE are the recommended API for most code
* for varlena
Rémi Cura wrote:
Does somebody know of a way to autodocument PL/pgSQL functions, in a doxygen style
(adding tags in comments for instance, or creating doc templates to fill).
It would really help to write the doc and maintain it.
I am not sure what you need, but I see two ways to
document a
Rémi Cura wrote:
Does somebody know of a way to autodocument PL/pgSQL functions, in a doxygen
style
(adding tags in comments for instance, or creating doc templates to fill).
It would really help to write the doc and maintain it.
Typically in your comments you include special tags, like @input,
Peter Kroon wrote:
Is anyone able to reproduce?
When I run the query below all 5 rows are returned instead of 2.
Or is this the default behaviour?
SELECT
xmlagg(
[...]
)--xmlagg
FROM __pg_test_table AS dh
WHERE dh.__rel=5 LIMIT 2 --OFFSET 10;
According to the documentation, that query
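The documented behaviour is that LIMIT applies to the aggregated result (a single row), not to the rows fed into xmlagg. A sketch of the usual fix, reusing the table and column names from the quoted query, limits the rows in a subquery first:

```sql
-- Sketch: limit the input rows before aggregating them.
SELECT xmlagg(x)
FROM (SELECT xmlelement(name rel, dh.__rel) AS x
      FROM __pg_test_table AS dh
      WHERE dh.__rel = 5
      LIMIT 2) AS sub;
```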
Chris Curvey wrote:
My vendor took a dump of our something else database (which runs on
Windows), did their conversion
to Postgres, and then sent me back a postgres dump (custom format) of the
database for me to load onto
my servers for testing.
I was interested to find that while I
Andrew Sullivan wrote:
Just guessing, but I bet the collation is what hurts, [...]
(The background for my guess: on your Linux box UTF-8 is likely the
normal local encoding, but on Windows that isn't true, and 1252 is
_almost_ but not quite Unicode. This bites people generally in
Thomas Kellerer wrote:
mrprice22 wrote on 22.11.2013 19:25:
We are in the process of moving from Oracle to PostgreSQL. We use a stored
procedure to populate some reporting tables once an hour. There are two
sets of these tables, set A and set B. We use synonyms to point to the
“active” set
Kaushal Shriyan wrote:
I have read on the web that Postgresql DB supports replication across data
centers. Are there any real-life
use case examples where it has been implemented?
Well, we replicate a 1 TB database between two locations.
It is a fairly active OLTP application, but certainly not
Torsten Förtsch wrote:
Don't use synchronous replication if you have a high transaction
rate and a noticeable network latency between the sites.
Wait for the next bugfix release, since a nasty bug has just
been discovered.
Can you please explain or provide a pointer for more information?
Michael Paquier wrote:
On Fri, Nov 22, 2013 at 10:03 PM, Kaushal Shriyan kaushalshri...@gmail.com
wrote:
I am not sure I understand the difference between async and sync replication
and in what scenarios I should use async or sync replication. Does it mean
that if it is within the same DC then sync
Karsten Hilbert wrote:
Let me try to rephrase:
Fact: pg_upgrade can NOT properly upgrade clusters which contain
databases that have default_transaction_read_only set to on
Question: Is this intended?
I am pretty sure that this is an oversight and hence a bug.
Yours,
Laurenz Albe
LPlateAndy wrote:
Just wondering what kind of execute statement (within a function) I should
use to force the planner to use the index for the following?:
You cannot force anything.
The best you can do is to provide an index that *can* be used
and keep your statistics accurate.
SELECT pcode
oka wrote:
I have a question.
There are the following data.
create table chartbl
(
caseno int,
varchar5 varchar(5)
);
insert into chartbl values(1, ' ');
insert into chartbl values(2, '');
The same result with the following two queries is obtained.
select * from chartbl where
Jayadevan M wrote:
The code has these comments -
When a request message is received, we now fork() immediately. The child
process performs
authentication of the request,
Now authentication is done by the Backend process and not by the daemon?
Yes.
The authentication is called in
Andreas Kretschmer wrote:
ist there (in 9.1) a way to unload a shared lib?
It seems it's been impossible since 8.1 or so; am I right?
Yes:
http://www.postgresql.org/docs/9.3/static/xfunc-c.html#XFUNC-C-DYNLOAD
(Presently, unloads are disabled and will never occur, but this may change in
the
Jing Fan wrote:
If the grouping inside the CTE were executed, I don't think it would generate
a result like
 src_id | dest_id | dist
--------+---------+------
   3384 |    6236 |    1
   3384 |    1739 |    2
   3384 |    6236 |    3
   3384 |    1739 |    4
   3384 |    6236 |    5
   3384 |
Steve Crawford wrote:
There is a comment in utils/adt/formatting.c:
* This function does very little error checking, e.g.
* to_timestamp('20096040','YYYYMMDD') works
I think the place for such warnings in addition to the source-code is in
the documentation. This or similar issues with
Maciej Mrowiec wrote:
I'm working on an RBAC implementation on top of PostgreSQL and I was wondering:
is there any common technique to
achieve row-level access control?
So far I'm considering using WITH clause in template like this:
WITH table_name AS ( SELECT . ) user_query;
Which would
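A common alternative to rewriting queries with a WITH clause, in releases before built-in row-level security existed, is filtering through a view owned by a privileged role. A sketch (table and column names are invented):

```sql
-- Sketch: users query the view, never the table directly.
CREATE TABLE documents (id int, owner name, payload text);

CREATE VIEW my_documents AS
    SELECT id, owner, payload
    FROM documents
    WHERE owner = current_user;   -- row filter per connected role

REVOKE ALL ON documents FROM PUBLIC;
GRANT SELECT ON my_documents TO PUBLIC;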
Jing Fan wrote:
I am sorry, but I still don't understand why it doesn't work. Possibly I
misunderstand how WITH
RECURSIVE works?
In my opinion,
with recursive table as{
seed statement
union
recursive statement
}
In every iteration, it will just generate results from the seed
Jing Fan wrote:
On Wed, Nov 6, 2013 at 8:10 AM, Albe Laurenz laurenz.a...@wien.gv.at wrote:
Let's assume that we have three nodes A, B and C.
Also, A points to B, B points to C and C points to B.
Let's assume that we already generated (A, B, 1) and (A, C, 2)
in previous iterations
Jing Fan wrote:
I use following command to get a shortest-path query:
with recursive paths( src_id, dest_id, dist) as(
select n1,n2,1
from nodes
union
select src_id, dest_id, min(dist)
from ( select paths.src_id as src_id, nodes.n2 as dest_id,
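Aggregating inside the recursive term cannot stop the recursion from revisiting nodes with ever-growing distances, as the discussion above shows. A sketch of a working shortest-path query, assuming an edge table nodes(n1, n2), carries the visited path to cut cycles and takes the minimum afterwards:

```sql
-- Sketch: stop cycles with a path array, aggregate at the end.
WITH RECURSIVE paths(src_id, dest_id, dist, route) AS (
      SELECT n1, n2, 1, ARRAY[n1, n2]
      FROM nodes
   UNION ALL
      SELECT p.src_id, n.n2, p.dist + 1, p.route || n.n2
      FROM paths p
      JOIN nodes n ON n.n1 = p.dest_id
      WHERE n.n2 <> ALL (p.route)      -- do not revisit a node
)
SELECT src_id, dest_id, min(dist) AS dist
FROM paths
GROUP BY src_id, dest_id;
```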
Thomas Kellerer wrote:
bsreejithin, 05.11.2013 13:14:
Not able to post the attached details as a comment in the reply box, so
attaching it as an image file :
http://postgresql.1045698.n5.nabble.com/file/n5776987/Untitled.png
It would have been much easier if you had simply used copy and paste to
Patrick Dung wrote:
I have seen some databases product that allocate small number of large files.
Please correct me if I am wrong:
MSSQL (one file is the data and another file for the transaction log)
MySQL with InnoDB
Oracle
DB2
I don't know enough about DB2 and MSSQL, but you are
Jing Fan wrote:
I have two group operations.
One is inside the CTE ( union
select src_id, dest_id, min(dist) ),
another is outside the CTE.
Do you mean that even the grouping inside the CTE will be calculated only
after the CTE has been
calculated?
I
Jing Fan wrote:
Why the one inside does not do anything? It won't be executed?
It is executed.
It might filter out the occasional row, but if you look at
the example I gave you, you'll see that it won't do anything
to keep it from recursing.
Yours,
Laurenz Albe
--
Sent via pgsql-general
Patrick Dung wrote:
As I have seen, some databases create or pre-allocate large physical files on
the file system to serve as
the backend of the database tablespace.
For Postgresql, I have observed that it created several files in the base and
global directory.
It may be by design, what is
Ivan Voras wrote:
I'm just wondering: in the execution plan such as this one, is the
Recheck Cond phase what it apparently looks like: an additional check
on the data returned by indexes, and why is it necessary? I would have
thought that indexes are accurate enough?
cms= explain analyze
DDT wrote:
According to the manual, when you set synchronous_commit to on, transaction
commits will wait until
master and slave have flushed the commit record of the transaction to physical
storage, so I think that even if I
turn off fsync on the master it is safe for data consistency and data will not
be
Shaun Thomas wrote:
Wrong. The database cannot check all data for consistency
upon backup. For one, that would take way too long.
Well, what I meant, was that it would stop the database if it couldn't
apply one of the transaction logs for whatever reason. It wasn't
inconsistent enough for
Anson Abraham wrote:
No client connecting to the slave. It's just streamed replication for HA.
This occurs when the slave
starts immediately. SSL is used. And as I mentioned the libraries are
identical on both slave and
master. Interestingly, another slave that replicates from master
Amit Langote wrote:
With a server initdb'd with UTF8 encoding , if I create a table with a
client using LATIN1 encoding and later try to work with the relation
with a client using UTF8 encoding (both the scenarios simulated using
single session of psql but with different client_encoding set),
DT wrote:
I'm reading code of xact.c, and I found the only difference between
TRANS_START
and TRANS_INPROGRESS is when transaction aborts in TRANS_START status we set
status to TRANS_INPROGRESS so AbortTransaction() will not report WARNING.
So I would like to know the reason why we distinguish
Shaun Thomas wrote:
I have a revised backup process that's coming out inconsistent, and I'm not
entirely sure why. I call
pg_start_backup(), tar.gz the contents elsewhere, then pg_stop_backup().
Nothing crazy. Upon restore,
two of my tables report duplicate IDs upon executing my redaction
Anson Abraham wrote:
I'm on Debian Wheezy running postgres 9.3
both boxes are identical.
I see in log file on slaves:
LOG: could not receive data from client: Connection reset by peer
That looks to me like a client that is connected to the slave
is dying.
Do you have hot standby turned
Carlo Curatolo wrote:
When I launch vacuumdb, I get an error: ERROR: invalid page header in
block 39639 of relation base/16384/599662
With a
SELECT * FROM pg_catalog.pg_largeobject
Result is
ERROR: invalid page header in block 39639 of relation
Carlo Curatolo wrote:
SELECT oid, relname, relkind FROM pg_class WHERE relfilenode = 599662; --
returns nothing.
Maybe the wrong database?
Try to find out which object this file belongs to (maybe with oid2name).
No crash occurs, I have tested the hardware (memory, harddisks, RAID5,
stability
Carlo Curatolo wrote:
Yes, I ran pg_dumpall, created a new cluster and imported it.
Ok, cool.
Everything seems fine now.
How can I prevent that ?
Prevent data corruption?
Have good hardware, run the latest PostgreSQL fixes...
Most of all, have a good backup so that you can recover.
Yours,
Jesse Long wrote:
I did ALTER COLUMN SET STATISTICS 1 for each column involved after
that (is that what you meant?). But it did not make much difference, but
I have read the manual regarding this setting and think I understand it.
Would it be a bad idea to ALTER COLUMN SET STATISTICS
高健 wrote:
The background writer and ordinary backends might write data
(for their own reasons) that the
checkpointer would have otherwise needed to write anyway.
And do the ordinary backends refer to
the ones created when a client makes a connection to PG?
Yes.
Yours,
Laurenz Albe
--
Jesse Long wrote:
I have the following query, run immediately after executing VACUUM in
the database. There is only one connection to the database.
You should run ANALYZE, not VACUUM.
The query runs for much longer than I expect it to run for, and I think
this is due to it using the
Jesse Long wrote:
There is no problem with row visibility, there is only one connection to
the database - the connection I am using to do these selects.
No idea why the plans cannot be used.
It might be helpful to see the table and index definitions.
Thank you for the advice regarding
高健 wrote:
I have one question about checkpoints. That is: can a checkpoint be parallel?
It is said that a checkpoint will be activated under either of these conditions:
1) After the last checkpoint, checkpoint_timeout seconds have passed.
2) When shared_buffers memory above checkpoint_segments size is
Rafael B.C. wrote:
I am dealing with the old decision about how to store data objects and trying
to understand in depth the
Postgres system, including TOAST, the pg_largeobject table and so on.
My real doubt right now is why bytea does not get processed by the TOAST system
even when it grows
Tim Uckun wrote:
I have the following query.
[...]
SELECT
interval_start,
(interval_start AT TIME ZONE 'Africa/Monrovia')::timestamp with time zone as
interval_start_in_africa,
min_datetime,
min_datetime AT TIME ZONE 'Europe/Berlin' as min_datetime_in_berlin,
max_datetime,
Tim Uckun wrote:
The reason for that is that in PostgreSQL there is no time zone
information stored along with a timestamp with time zone,
it is stored in UTC.
That seems unintuitive. What is the difference between timestamp without time
zone and timestamp with
time zone? I was expecting
Karsten Hilbert wrote:
Maybe the question I need to ask is how can I store the time zone along
with the timestamp
Store an additional field offset.
If you want to invest more energy and don't mind writing C,
you could create your own data type.
Might not a composite type
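A composite type, as suggested above, avoids writing C. A sketch (type, table, and sample values are invented) that keeps the instant in timestamptz plus the zone name needed to reconstruct local time:

```sql
-- Sketch: pair a timestamptz with the zone it was entered in.
CREATE TYPE ts_with_zone AS (
    ts timestamptz,
    tz text
);

CREATE TABLE events (happened ts_with_zone);

INSERT INTO events
    VALUES (ROW('2013-11-15 10:00+01'::timestamptz, 'Europe/Berlin'));

-- Recover the local wall-clock time on output:
SELECT (happened).ts AT TIME ZONE (happened).tz FROM events;
```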
Karsten Hilbert wrote:
Whatever became of the 2011 intent to implement
the above that's linked to in the blog post ?
You'd have to ask Alvaro.
Yours,
Laurenz Albe
Jayadevan M wrote:
Thanks for the pointer. I do not really want to log the plans of queries,
just the queries, execution
time and a couple of other details (database, user). If I use the auto_explain
module, will it end up
printing the plan for every query execution in the log files?
You can
John DeSoi wrote:
I have a 9.2.4 (CentOS 6.5) standby that has been working with no problems
using log shipping. I
wanted to add streaming replication which I thought would be as simple as
adding primary_conninfo to
recovery.conf and restarting the standby. But on restart there is no
Shaun Thomas wrote:
This is PostgreSQL 9.1.9.
So we've had vacuumdb -avz launched via cron at 3am for a few years
now, and recently noticed some queries behaving very badly. While
checking pg_stat_user_tables, I see this for several hundred of them:
relname |
Gurkan Ozfidan wrote:
We have been using postgresql-jdbc-8.3-604, and recently we started using
postgresql-jdbc-9.2,
everything seems working fine, except we are getting this error and could not
find the solution yet.
This related to our reporting and I could say that it is not happening
Jamie Echlin wrote:
What I'm trying to say is that I configured it to keep (60/5) * 24 segments
plus a few spare, because
I am switching xlog every 5 mins. But if there is heavy load then they will
be generated more often
than every 5 mins, so that number won't be enough.
You should delete
Jamie Echlin wrote:
I was planning to rely on the disk backup (of the base backup) if I wanted to
restore to a version
before the last on disk base backup. But your point about redundancy is
good... I think I will keep
two base backups, and do a base backup every day. Over the network this
Moshe Jacobson wrote:
Is there a way to rename an installed extension?
I have written an extension, but I don't like the name I originally chose,
and I would therefore like
to rename it. However, it is installed on a production system, from which it
cannot be uninstalled,
and I would
Vivek Singh Raghuwanshi wrote:
I am trying to install postgresql-jdbc but am facing a Java error.
It would be helpful to know which error you are facing.
Yours,
Laurenz Albe
patrick keshishian wrote:
One more cute idea that came to me last night. Here is a very
poor attempt at it by yours truly; keep in mind I'm not a graphics
artist. This image is for illustration purposes only!
http://sidster.com/gallery/2013/09/10/elephant_paw.sml.jpg
Can you picture
Emre ÖZTÜRK wrote:
I sent the question below months ago but I could not get any reply from you
experts. I would very much
appreciate it if you can help.
PS: I have tried debug_pretty_print = off but it did not work.
Is there a parameter to log any SQL query as a single line in audit logs? I
ascot.m...@gmail.com wrote:
I am planning to back up archived WAL files from the master to another machine.
Is there a way to check and
make sure the backed-up archive files are all good, with no file missing or
corrupted?
The only 100% safe way would be to restore them, e.g. with
a standby
salah jubeh wrote:
I have a question regarding streaming replication:
When does pg_xlog need to be shipped in streaming replication, and when not? I
have faced different
issues in two existing setups: in the first setup, when shipping the pg_xlogs,
a timeline issue arose,
and in
Aram Fingal wrote:
I want to create a user defined data type but have flexible input just like,
for example, the boolean
data type where TRUE can be entered as any of (TRUE, true, T, t, YES, yes, Y,
y, 1...) and it will be
interpreted as the same thing.
So suppose I have days of the
Michael Cronenworth wrote:
On 08/15/2013 10:59 AM, Michael Cronenworth wrote:
The attached patches resolve the issue.
Should I forward the patches on to the pgsql-hackers list for review or is
this
list sufficient? (First time PostgreSQL hacker.)
Yes, any patches should be posted to
Stuart Ford wrote:
We have a problem on our development database server, which supports a PHP
application, which connects to it from a different server. Sometimes,
around 1 in 4 page loads, it fails and reports the following error message:
FATAL: terminating connection due to administrator
M Tarkeshwar Rao wrote:
We are upgrading our mediation product from postgres 8.1 to postgres 9.1
Can you please suggest some test cases or some issues which may hamper us?
This is the first thing that comes to mind:
ascot.m...@gmail.com wrote:
I have enabled archive in PG (v 9.2.4):
archive_mode = on
archive_command = 'test ! -f /usr/local/pgsql/data/archive/%f && cp %p
/usr/local/pgsql/data/archive/%f'
I know that pg_xlog folder is maintained by PostgreSQL automatically, when
the pg_xlog folder
ascot.m...@gmail.com wrote:
I am trying another way to test PITR: by recovery_target_time.
The test machine has the same PG version 9.2.4 and the same O/S, Ubuntu 12.04 64
bit. All archived WAL
files are shipped and saved in /var/pgsql/data/archive, the latest time stamp
of them is
liuyuanyuan wrote:
By the way, my project is about migrating Oracle data of BLOB type to
PostgreSQL database. The out of memory error occurred between migrating
Oracle BLOB to PostgreSQL bytea. Another question, if I can't migrate BLOB to
bytea,
how about oid type ?
Large Objects (I guess
Klaus Ita wrote:
I have restored a Database Cluster with a recovery_target_time set to
recovery_target_time = '2013-07-27 21:20:17.127664+00'
recovery_target_inclusive = false
now it seems the restore went to some other point in time (the 18th rather
than the 27th). Is
there an
Loïc Rollus wrote:
It's ok.
Before the insert, the foreign key constraint locked the row. If transactions A
and B lock the row with
the FK before doing UPDATE, they get stuck.
I found a solution by creating a before insert trigger with a simple
SELECT FROM UPDATE on the row.
You mean SELECT
Loïc Rollus wrote:
I've tried to make some concurrency robustness tests with a web server app that
uses Hibernate and
Postgres.
It seems that my trigger make deadlock when multiple thread use it.
I will try to simplify examples:
I have a table films(id, title,director) and a table
Some Developer wrote:
On 24/07/13 01:55, John Meyer wrote:
Taking an absolutist position either way is pretty blind. What is the
purpose of the procedure? Is it enforcing business rules? Are these
rules that must be enforced against already existing data or are they
more akin to
Michael Orlitzky wrote:
I want to be able to create a database, set up the (default) group
permissions, and have them work, even when a new user is added to one of
the groups. Right now I don't know of a way to get default group
permissions.
There is none, as far as I can say.
You have two
Michael Orlitzky wrote:
We use Postgres for shared hosting; i.e. what most people use MySQL for.
The biggest headache for us so far has been that we're unable to get
group permissions set up effectively so that different groups of
customers, admins, apaches, etc. can access/modify the data
Kenneth Tilton wrote:
We want to make sure no two examiners are working on the same case at
the same time, where the
cases are found by searching on certain criteria with limit 1 to get the
next case.
A naive approach would be (in a stored procedure):
bhanu udaya wrote:
What is the best way of doing case-insensitive searches in Postgres using
LIKE?
Table "laurenz.t"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer | not null
 val    | text    | not null
Indexes:
    "t_pkey" PRIMARY KEY, btree (id)
CREATE INDEX t_val_ci_ind
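The reply is truncated, but the technique it appears to show is indexing an expression and repeating that expression in the query. A sketch (the text_pattern_ops opclass lets the index serve LIKE in non-C locales):

```sql
-- Sketch: a case-insensitive expression index for LIKE.
CREATE INDEX t_val_ci_ind ON laurenz.t (upper(val) text_pattern_ops);

-- Left-anchored patterns on the same expression can use the index:
SELECT id, val
FROM laurenz.t
WHERE upper(val) LIKE upper('foo%');
```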
Ingmar Brouns wrote:
My solution is fast and efficient, it will call upper() only once
per query. I don't see your problem. Different database systems
do things in different ways, but as long as you can do what you need
to do, that should be good enough.
I was toying around a little bit
But in the following expression:
template1=# SELECT CASE WHEN (SELECT 0)=0 THEN 0 ELSE 1/0 END;
ERROR: division by zero
(Just to be sure, a SELECT (SELECT 0)=0; returns true)
It seems that when the CASE WHEN expression is a query, the evaluation
order changes.
According to the
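The behaviour reported above matches the planner's constant folding: a constant subexpression such as 1/0 can be evaluated before execution, so it errors even though the ELSE branch would never be taken. A sketch of the contrast, using a variable instead of a constant:

```sql
-- Folded at plan time: fails although ELSE is unreachable.
SELECT CASE WHEN (SELECT 0) = 0 THEN 0 ELSE 1/0 END;
-- ERROR:  division by zero

-- With a variable, evaluation is deferred and guarded by the CASE:
SELECT CASE WHEN x = 0 THEN 0 ELSE 1/x END
FROM (VALUES (0)) AS v(x);
```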
Dmitriy Igrishin wrote:
Since there can be only one unnamed prepared statement per
session, there should be only one such object per connection.
It should not get deallocated; maybe it could be private to the
connection, which only offers a parseUnnamed and executeUnnamed
method.
More
David Greco wrote:
In my development environment, I am using the auto_explain module to help
debug queries the developers
complain about being slow. I am also using the oracle_fdw to perform queries
against some oracle
servers. These queries are generally very slow and the application
bhanu udaya wrote:
What is the best way of doing case-insensitive searches in Postgres using
LIKE?
Table "laurenz.t"
 Column |  Type   | Modifiers
--------+---------+-----------
 id     | integer | not null
 val    | text    | not null
Indexes:
    "t_pkey" PRIMARY KEY, btree (id)
CREATE
Phoenix Kiula wrote:
Hi. Hard to find this command in the documentation - how should I alter a
table to REMOVE the ON
DELETE CASCADE constraint from a table? Thanks.
Unless you want to mess with the catalogs directly, I believe that
you have to create a new constraint and delete the old one,
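A sketch of that approach, with invented table and constraint names (find the real constraint name with \d in psql), done in one transaction so the table is never unprotected:

```sql
-- Sketch: swap a cascading FK for a plain one.
BEGIN;
ALTER TABLE child
    DROP CONSTRAINT child_parent_id_fkey;
ALTER TABLE child
    ADD CONSTRAINT child_parent_id_fkey
        FOREIGN KEY (parent_id) REFERENCES parent (id);
COMMIT;
```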
Dmitriy Igrishin wrote:
I understand the problem now.
I pondered a bit over your design, and I came up with a different
idea how to represent prepared statements in a C++ library.
First, a prepared statement is identified by its name.
To make the relationship between a PreparedStatement
Jirí Pavlovský wrote:
I have a win32 application.
LOG: statement: INSERT INTO recipients (DealID,
Contactid) VALUES (29009, 9387)
ERROR: invalid byte sequence for encoding UTF8: 0x9c
But the query is clean ASCII and it doesn't even contain the mentioned
Jirí Pavlovský wrote:
I'm getting these errors on tables as
well. Actually when I copy and paste the offending queries from log into
pgAdmin it runs without an
error.
So the queries work from pgadmin; what application/environment are they NOT
working in?
Something is obviously different.
Dmitriy Igrishin wrote:
While developing a C++ client library for Postgres I felt lack of extra
information in command tags in the CommandComplete (B) message [...]
It seems like bad design to me to keep a list of prepared statements
on the client side when it is already kept on the server
Xiaobo Gu wrote:
We have very large files in size will be created as external tables in
PostgreSQL via file_fdw, we
have the following considerations:
1. Can file_fdw handle files larger than the size of RAM in the
server.
That shouldn't be a problem.
2. Will file_fdw scan the
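A sketch of the setup in question (file path and columns are invented); file_fdw reads the file sequentially per scan rather than loading it whole, which is why file size exceeding RAM is not a problem:

```sql
-- Sketch: expose a large CSV file as a foreign table.
CREATE EXTENSION IF NOT EXISTS file_fdw;
CREATE SERVER csv_server FOREIGN DATA WRAPPER file_fdw;

CREATE FOREIGN TABLE big_csv (
    id   integer,
    data text
) SERVER csv_server
  OPTIONS (filename '/data/big.csv', format 'csv');

-- Each query streams through the file:
SELECT count(*) FROM big_csv;
```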
Michael Angeletti wrote:
I'm Michael, and this is my first post here.
I asked this question last night:
http://dba.stackexchange.com/questions/45077
which details the problem (or not?) I'm having. The gist of it is that
Postgres is not archiving the
first WAL segments for my