Sergey Konoplev gray...@gmail.com writes:
On Sat, Feb 9, 2013 at 7:39 PM, Karl Denninger k...@denninger.net wrote:
Am I correct that I can do this by simply initdb-ing the second instance into a different data directory, and, when starting it, pointing it at that directory?
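Yes. A minimal sketch of running a second cluster side by side (the data directory path and port here are just example placeholders):

```shell
# Initialize a second, independent cluster in its own data directory
initdb -D /var/lib/postgresql/second

# Start it on a different port so it doesn't clash with the first instance
pg_ctl -D /var/lib/postgresql/second -o "-p 5433" start
```

Each cluster keeps its own catalogs, configuration, and databases; only the port (and any shared OS resources) needs to differ.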
On 02/09/2013 09:39 PM, Carlo Stonebanks wrote:
I am actually in the same folder as the libpgtcl.dll, and that particular
failure would raise a different error in any case:
'couldn't load library libpgtc: this library or a dependent library could
not be found in library path'
I did find this
OK. What if we do not need to access the static data on the test volume? It
is a rare application that goes there, and for those we can bring over both
volumes/tablespaces.
Thx, ken
On Fri, Feb 8, 2013 at 10:09 PM, Gavan Schneider pg-...@snkmail.com wrote:
On Friday, February 8, 2013 at 10:58,
I load and dump text files with currency values in them. The decimal point in these input and output formats is implied. The V format character works great for outputting numeric data:
# select to_char(123.45, '999V99');
to_char
---------
12345
(1 row)
However, when importing data, the V doesn't do
On 10 February 2013 20:50, Jeremy Lowery jslow...@gmail.com wrote:
I load and dump text files with currency values in them. The decimal point in
these input and output formats is implied. The V format character works
great for outputting numeric data:
# select to_char(123.45, '999V99');
to_char
Jeremy Lowery jslow...@gmail.com writes:
I load and dump text files with currency values in them. The decimal point in these
input and output formats is implied. The V format character works great for
outputting numeric data:
# select to_char(123.45, '999V99');
to_char
---------
12345
(1 row)
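On the input side, to_number does not reportedly honor V the way to_char does on output, so the implied decimal has to be shifted by hand. A common workaround (a sketch, not the only approach) is to parse the raw digits and divide:

```sql
-- Parse the implied-decimal text as an integer, then shift two places manually
SELECT to_number('12345', '99999') / 100;  -- 123.45
```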
Here's what I did to fix this in Ubuntu 12.10.
Now I cannot explain (a) why this problem came into being or (b) what the
science is behind my fix. This was my first dive into Linux logs, and there
seems to be an array of ways logging can be handled now, and was
handled historically, with some
List,
Is there a way to create command aliases in the psql shell? I can never
remember all the \d* commands and have to look them up every time. If I could
create things like \list_databases, \list_tables, \list_roles, etc, it would be
much easier for me to remember.
Is there a way to create
Modulok modu...@gmail.com wrote:
Is there a way to create command aliases in the psql shell? I can never
remember all the \d* commands and have to look them up every time. If I
could create things like \list_databases, \list_tables, \list_roles, etc,
it would be much easier for me to
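psql has no direct alias mechanism for backslash commands, but variables set in ~/.psqlrc can hold SQL statements that you run by name with `:name` interpolation (the variable names below are just examples):

```sql
-- In ~/.psqlrc (example variable names, not built-ins):
\set list_databases 'SELECT datname FROM pg_database ORDER BY 1;'
\set list_roles 'SELECT rolname FROM pg_roles ORDER BY 1;'

-- Then at the psql prompt, typing :list_databases
-- interpolates and executes the stored statement.
```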
I am using a query like this to try and normalize a table.
WITH nd as (select * from sales order by id limit 100),
people_update as (update people p set first_name = nd.first_name from
nd where p.email = nd.email returning nd.id),
insert into people (first_name, email, created_at, updated_at)
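The statement above is cut off; the intended shape is a single writable-CTE "upsert" (pre-9.5 style). A sketch, assuming sales carries the same columns being inserted and that id is non-null:

```sql
WITH nd AS (
  SELECT * FROM sales ORDER BY id LIMIT 100
),
people_update AS (
  UPDATE people p
     SET first_name = nd.first_name
    FROM nd
   WHERE p.email = nd.email
  RETURNING nd.id
)
INSERT INTO people (first_name, email, created_at, updated_at)
SELECT first_name, email, created_at, updated_at
  FROM nd
 WHERE id NOT IN (SELECT id FROM people_update);  -- insert only rows not updated
```

Note this pattern is not safe against concurrent writers; it can still race with another session inserting the same email between the UPDATE and the INSERT.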
We are hitting a situation where REINDEX is resulting in PostgreSQL going
into a deadlocked state *for ever*. On debugging the issue we found that
3 connections are going into some deadlocked state:
1. *idle in transaction*
2. *REINDEX waiting*
3. *SELECT waiting*
All these
On 2/6/2013 1:28 AM, Anoop K wrote:
3 connections are going into some deadlocked state:
1. *idle in transaction*
2. *REINDEX waiting*
3. *SELECT waiting*
you need to track down what resources are being locked by those
processes, by joining pg_stat_activity against pg_locks and
On Wed, Feb 6, 2013 at 1:28 AM, Anoop K anoo...@gmail.com wrote:
We are hitting a situation where REINDEX is resulting in PostgreSQL going into
a deadlocked state for ever. On debugging the issue we found that
3 connections are going into some deadlocked state:
idle in transaction
REINDEX waiting
We analyzed the application side. It doesn't seem to create a
transaction and keep it open. Stack traces indicate that it is BLOCKED in
JDBC openConnection.
Any JDBC driver issue or other scenario which can result in *idle in
transaction*?
Anoop
On Mon, Feb 11, 2013 at 11:16 AM, Sergey
On Sun, Feb 10, 2013 at 5:23 PM, Tim Uckun timuc...@gmail.com wrote:
This works pretty well except for when the top 100 records have
duplicate email addresses (two sales for the same email address).
How is it supposed to work when the migrated email already exists in people?
I am wondering
On 2/10/2013 9:55 PM, Anoop K wrote:
We analyzed the application side. It doesn't seem to create a
transaction and keep it open. Stack traces indicate that it is BLOCKED
in JDBC openConnection.
Any JDBC driver issue or other scenario which can result in *idle in
transaction*?
JDBC has
Yes, we do that.
On Mon, Feb 11, 2013 at 11:53 AM, John R Pierce pie...@hogranch.com wrote:
On 2/10/2013 9:55 PM, Anoop K wrote:
We analyzed the application side. It doesn't seem to create a
transaction and keep it open. Stack traces indicate that it is BLOCKED in
JDBC openConnection.
On Sun, Feb 10, 2013 at 9:55 PM, Anoop K anoo...@gmail.com wrote:
We analyzed the application side. It doesn't seem to create a transaction
and keep it open. Stack traces indicate that it is BLOCKED in JDBC
openConnection.
Any JDBC driver issue or other scenarios which can result in idle in
On 2/10/2013 10:25 PM, Anoop K wrote:
Yes, we do that.
Well, you need to figure out which connection isn't doing that, as one
of them is leaving a long-running transaction pending.
As I said, join pg_stat_activity.pid with pg_locks and whatever else to find
out what tables it's locking.
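One possible formulation of that join (column names assume 9.2 or later; older releases used procpid and current_query in pg_stat_activity):

```sql
-- Show ungranted lock requests and idle-in-transaction sessions,
-- alongside what each backend is (or was last) doing
SELECT a.pid, a.state, a.query,
       l.locktype, l.mode, l.granted,
       l.relation::regclass AS relation
  FROM pg_locks l
  JOIN pg_stat_activity a ON a.pid = l.pid
 WHERE NOT l.granted
    OR a.state = 'idle in transaction'
 ORDER BY a.pid;
```

The session whose locks are granted while its state is "idle in transaction" is the one holding everyone else up.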
I've poked around a bit with my good friend Google Search and come up
blank, and I'm fairly sure this is something that shouldn't be relied
upon, but it's a point of curiosity.
Suppose I have twenty processes that all request the same lock. (I'm
working with pg_advisory_xact_lock, but any
On Mon, Feb 11, 2013 at 12:26 PM, Chris Angelico ros...@gmail.com wrote:
Is there any sort of guarantee that all
the processes will eventually get a turn, or could two processes
handball the lock to each other and play keepings-off against the
other eighteen?
That should not happen. There
On Mon, Feb 11, 2013 at 6:12 PM, Pavan Deolasee
pavan.deola...@gmail.com wrote:
* Determine where to add myself in the wait queue.
*
* Normally I should go at the end of the queue.
Ah! That's perfect. So they'll actually go into perfect strict
round-robin, assuming that there
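For reference, the pattern under discussion looks like the sketch below: each session queues on the same transaction-scoped advisory lock, and per the wait-queue comment quoted above, waiters are normally served in FIFO order. The key value 42 is arbitrary:

```sql
BEGIN;
-- Blocks until this session reaches the front of the lock's wait queue;
-- the lock is released automatically at COMMIT or ROLLBACK.
SELECT pg_advisory_xact_lock(42);
-- ... critical section ...
COMMIT;
```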
On Feb 11, 2013, at 2:23, Tim Uckun timuc...@gmail.com wrote:
This works pretty well except for when the top 100 records have
duplicate email addresses (two sales for the same email address).
I am wondering what the best strategy is for dealing with this
scenario. Doing the records one at a
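One set-based way to handle the duplicates before they hit the writable CTE (a sketch, assuming id orders the sales so the latest row per email wins):

```sql
-- Collapse to one row per email: DISTINCT ON keeps the first row
-- in the ORDER BY, here the highest id (most recent sale) per email
SELECT DISTINCT ON (email) *
  FROM sales
 ORDER BY email, id DESC;
```

Feeding this deduplicated set into the CTE avoids two rows racing to insert the same email in a single statement.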