I ran into this yesterday and wanted to post it to see whether this is
working as expected or whether it is a bug.
After creating two tables with a foreign key between them and then changing the
owner of the tables to a group, I lost the ability to insert into the first
table, even executing as SUPERUSER.
I thought th
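The scenario as described might be reproduced with something like the following sketch (role and table names are my own assumptions, not taken from the report):

```sql
-- Hypothetical repro sketch of the reported scenario; names are invented.
CREATE ROLE some_group;                       -- a group role (NOLOGIN by default)
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child  (id int PRIMARY KEY,
                     parent_id int REFERENCES parent (id));
ALTER TABLE parent OWNER TO some_group;
ALTER TABLE child  OWNER TO some_group;
-- Per the report, inserting into the first table then fails even as superuser:
INSERT INTO parent VALUES (1);
```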
On 03/09/2015 10:40 AM, Anushka Chandrababu wrote:
Hi,
Thank you Adrian for showing interest in my query.
The software I am using is Tableau Server. The library for tcl is lib.pgtcl.
Well on here:
https://github.com/flightaware/Pgtcl/blob/master/doc/PGTCL-NOTES
I do not see that pg_conndefaults
Hi,
Thank you Adrian for showing interest in my query.
The software I am using is Tableau Server. The library for tcl is lib.pgtcl.
Thank You
Anushka
On Mon, Mar 9, 2015 at 11:00 PM, Adrian Klaver
wrote:
> On 03/09/2015 09:52 AM, Anushka Chandrababu wrote:
>
>> Hi,
>>
>> I am using Postgres as a part of another software and the version is 9.3.
On 03/09/2015 09:19 AM, Joshua D. Drake wrote:
On 03/09/2015 08:57 AM, Adrian Klaver wrote:
On 03/09/2015 08:49 AM, Kevin Grittner wrote:
pinker wrote:
INFO: vacuuming "my_table"
INFO: "my_table": found 0 removable, 3043947 nonremovable row
versions in 37580 pages
DETAIL: 0 dead row versions cannot be removed yet.
On 03/09/2015 09:52 AM, Anushka Chandrababu wrote:
Hi,
I am using Postgres as a part of another software and the version is 9.3.
That software would be?
I do not know how then the pg_conndefaults command works.
Please help me with some other solution so that I can get the details of
a particular connection, such as host IP address, and then use that to create
a new connection.
Joshua D. Drake wrote:
> On 03/09/2015 08:57 AM, Adrian Klaver wrote:
>> On 03/09/2015 08:49 AM, Kevin Grittner wrote:
>>> pinker wrote:
DETAIL: 0 dead row versions cannot be removed yet.
>>>
>>> So there are no longer any dead rows being left behind, right?
>>>
>>> Why are we still discussing this? Do you have some other question?
Nigel Gardiner writes:
> I'm looking at making a data warehouse to address our rapidly spiralling
> report query times against the OLTP. I'm looking first at what it would take
> to make this a real-time data warehouse, as opposed to batch-driven.
>
> One approach I've seen used to achieve re
On Mon, Mar 9, 2015 at 12:43:05PM -0400, Bruce Momjian wrote:
> On Fri, Mar 6, 2015 at 06:10:15PM -0500, Stephen Frost wrote:
> > Technically, there haven't been any complaints about either pg_dumpall's
> > behavior in this regard, or pg_upgrade's, but pg_upgrade's post-upgrade
> > scripts would happily remove any databases which were marked as
> > 'datallowconn = false'
Hi,
I am using Postgres as a part of another software and the version is 9.3.
I do not know how then the pg_conndefaults command works.
Please help me with some other solution so that I can get the details of a
particular connection, such as host IP address, and then use that to create a
new connection.
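If SQL can still be executed over the existing connection, one possible workaround (a sketch, not Pgtcl-specific) is to ask the server itself for the connection's details; these functions are available in 9.3:

```sql
-- Report the current connection's endpoints from the server side.
SELECT inet_server_addr()  AS server_ip,    -- NULL for Unix-socket connections
       inet_server_port()  AS server_port,
       inet_client_addr()  AS client_ip,
       current_database()  AS database,
       current_user        AS "user";
```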
On Mon, Mar 9, 2015 at 10:12 AM, Eli Murray wrote:
> Thank you all for your help. I'm following along with John McKown's
> suggestion but when I run the update query I get "UPDATE 32956" but the
> personid column in my rawdata table has null values for every record.
>
> Here's the exact query I ran:
On Fri, Mar 6, 2015 at 06:10:15PM -0500, Stephen Frost wrote:
> Technically, there haven't been any complaints about either pg_dumpall's
> behavior in this regard, or pg_upgrade's, but pg_upgrade's post-upgrade
> scripts would happily remove any databases which were marked as
> 'datallowconn = false'
On 03/09/2015 08:57 AM, Adrian Klaver wrote:
On 03/09/2015 08:49 AM, Kevin Grittner wrote:
pinker wrote:
INFO: vacuuming "my_table"
INFO: "my_table": found 0 removable, 3043947 nonremovable row
versions in 37580 pages
DETAIL: 0 dead row versions cannot be removed yet.
So there are no longer any dead rows being left behind, right?
On Mon, 9 Mar 2015 09:05:07 -0700 (MST)
pinker wrote:
>
> > So there are no longer any dead rows being left behind, right?
> >
> > Why are we still discussing this? Do you have some other question?
>
> There are no dead rows, but postgres still cannot reuse the space because of
> 3043947 nonremovable row versions ..
> So there are no longer any dead rows being left behind, right?
>
> Why are we still discussing this? Do you have some other question?
There are no dead rows, but postgres still cannot reuse the space because of
3043947 nonremovable row versions ..
INFO: vacuuming "my_table"
INFO: "my_table": found 0 removable, 3043947 nonremovable row versions in
37580 pages
On 03/09/2015 08:49 AM, Kevin Grittner wrote:
pinker wrote:
INFO: vacuuming "my_table"
INFO: "my_table": found 0 removable, 3043947 nonremovable row versions in
37580 pages
DETAIL: 0 dead row versions cannot be removed yet.
So there are no longer any dead rows being left behind, right?
On 03/09/2015 08:05 AM, pinker wrote:
select * from pg_stat_activity where state ilike 'idle in transaction%' it's
empty.
And there is no relation between this table and the tables or functions
being queried?
no...
If snapshot is what I think it means, you might want to point them at:
http://www.postgresql.org/docs/9.3/intera
pinker wrote:
> INFO: vacuuming "my_table"
> INFO: "my_table": found 0 removable, 3043947 nonremovable row versions in
> 37580 pages
> DETAIL: 0 dead row versions cannot be removed yet.
So there are no longer any dead rows being left behind, right?
Why are we still discussing this? Do you have some other question?
and select txid_current_snapshot() - 1694632069:1694632069:
select distinct xmin::text::bigint from sms_actual_mapper order by 1 desc
1694595273
--
View this message in context:
http://postgresql.nabble.com/VACUUM-FULL-doesn-t-reduce-table-size-tp5840782p5841128.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
Thank you all for your help. I'm following along with John McKown's
suggestion but when I run the update query I get "UPDATE 32956" but the
personid column in my rawdata table has null values for every record.
Here's the exact query I ran:
UPDATE rawdata SET personid = (SELECT personid FROM assig
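A likely cause is that the subquery is uncorrelated, so every row receives the same single value (or NULL if the subquery matches nothing). A correlated form via UPDATE ... FROM avoids that; this is only a sketch, since the real table and join columns are not visible in the thread (`assigned` and `name` below are assumptions):

```sql
-- Hypothetical names: the "assigned" lookup table and "name" join column
-- are assumptions, not taken from the thread.
UPDATE rawdata AS r
SET    personid = a.personid
FROM   assigned AS a
WHERE  r.name = a.name;   -- correlate each rawdata row with its match
```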
select * from pg_stat_activity where state ilike 'idle in transaction%' it's
empty.
>And there is no relation between this table and the tables or functions
being queried?
no...
>If snapshot is what I think it means, you might want to point them at:
>http://www.postgresql.org/docs/9.3/intera
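When pg_stat_activity shows nothing "idle in transaction", two other things can still pin old row versions: any long-open transaction, and prepared transactions. A sketch for checking both (column names as in 9.3's pg_stat_activity):

```sql
-- Sessions with an open transaction, oldest first:
SELECT pid, usename, state, xact_start, now() - xact_start AS xact_age
FROM   pg_stat_activity
WHERE  xact_start IS NOT NULL
ORDER  BY xact_start;

-- Prepared transactions survive disconnects and also hold back vacuum:
SELECT * FROM pg_prepared_xacts;
```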
On 03/09/2015 07:37 AM, pinker wrote:
Adrian Klaver-4 wrote
On 03/09/2015 07:08 AM, pinker wrote:
I did: select pg_cancel_backend(pid) from pg_stat_activity where usename <>
'mine';
What makes you think that queries from usename = 'mine' are not important?
Because on production I don't have access to this table.
Adrian Klaver-4 wrote
> On 03/09/2015 07:08 AM, pinker wrote:
>> I did: select pg_cancel_backend(pid) from pg_stat_activity where usename <>
>> 'mine';
>
> What makes you think that queries from usename = 'mine' are not important?
>
> Because on production I don't have access to this table.
>
On 03/09/2015 07:08 AM, pinker wrote:
I did: select pg_cancel_backend(pid) from pg_stat_activity where usename <>
'mine';
What makes you think that queries from usename = 'mine' are not important?
Or to get back to the original request:
What does select * from pg_stat_activity show?
Also did
I did: select pg_cancel_backend(pid) from pg_stat_activity where usename <>
'mine';
and then tried again vacuum full:
INFO: vacuuming "my_table"
INFO: "my_table": found 0 removable, 3043947 nonremovable row versions in
37580 pages
DETAIL: 0 dead row versions cannot be removed yet.
CPU 1.07s/1.
On 03/08/2015 08:49 PM, Eli Murray wrote:
Hi all,
I'm a student journalist working on a project for our student paper
which lists salaries and positions for every staff member at the
university. We received the data from an FOI request but the university
is refusing to give us primary keys for t
On 03/09/2015 12:57 AM, Anushka Chandrababu wrote:
Hi,
I am accessing postgres database using tcl. I am looking for a way to
change the database in an existing connection, but I saw that this is
not possible in postgresql without creating a new connection.
Then I decided to query pg_conndefaults
On 03/09/2015 04:22 AM, pinker wrote:
Yes I have. Environment I'm working on is production snapshot, so there is no
active transactions that could block those blocks from being removed...
Well the below from your original post would say different:
VACUUM FULL VERBOSE output:
INFO: vacuuming "my_table"
On Sun, Mar 8, 2015 at 10:49 PM, Eli Murray wrote:
> Hi all,
>
> I'm a student journalist working on a project for our student paper which
> lists salaries and positions for every staff member at the university. We
> received the data from an FOI request but the university is refusing to give
> us
Vick Khera wrote
> On Fri, Mar 6, 2015 at 5:59 AM, pinker <pinker@> wrote:
>
>> I have deleted a large number of records from my_table, which originally had
>> 288 MB. Then I ran vacuum full to make the table size smaller. After this
>> operation size of the table remains the same, desp
Yes I have. Environment I'm working on is production snapshot, so there is no
active transactions that could block those blocks from being removed...
--
View this message in context:
http://postgresql.nabble.com/VACUUM-FULL-doesn-t-reduce-table-size-tp5840782p5841085.html
María Dovale wrote:
> Thanks, I mean, how heavy it is in kB.
You can use lo_lseek64 to find out how big a large object is.
To find out the size of large object 24858, you can
SELECT lo_lseek64(lo_open(24858, 262144), 0, 2);
The last "2" here is SEEK_END from /usr/include/unistd.h, so it may be
t
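An alternative sketch that sizes all large objects at once by summing their stored chunks (reading pg_largeobject typically requires superuser):

```sql
-- Total bytes per large object, from the pg_largeobject catalog.
SELECT loid, sum(octet_length(data)) AS bytes
FROM   pg_largeobject
GROUP  BY loid
ORDER  BY bytes DESC;
```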
sorry, 64 GB swap
--
View this message in context:
http://postgresql.nabble.com/autovacuum-worker-running-amok-and-me-too-tp5840299p5841075.html
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-general
Hi Paul,
I just found my system (24 GB Mem, 72 GB Swap) running nearly at its limits:
the current vacuum is using 66.7 GB (sixty-six point seven) of memory and
my system is nearly down.
I'm sorry, but that must be a bug. Remember: it's the ANALYZE of a
GIN index that is causing these problems. Va
> Hi,
>
>
> does no one have an idea?
>
> It may be a rare case doing the same UPDATE a thousand times. But I'm really
> interested why this is not happening when doing DIFFERENT updates. And, of
> course, if something could be done on the database side to prevent this
> behavior in case so
Hi,
it is a side effect of the MVCC implementation in Postgres. It is not possible
to vacuum inside an open transaction.
If you need this, you should use a different database - Postgres
doesn't work well when one record is used very often in a transaction.
The usual solution for Postgres is some proxy, t
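A minimal sketch of the effect being described (the table name is hypothetical): each UPDATE of the same row creates a new row version, and none of the superseded versions can be reclaimed while the transaction that created them is still open:

```sql
CREATE TABLE counter (id int PRIMARY KEY, n int);   -- hypothetical table
INSERT INTO counter VALUES (1, 0);

BEGIN;
UPDATE counter SET n = n + 1 WHERE id = 1;  -- leaves one dead version behind
UPDATE counter SET n = n + 1 WHERE id = 1;  -- ...and another
-- A thousand such updates accumulate ~1000 row versions; VACUUM cannot
-- remove any of them until this transaction commits or rolls back.
COMMIT;
```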
Hi,
I am accessing postgres database using tcl. I am looking for a way to
change the database in an existing connection, but I saw that this is not
possible in postgresql without creating a new connection.
Then I decided to query pg_conndefaults so that I can get the previous
connection detail, m
Hi,
does no one have an idea?
It may be a rare case doing the same UPDATE a thousand times. But I'm really
interested why this is not happening when doing DIFFERENT updates. And, of
course, if something could be done on the database side to prevent this
behavior in case some application dev