The array_agg() has nothing to do with it. It’s the group by.
Without knowing what you are conceptually trying to accomplish, I can’t say
much.
On my test 9.4.10 db, a similar example does a HashAggregate, so no sorting
(google HashAggregate vs GroupAggregate). But still it’s an expensive
I hope you find it useful.
Regards,
Kiriakos Georgiou
s resolved but now it’s
happening again. Luckily it has no ill effect other than consuming 4G of space
for an otherwise 1G database.
Can you offer any hints as to how I can cure this?
thanks,
Kiriakos Georgiou
pg_controldata output:
pg_control version number:            942
Catalog vers
If I'm understanding you correctly, you want a job queue.
This involves polling and retrieving jobs to work on them. The polling can be
assisted by listen/notify so workers don't poll unnecessarily. The retrieving
has to be done intelligently to avoid concurrency issues.
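A minimal sketch of that pattern (table name, channel name, and columns are my assumptions, not from the post):

```sql
-- Worker session: sleep on notifications instead of busy-polling.
LISTEN new_job;

-- When woken (or on a periodic fallback timer), try to claim one job.
-- The advisory lock keeps two workers from grabbing the same row and is
-- released automatically at transaction end.
BEGIN;
SELECT job_id, payload
FROM jobs
WHERE done = false
  AND pg_try_advisory_xact_lock(job_id)
LIMIT 1;
-- ... do the work, then mark the job done ...
COMMIT;
```

(On PostgreSQL 9.5 and later, SELECT ... FOR UPDATE SKIP LOCKED is the cleaner tool for the same retrieval step.)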
Kiriakos Georgiou
Try this: http://klicman.org/throttle/
Kiriakos
On Mar 19, 2012, at 12:06 AM, Aleksey Tsalolikhin wrote:
Hi. When pg_dump runs, our application becomes inoperative (too
slow). I was going to ask if nice'ing the postgres backend process
that handles the COPY would help but I just
The data anonymizer process is flawed because you are one misstep away from
data spillage. Sensitive data should be stored encrypted to begin with. For test
databases you or your developers can invoke a process that replaces the real
encrypted data with fake encrypted data (for which everybody has
On Mar 19, 2012, at 5:55 PM, Bill Moran wrote:
Sensitive data should be stored encrypted to begin with. For test databases you
or your developers can invoke a process that replaces the real encrypted
data with fake encrypted data (for which everybody has the key/password.)
Or if the
I would store sensitive data encrypted in the database. Check the pgcrypto
module.
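A minimal pgcrypto sketch (table, column, and passphrase are illustrative, not from the thread):

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

CREATE TABLE customer (
    id  serial PRIMARY KEY,
    ssn bytea          -- ciphertext, never plaintext
);

-- Encrypt on the way in, decrypt on the way out.
INSERT INTO customer (ssn)
VALUES (pgp_sym_encrypt('123-45-6789', 'secret-passphrase'));

SELECT pgp_sym_decrypt(ssn, 'secret-passphrase') AS ssn FROM customer;
```

Key management (keeping the passphrase out of logs and application config) is the hard part; pgcrypto only covers the encryption itself.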
Kiriakos
On Mar 18, 2012, at 1:00 PM, Janning Vygen wrote:
Hi,
I am working on postgresql 9.1 and loving it!
Sometimes we need a full database dump to test some performance issues with
real data.
In all likelihood you do not want to do what you described (lock the tables.)
You have to have a basic understanding of MVCC and transaction isolation levels
to determine if the default behavior of mvcc + 'read committed' is sufficient.
In a lot of cases it is.
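For instance, under the default 'read committed' level each statement sees a snapshot of committed data and readers never block writers, so explicit table locks are rarely needed. A quick way to check what you are actually running under (nothing here is specific to the poster's schema):

```sql
SHOW default_transaction_isolation;   -- 'read committed' unless changed

BEGIN;
SHOW transaction_isolation;           -- level in effect for this transaction
COMMIT;
```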
Kiriakos
On Mar 18, 2012,
+1 to seamless partitioning.
The idea of having a student work on this seems a bit scary, but what
seems scary to me may be a piece of cake for a talented kid :-)
Kiriakos
http://www.mockbites.com
On Mar 13, 2012, at 3:07 PM, John R Pierce wrote:
On 03/08/12 12:01 PM, Andy Colson
Instead of the joins you can use a subquery to get the first address.
Or you can do the joins without the group by and use row_number()
over(partition by companies.id) on the select list to label each company
address with a number starting at 1. You can just keep rows that have
row_number =
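A sketch of the row_number() approach (table and column names are guesses at the original schema):

```sql
SELECT id, name, street
FROM (
    SELECT c.id, c.name, a.street,
           row_number() OVER (PARTITION BY c.id ORDER BY a.id) AS rn
    FROM companies c
    JOIN addresses a ON a.company_id = c.id
) t
WHERE rn = 1;
```

Note the ORDER BY inside the window: without it, "first address" is whichever row the executor happens to produce first.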
Yes, can do. Just have an insert trigger on the jobs table that notifies the
monitor, something like:
CREATE OR REPLACE FUNCTION notify_monitor()
RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
BEGIN
    NOTIFY monitor;
    RETURN NULL;
END
$$;

CREATE TRIGGER jobs_trigger
AFTER INSERT
ON jobs
FOR EACH ROW
EXECUTE PROCEDURE notify_monitor();
Indeed, if there is not some sort of implementation limitation, it would be
cool to be able to lock two big integers like so:
pg_try_advisory_xact_lock(key1 bigint, key2 bigint)
That would solve your problem with locking UUIDs (although you still couldn't
lock UUIDs simultaneously across
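One workaround sketch (my assumption, not from the thread): fold the UUID down to a single bigint and lock that, accepting a small collision risk:

```sql
-- Take the first 16 hex digits of the UUID's md5 and reinterpret as bigint.
SELECT pg_try_advisory_xact_lock(
    ('x' || substr(md5('3f2504e0-4f89-11d3-9a0c-0305e82c3301'), 1, 16))::bit(64)::bigint
);
```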
This is correct behavior with MVCC. Do a 'select * from x' in thread 2 to
see why. The select for update in thread 2 sees the data in table x as
it was prior to thread 1 committing, thus it won't see the row with a=2.
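The sequence being described, as a sketch (table and values assumed):

```sql
-- Session 1:
BEGIN;
INSERT INTO x (a) VALUES (2);     -- not yet committed

-- Session 2 (read committed):
BEGIN;
SELECT * FROM x FOR UPDATE;       -- snapshot predates session 1's commit,
                                  -- so the a = 2 row is not returned

-- Session 1:
COMMIT;
```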
For further suggestions you'll have to explain what you are
I'd code it more general to allow for any user type:
select
yw, substr(id,1,2) as user_type, count(1)
from
pref_money
group by
yw, user_type
You can use some clever pivoting to get the user_types to be columns, but I see
no need to waste db cycles.
You can get the report you want by
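If the pivot really is wanted in SQL, a CASE-based sketch (the two user_type codes here are invented for illustration):

```sql
SELECT yw,
       count(CASE WHEN substr(id,1,2) = 'AA' THEN 1 END) AS aa_users,
       count(CASE WHEN substr(id,1,2) = 'BB' THEN 1 END) AS bb_users
FROM pref_money
GROUP BY yw;
```

count(expr) skips NULLs, so each column counts only its own type.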
As I understand it the order of evaluation of search arguments is up to the
optimizer. I've tested the following query, that is supposed to take advantage
of advisory locks to skip over rows that are locked by other consumers running
the exact same query and it seems to work fine. It
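The query itself isn't preserved here, but it was of this general shape (a sketch, with assumed names):

```sql
SELECT *
FROM jobs
WHERE processed = false
  AND pg_try_advisory_lock(id)
LIMIT 10;
```

The danger: nothing forces the planner to evaluate pg_try_advisory_lock() last, or only once per candidate row, so locks can be taken on rows the query never returns.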
On Feb 16, 2012, at 12:44 AM, Tom Lane wrote:
Kiriakos Georgiou kg.postgre...@olympiakos.com writes:
As I understand it the order of evaluation of search arguments is up to
the optimizer. I've tested the following query, that is supposed to take
advantage of advisory locks to skip over rows