On 12/25/2014 02:35 AM, Euler Taveira wrote:
Hi,
Currently the same message goes to the server log and the client app. Sometimes
it bothers me, since I have to analyze server logs and discover that
lc_messages is set to pt_BR and, to make things worse, that stup^H^H^H
application parses some error
Hi,
I am working with the PostgreSQL 9.4.0 source using Eclipse (Indigo) on
Ubuntu 14.04. I am facing a problem attaching to a client process of the
PostgreSQL server.
I am following the steps given in this link
On 12/28/2014 06:37 PM, Ravi Kiran wrote:
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
How do we rectify this?
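That message comes from the kernel's Yama ptrace restriction, not from Postgres
or Eclipse itself. A sketch of the usual workaround on Ubuntu (assuming you are
only debugging your own backend processes; relax it at your own discretion):

# temporarily allow a non-root user to ptrace its own processes
echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope

# to make it permanent, set kernel.yama.ptrace_scope = 0 in
# /etc/sysctl.d/10-ptrace.conf and reload it:
sudo sysctl -p /etc/sysctl.d/10-ptrace.conf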
On Fri, Dec 26, 2014 at 4:16 PM, Michael Paquier michael.paqu...@gmail.com
wrote:
On Fri, Dec 26, 2014 at 3:24 PM, Fujii Masao masao.fu...@gmail.com
wrote:
pglz_compress() and pglz_decompress() still use PGLZ_Header, so the frontend
which uses those functions needs to handle PGLZ_Header. But it
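For readers following along, the practical point is that the compressed image
begins with a small header carrying the raw (uncompressed) length, which any
frontend consumer has to read before it can size its output buffer. A
hypothetical sketch of that pattern (struct and function names are invented
here, not the real pg_lzcompress definitions):

/* Hypothetical illustration only; see pg_lzcompress.h for the real layout. */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct
{
    int32_t total_len;      /* header plus compressed payload, in bytes */
    int32_t rawsize;        /* size of the original, uncompressed data */
} demo_compressed_header;

/*
 * A frontend caller reads the header first so it knows how large a buffer
 * the decompressed data will need.
 */
static char *
demo_alloc_output_buffer(const char *image, int32_t *rawsize)
{
    demo_compressed_header hdr;

    memcpy(&hdr, image, sizeof(hdr));   /* copy out to avoid alignment issues */
    *rawsize = hdr.rawsize;
    return malloc((size_t) *rawsize);
}

int
main(void)
{
    /* fake image whose header claims 100 bytes of original data */
    demo_compressed_header hdr = {(int32_t) sizeof(hdr) + 40, 100};
    char    image[256];
    int32_t rawsize;
    char   *buf;

    memcpy(image, &hdr, sizeof(hdr));
    buf = demo_alloc_output_buffer(image, &rawsize);
    printf("output buffer needs %d bytes\n", (int) rawsize);
    free(buf);
    return 0;
}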
On 12/28/2014 07:49 PM, Ravi Kiran wrote:
Thank you for the response, sir. I am running both Eclipse and the client
under the same user name, which is ravi. I have installed the Postgres
source code under the user ravi, not postgres.
It doesn't matter how you installed it. How you *run* it
Sir, I followed the instructions in the link which you gave, but this time
I am getting the following error.
*Can't find a source file at
/build/buildd/eglibc-2.19/socket/../sysdeps/unix/sysv/linux/x86_64/recv.c *
*Locate the file or edit the source lookup path to include its location.*
is the
On 12/28/2014 10:18 PM, Ravi Kiran wrote:
Sir, I followed the instructions in the link which you gave, but this
time I am getting the following error.
*Can't find a source file at
/build/buildd/eglibc-2.19/socket/../sysdeps/unix/sysv/linux/x86_64/recv.c *
*Locate the file or edit the
Hi all,
While reviewing another patch, I have noticed that recovery_min_apply_delay
can have a negative value. And the funny part is that we actually attempt
to apply a delay even in this case, per this condition in
recoveryApplyDelay@xlog.c:
/* nothing to do if no delay configured */
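To illustrate, here is a standalone sketch (invented names, not the actual
xlog.c code) showing how a guard that only tests for exactly zero lets a
negative setting fall through, whereas a <= 0 test would treat it as "no
delay":

#include <stdio.h>

static int  recovery_min_apply_delay = -5000;   /* hypothetical value, in ms */

/* Guard in the style quoted above: only exactly 0 disables the delay. */
static int
delay_attempted_with_eq_zero_check(void)
{
    /* nothing to do if no delay configured */
    if (recovery_min_apply_delay == 0)
        return 0;
    return 1;               /* -5000 falls through; a delay is attempted */
}

/* Tighter guard: any non-positive value means "no delay". */
static int
delay_attempted_with_le_zero_check(void)
{
    if (recovery_min_apply_delay <= 0)
        return 0;
    return 1;
}

int
main(void)
{
    printf("== 0 check attempts a delay for %d ms: %s\n",
           recovery_min_apply_delay,
           delay_attempted_with_eq_zero_check() ? "yes" : "no");
    printf("<= 0 check attempts a delay for %d ms: %s\n",
           recovery_min_apply_delay,
           delay_attempted_with_le_zero_check() ? "yes" : "no");
    return 0;
}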
On Sat, Dec 27, 2014 at 3:42 AM, Alexey Vasiliev leopard...@inbox.ru wrote:
Thanks for suggestions.
Patch updated.
Cool, thanks. I just had an extra look at it.
+This is useful, if I using for restore of wal logs some
+external storage (like AWS S3) and no matter what the
On 21.12.2014 at 18:48, Fabrízio de Royes Mello wrote:
I work with some customers that have databases with a lot of schemas, and
sometimes we need to run a manual VACUUM in one schema, so it would be nice
to have a new option to run VACUUM on relations from a specific schema.
The new syntax could
On Tue, 2014-12-23 at 01:16 -0800, Jeff Davis wrote:
New patch attached (rebased, as well).
I also see your other message about adding regression testing. I'm
hesitant to slow down the tests for everyone to run through this code
path though. Should I add regression tests, and then remove
On Sun, Dec 28, 2014 at 12:37 PM, Jeff Davis pg...@j-davis.com wrote:
Do others have similar numbers? I'm quite surprised at how little
work_mem seems to matter for these plans (HashJoin might be a different
story though). I feel like I made a mistake -- can someone please do a
sanity check on
On Thu, 2014-12-11 at 02:46 -0800, Jeff Davis wrote:
On Sun, 2014-08-10 at 14:26 -0700, Jeff Davis wrote:
This patch requires the Memory Accounting patch, or something similar
to track memory usage.
The attached patch enables hashagg to spill to disk, which means that
hashagg will
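As a rough illustration of the general idea (a toy model with invented names,
not the patch itself): once the in-memory hash table reaches its size limit,
tuples belonging to groups that are not already present are set aside and
aggregated in a later pass, instead of growing the table further:

#include <stdio.h>

#define MAX_GROUPS_IN_MEMORY 2  /* stand-in for the work_mem limit */

typedef struct
{
    int  key;
    long count;
} group_t;

static group_t groups[MAX_GROUPS_IN_MEMORY];
static int ngroups = 0;

static int spill[64];           /* stand-in for a temp file of spilled tuples */
static int nspilled = 0;

/* Returns 1 if the tuple was absorbed in memory, 0 if the caller must spill it. */
static int
try_advance(int key)
{
    for (int i = 0; i < ngroups; i++)
    {
        if (groups[i].key == key)
        {
            groups[i].count++;
            return 1;
        }
    }
    if (ngroups < MAX_GROUPS_IN_MEMORY)
    {
        groups[ngroups].key = key;
        groups[ngroups].count = 1;
        ngroups++;
        return 1;
    }
    return 0;
}

/* One pass: aggregate what fits, remember the rest, emit and reset the table. */
static void
run_pass(const int *input, int n)
{
    nspilled = 0;
    for (int i = 0; i < n; i++)
    {
        if (!try_advance(input[i]))
            spill[nspilled++] = input[i];
    }
    for (int i = 0; i < ngroups; i++)
        printf("key %d -> count %ld\n", groups[i].key, groups[i].count);
    ngroups = 0;
}

int
main(void)
{
    int input[] = {1, 2, 3, 1, 4, 2, 3, 4, 4};

    run_pass(input, 9);
    while (nspilled > 0)
    {
        int pending[64];
        int npending = nspilled;

        for (int i = 0; i < npending; i++)
            pending[i] = spill[i];
        run_pass(pending, npending);
    }
    return 0;
}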
On Sun, 2014-12-28 at 12:37 -0800, Jeff Davis wrote:
I feel like I made a mistake -- can someone please do a
sanity check on my numbers?
I forgot to randomize the inputs, which doesn't matter much for hashagg
but does matter for sort. New data script attached. The results are even
*better* for
On Sat, Oct 11, 2014 at 09:07:46AM -0400, Peter Eisentraut wrote:
On 10/11/14 1:41 AM, Noah Misch wrote:
Good question. It would be nice to make the change there, for the benefit of
other consumers. The patch's setlocale_native_forked() assumes it never runs
in a multithreaded
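For anyone skimming: the technique under discussion is to probe a locale name
in a short-lived forked child, so that setlocale() itself is never run in the
main process. The caveat being raised is exactly the multithreaded case, since
after fork() from a threaded parent POSIX only guarantees async-signal-safe
calls in the child. A generic sketch of the fork-and-probe idea (not the
patch's actual code):

#include <locale.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Returns 1 if 'name' is accepted by setlocale(), 0 if not, -1 on error. */
static int
locale_is_valid_forked(const char *name)
{
    pid_t pid = fork();
    int   status;

    if (pid < 0)
        return -1;
    if (pid == 0)
    {
        /* child: nothing else runs here, so call setlocale() and report back */
        _exit(setlocale(LC_ALL, name) != NULL ? 0 : 1);
    }
    if (waitpid(pid, &status, 0) != pid)
        return -1;
    return (WIFEXITED(status) && WEXITSTATUS(status) == 0) ? 1 : 0;
}

int
main(void)
{
    printf("\"C\" accepted: %d\n", locale_is_valid_forked("C"));
    printf("\"bogus_locale\" accepted: %d\n", locale_is_valid_forked("bogus_locale"));
    return 0;
}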
On Sat, Dec 27, 2014 at 8:51 PM, Tom Lane t...@sss.pgh.pa.us wrote:
Robert Haas robertmh...@gmail.com writes:
On Sat, Dec 27, 2014 at 7:55 PM, Tom Lane t...@sss.pgh.pa.us wrote:
This would have the effect of transferring all responsibility for
dead-stats-entry cleanup to autovacuum. For
On 12/28/2014 04:58 PM, Noah Misch wrote:
On Sat, Oct 11, 2014 at 09:07:46AM -0400, Peter Eisentraut wrote:
On 10/11/14 1:41 AM, Noah Misch wrote:
Good question. It would be nice to make the change there, for the benefit of
other consumers. The patch's setlocale_native_forked() assumes it
Hi,
Is anybody looking into the problems in pgbench pointed out by Coverity? If not,
I would like to work on fixing them, because I need to write patches
for -f option related issues anyway.
Best regards,
--
Tatsuo Ishii
SRA OSS, Inc. Japan
English: http://www.sraoss.co.jp/index_en.php
Hi
Currently tab completion for 'COMMENT ON {object} foo IS' will result in the 'IS'
being duplicated up to two times; not a world-shattering issue I know, but the
fix is trivial and I stumble over it often enough for it to mildly annoy me.
Patch attached.
Regards
Ian Barwick
--
Ian
On Wed, Dec 24, 2014 at 4:00 PM, Dilip kumar dilip.ku...@huawei.com wrote:
Case 1: In the case of a complete DB:
In the base code it will first process all the tables in stage 1, then in
stage 2, and so on, so that at some point all the tables are analyzed at least
up to a certain stage.
But if we process all
On 29 December 2014 10:22, Amit Kapila wrote:
Case 1: In the case of a complete DB:
In the base code it will first process all the tables in stage 1, then in stage 2,
and so on, so that at some point all the tables are analyzed at least up to
a certain stage.
But if we process all the stages for one table
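To make the ordering difference concrete, here is a toy sketch (invented names,
nothing taken from vacuumdb itself) contrasting the two loop orders: stage-major
gets every table to minimal statistics early, while table-major finishes
individual tables sooner:

#include <stdio.h>

#define NUM_STAGES 3
#define NUM_TABLES 4

static void
analyze_one(int table, int stage)
{
    printf("analyze table %d at stage %d\n", table, stage);
}

/* Base-code order: every table reaches stage 1 before any table sees stage 2. */
static void
stage_major_order(void)
{
    for (int stage = 0; stage < NUM_STAGES; stage++)
        for (int table = 0; table < NUM_TABLES; table++)
            analyze_one(table, stage);
}

/* Alternative order: run all stages of one table before moving to the next. */
static void
table_major_order(void)
{
    for (int table = 0; table < NUM_TABLES; table++)
        for (int stage = 0; stage < NUM_STAGES; stage++)
            analyze_one(table, stage);
}

int
main(void)
{
    puts("-- stage-major (base code) --");
    stage_major_order();
    puts("-- table-major --");
    table_major_order();
    return 0;
}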
On Sun, 2014-12-21 at 13:00 -0500, Tom Lane wrote:
Tomas Vondra t...@fuzzy.cz writes:
i.e. either destroy the whole context if possible, and just free the
memory when using a shared memory context. But I'm afraid this would
penalize the shared memory context, because that's intended for
On Tue, 2014-12-16 at 00:27 +0100, Tomas Vondra wrote:
plperl.c: In function 'array_to_datum_internal':
plperl.c:1196: error: too few arguments to function 'accumArrayResult'
plperl.c: In function 'plperl_array_to_datum':
plperl.c:1223: error: too few arguments to function
On Oct 25, 2014, at 4:31, Jim Nasby jim.na...@bluetreble.com wrote:
Please don't top-post.
On 10/24/14, 3:40 AM, Borodin Vladimir wrote:
I have taken some backtraces (they are attached to this email) of two
processes with the following command:
pid=17981; while true; do date; gdb -batch -e
On Tue, 2014-04-01 at 13:08 -0400, Tom Lane wrote:
I think a patch that stood a chance of getting committed would need to
detect whether the aggregate was being called in simple or grouped
contexts, and apply different behaviors in the two cases.
The simple context doesn't seem like a big