2013/8/6 Alvaro Herrera:
> Pavel Stehule escribió:
>
>> your code can be translated to
>>
>> CREATE OR REPLACE
>> FUNCTION GROUP_CONCAT_ATOM(field1 TEXT, field2 TEXT, delimiter TEXT)
>> RETURNS TEXT AS $$
>> BEGIN
>> RETURN COALESCE(field1||delimiter||field2, field2, field1);
>> END;
>> $$ LANGUAGE plpgsql;
Pavel Stehule escribió:
> your code can be translated to
>
> CREATE OR REPLACE
> FUNCTION GROUP_CONCAT_ATOM(field1 TEXT, field2 TEXT, delimiter TEXT)
> RETURNS TEXT AS $$
> BEGIN
> RETURN COALESCE(field1||delimiter||field2, field2, field1);
> END;
> $$ LANGUAGE plpgsql;
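For completeness, a transition function like the one above can be wired into an aggregate roughly as follows (a sketch; the names and the thread's final version may differ):

```sql
-- Hypothetical aggregate over (value, delimiter) pairs; the transition
-- function's first parameter acts as the accumulated state.
CREATE AGGREGATE GROUP_CONCAT(TEXT, TEXT) (
    SFUNC = GROUP_CONCAT_ATOM,  -- called as (state, next value, delimiter)
    STYPE = TEXT                -- state starts as NULL, so the COALESCE
                                -- returns the first value unchanged
);

-- usage (table and column names illustrative):
-- SELECT GROUP_CONCAT(name, ', ') FROM some_table GROUP BY some_col;
```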
Actually you do
Hello
2013/8/3 immersive.ex...@gmail.com:
> I needed a GROUP_CONCAT to port some queries to postgres.
>
> In discussions online, I found repeated advice for rewriting the queries,
> but no solid way to formulate the GROUP_CONCAT as a postgres function.
> Rewrite perhaps hundreds of queries that happen to be in the app
Hi,
I found that the problem was likely caused by my attempt to clear the RAM cache
on the slave by running "sync; echo 3 > /proc/sys/vm/drop_caches",
which made the WAL receiver process on the slave go away.
ps -ef | grep receiver
postgres  6182  6178  0 12:11 ?        00:00:06 postgres: wal receiver process
No luck on posting at that blog; comments are limited to X characters.
Here is the final version with the minor update to the variable names
and comments:
-- group_concat.sql
-- permutation of GROUP_CONCAT parameter types with delimiter parameter furnished:
CREATE OR REPLACE
FUNCTION GROUP_CONCAT
Well after testing, this is what I found:
When you try to use ANYELEMENT parameters, and even just a VARIADIC
TEXT[] parameter to support the optional delimiter argument:
FUNCTION GROUP_CONCAT_ATOM(ANYELEMENT, ANYELEMENT, VARIADIC delimiters TEXT[])
when you go to create the aggregates, post
Hi,
I am doing some stress tests on a pair of PG servers to monitor
pg_stat_replication; during the test, pg_stat_replication suddenly became
empty.
PG version: 9.2.4
O/S: Ubuntu 12.04
Since I need to monitor the replication lag from time to time, if the
pg_stat_replication becomes
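When pg_stat_replication does have a row, a lag check along these lines can be run on the master (a sketch using 9.2-era column and function names):

```sql
-- Approximate replay lag in bytes per connected standby:
SELECT client_addr, state,
       pg_xlog_location_diff(sent_location, replay_location) AS replay_lag_bytes
FROM pg_stat_replication;
```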
On Mon, Aug 5, 2013 at 6:22 PM, P. Broennimann
wrote:
> Hello
>
> I'd like to use dblink to insert records (several) from a local table to a
> remote table. Is it possible to do this in one statement? Something like:
>
> insert into select col1, col2, col3 from
dblink_exec can be of a certai
On Mon, Aug 5, 2013 at 5:15 PM, Jack Christensen
wrote:
> When using a subquery as a source for row_to_json, depending on the order of
> arguments it may ignore renaming a column.
>
> jack=# create table player(
> jack(# player_id serial primary key,
> jack(# name varchar not null unique
> jac
When using a subquery as a source for row_to_json, depending on the
order of arguments it may ignore renaming a column.
jack=# create table player(
jack(# player_id serial primary key,
jack(# name varchar not null unique
jack(# );
NOTICE: CREATE TABLE will create implicit sequence
"player_
On Mon, Aug 5, 2013 at 2:21 AM, Thomas Kellerer wrote:
> Stephen Brearley, 02.08.2013 11:40:
> Why don't you use the official way to re-configure the Windows service and
> use pg_ctl unregister and pg_ctl register
> to make it use the correct data directory. There is no need to manually
> change
I believe his VARIADIC might cover the optional-delimiter use, but
he suggests using an array aggregation function instead - which returns
an array rather than a delimited string, so you would still have to rework
the original queries.
Now you could write a GROUP_CONCAT aggregate that
Ahh! I was looking for that in the documentation! I believe it is not
mentioned on the function documentation page. I had tried ANY, which works in
other SQL dialects, but it barfed on that. Let me test it and post the more
efficient code, if it works...
On 08/05/2013 01:04
Thanks Blade-of-Light (mysterious person)
As you’ll see from my later post, I managed to fix things, largely by
understanding that there was a Postgres service I needed to stop before I could
make the last change I needed. However, there is a lot of useful stuff in your
post, which I will tr
For posterity, that finalized function could be posted here:
http://okbob.blogspot.com/2009/08/mysql-functions-for-postgresql.html
There's already a GROUP_CONCAT, listed there, but I guess this one was
lacking in some way.
On Mon, Aug 5, 2013 at 10:04 AM, Alvaro Herrera wrote:
> immersive.ex...
Hi all,
I have been posting on my blog about getting LibreOffice Base and
PostgreSQL working together. My latest actually focuses on getting
multiple users connecting to a single, remote, stand-alone PostgreSQL
server. You can set this up fairly quickly and - in my case, anyway - with
no program
immersive.ex...@gmail.com escribió:
> Note: I found some close-but-no cigar aggregates shared online, but
> they would not accept integer arguments, nor would they handle the
> optionally furnished delimiter. People would suggest casting the
> argument to the pseudo-GROUP_CONCAT
Please,
take a look at contrib/dict_int and create your own dict_noop.
It should be easy. I think you could document it and share it
with people (wiki.postgresql.org?), since there have been other people
interested in a noop dictionary. Also, don't forget to modify
your configuration - use ts_debug(), i
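ts_debug() shows how a given configuration tokenizes the input; for the URL case discussed in this thread, a check might look like this (the 'english' configuration is an assumption):

```sql
-- See which token types and dictionaries the parser applies to the URL:
SELECT alias, token, dictionaries
FROM ts_debug('english', 'http://youtube.com/Y6dsHDX');
```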
Hello,
I encountered the following problem: my goal is to extract links from a text
using tsearch2. Everything seemed fine until I got some YouTube
links - they contain both lower- and uppercase letters, and the tsearch
parser lowercases everything (from http://youtube.com/Y6dsHDX I got
http://y
I needed a GROUP_CONCAT to port some queries to postgres.
In discussions online, I found repeated advice for rewriting the
queries, but no solid way to formulate the GROUP_CONCAT as a
postgres function. Rewrite perhaps hundreds of queries that happen
to be in the app
On 08/05/2013 06:24 AM, Carlos Henrique Reimer wrote:
Hi,
Yes, I agree with you that it must be upgraded to a supported version,
but as the developer has not yet certified (homologated) the system for
newer PG versions, I need to find some way to fix it with 8.2.
Will try to install PG in another windo
Hi,
Yes, I agree with you that it must be upgraded to a supported version, but
as the developer has not yet certified (homologated) the system for newer PG
versions, I need to find some way to fix it with 8.2.
Will try to install PG in another windows box, copying the data directories
over the network an
On Sun, Aug 4, 2013 at 7:01 PM, Craig Ringer wrote:
>
>
> I think part of the issue is that people tend to consider stored
> procedures part of the application's internal implementation where you
> just change all the call sites when you change the function.
>
> Normally stored proc are really mo
Andres Freund writes:
> ... Also, there are global
> limits to the amount of filehandles that can simultaneously opened on a
> system.
Yeah. Raising max_files_per_process puts you at serious risk that
everything else on the box will start falling over for lack of available
FD slots. (PG itself
On 08/05/2013 10:42 AM, John R Pierce wrote:
On 8/5/2013 1:01 AM, KONDO Mitsumasa wrote:
When we open a file, the ext3 or ext4 file system seems to do a sequential
search of inodes in the file's directory.
no, ext3/4 uses H-tree structures to search directories over 1 block
long quite efficiently.
Luca Ferrari wrote:
On Fri, Aug 2, 2013 at 6:55 PM, wrote:
> thanks for the hint - this is probably one of the things to do.
> I have something else in mind, but at present I just suspect that this might
> happen:
> when I modify data and select _without an ordering_, I am pretty sure to get
On 2013-08-05 18:40:10 +0900, KONDO Mitsumasa wrote:
> (2013/08/05 17:14), Amit Langote wrote:
> >So, within the limits of max_files_per_process, the routines of file.c
> >should not become a bottleneck?
> It may not become a bottleneck.
> One FD consumes 160 bytes on a 64-bit system. See the Linux manual at "
On Fri, Aug 2, 2013 at 6:55 PM, wrote:
> thanks for the hint - this is probably one of the things to do.
> I have something else in mind, but at present I just suspect that this might
> happen:
> when I modify data and select _without an ordering_, I am pretty sure to get
> the data
> in a dif
(2013/08/05 17:14), Amit Langote wrote:
So, within the limits of max_files_per_process, the routines of file.c
should not become a bottleneck?
It may not become a bottleneck.
One FD consumes 160 bytes on a 64-bit system. See the Linux manual at "epoll".
Regards,
--
Mitsumasa KONDO
NTT Open Source Software
Hello
I'd like to use dblink to insert records (several) from a local table to a
remote table. Is it possible to do this in one statement? Something like:
insert into select col1, col2, col3 from
Can someone give me an example or tell me what commands to use? What I have
so far:
l_con
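One common approach is to push the local rows through dblink_exec, one INSERT statement per row; a hedged sketch with illustrative table and connection names:

```sql
-- Open a named connection to the remote server (connection string illustrative):
SELECT dblink_connect('remote', 'host=remotehost dbname=mydb user=me');

-- Build one INSERT per local row; format() with %L quotes each value safely:
SELECT dblink_exec('remote',
       format('INSERT INTO remote_table (col1, col2, col3) VALUES (%L, %L, %L)',
              col1, col2, col3))
FROM local_table;

SELECT dblink_disconnect('remote');
```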
On 8/5/2013 1:01 AM, KONDO Mitsumasa wrote:
When we open a file, the ext3 or ext4 file system seems to do a sequential
search of inodes in the file's directory.
no, ext3/4 uses H-tree structures to search directories over 1 block
long quite efficiently.
--
john r pierce
On Mon, Aug 5, 2013 at 5:01 PM, KONDO Mitsumasa
wrote:
> Hi Amit,
>
>
> (2013/08/05 15:23), Amit Langote wrote:
>>
>> May the routines in fd.c become a bottleneck with a large number of
>> concurrent connections to the above database, say something like "pgbench
>> -j 8 -c 128"? Is there any other place
Hi Amit,
(2013/08/05 15:23), Amit Langote wrote:
May the routines in fd.c become a bottleneck with a large number of
concurrent connections to the above database, say something like "pgbench
-j 8 -c 128"? Is there any other place I should be paying attention
to?
What kind of file system did you use?