On Wed, Jul 19, 2017 at 8:53 PM, Guyren Howe wrote:
> Thanks. Seeking greater understanding, why is json_agg(*) not equivalent?
>
Are you referring to the fact that this provokes an error?
"select json_agg(*) from schemata;"
The json_agg(expression) function has an arity
On Jul 19, 2017, at 20:47 , Paul Jungwirth
wrote:
>
>> which is great. I have an array of perfect JSON objects. Now I just need
>> to turn that into a single JSON object.
>
> I think you're saying you want it as a single JSON *array*, right? An object
> of
which is great. I have an array of perfect JSON objects. Now I just need
to turn that into a single JSON object.
I think you're saying you want it as a single JSON *array*, right? An
object of objects doesn't make sense. Assuming that's right, this seems
to work:
db1=# select
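Completing the thought above, a minimal sketch of the aggregation (the subquery here is illustrative, not necessarily the original poster's query):

```sql
-- Build one JSON object per row with row_to_json(), then fold all of them
-- into a single JSON array with json_agg(). json_agg() takes exactly one
-- expression argument, which is why json_agg(*) raises an error.
SELECT json_agg(row_to_json(s)) AS foos
FROM (
  SELECT catalog_name, schema_name
  FROM information_schema.schemata
) AS s;
```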
Hi, ALL
On Thu, May 11, 2017 at 6:47 PM, Adrian Klaver
wrote:
> On 05/11/2017 07:26 AM, Igor Korot wrote:
>>
>> Adrian et al,
>>
>> On Thu, May 11, 2017 at 9:54 AM, Adrian Klaver
>> wrote:
>>>
>>> On 05/11/2017 06:24 AM, Igor Korot wrote:
>
Guyren Howe writes:
> I have an array of perfect JSON objects. Now I just need to turn that into a
> single JSON object. But when I do the obvious:
> SELECT array_to_json(
> SELECT
> ARRAY_AGG(foo) AS foos
> FROM (
>
> On 20 Jul 2017, at 5:26 AM, Hannes Erven wrote:
>
> Hi Glen,
>
>
>> I'd like to enforce that in a transaction, after a couple inserts &
>> updates, a particular column has continuous values like 1, 2, 3, and
>> never any gaps. Is it possible to do?
>
> I gave a
PostgreSQL 9.6.
I have a view that shows information about schemas:
SELECT schemata.catalog_name,
schemata.schema_name,
('/'::text || (schemata.schema_name)::text) AS schema_name_address
FROM information_schema.schemata
ORDER BY schemata.catalog_name, schemata.schema_name
Fine. I now want to
Matthew Byrne writes:
> Would a more feasible approach be to introduce new types (say, TEXT2 and
> JSONB2 - or something better-sounding) which are the same as the old ones
> but add support for \u0000 and UTF code point 0? This would isolate nul-containing
> byte arrays to the
Luca Looz writes:
> Thanks for the explanation!
> Can these checks be implemented, or is the needed data not there, so that
> adding it would only add overhead for the majority of use cases?
It's hard to see how to do much better than we're doing without storing
more data
On 19/07/2017 11:52 PM, Tom Lane wrote:
Evan Martin writes:
I have an application that imports a lot of data and then does some
queries on it to build some caches in the database, all in one long
transaction. One of those cache updates repeatedly calls a plpgsql
Evan Martin writes:
> I have an application that imports a lot of data and then does some
> queries on it to build some caches in the database, all in one long
> transaction. One of those cache updates repeatedly calls a plpgsql
> function, which internally does
I have an application that imports a lot of data and then does some
queries on it to build some caches in the database, all in one long
transaction. One of those cache updates repeatedly calls a plpgsql
function, which internally does some SQL queries. Sometimes this is
much, much slower than
C GG writes:
> ... Is PostgreSQL smart enough to not have to rewrite the table and simply
> shed the domain for the underlying datatype?
Yes, in recent versions ... don't remember how far back exactly.
regards, tom lane
--
Sent via pgsql-general
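A hedged illustration of the change being discussed (the domain and table names here are hypothetical):

```sql
-- A domain over uuid shares uuid's on-disk representation, so in recent
-- PostgreSQL versions dropping down to the base type is a metadata-only
-- change with no table rewrite.
CREATE DOMAIN my_uuid AS uuid;
CREATE TABLE accounts (id my_uuid PRIMARY KEY);
ALTER TABLE accounts ALTER COLUMN id TYPE uuid;
```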
I'm (still) trying to transition out from under the uniqueidentifier
contrib module to the native uuid datatype. I seem to be able to create a
domain for uniqueidentifier:
CREATE DOMAIN public.uniqueidentifier AS uuid;
and then use pg_restore to reload my data without the contrib module.
I'd
On Wed, Jul 19, 2017 at 12:12 PM, vstuart wrote:
> Hi David: I see what you are saying; sorry for the confusion. This is how
> postgres operates on my system:
>
None of that is surprising or unique. If you ask specific questions I'd
be willing to answer them but I'm
Hi David: I see what you are saying; sorry for the confusion. This is how
postgres operates on my system:
[victoria@victoria ~]$ echo $HOME
/home/victoria
[victoria@victoria ~]$ which postgres
/usr/bin/postgres
[victoria@victoria ~]$ postgres
postgres does not know where to find the
On Wed, Jul 19, 2017 at 11:49 AM, vstuart wrote:
> As a simple solution, I can sudo symlink MY ~/.psqlrc to that directory
> (/var/...; changing ownership also to postgres), but there appears to be
> some underlying issue, as Pg should find ~/.psqlrc, correct?
>
What
Hi Thom: thank you for your reply; I can't figure this one out! Per your reply:
[victoria@victoria ~]$ pg
[sudo -u postgres -i]
[sudo] password for victoria:
[postgres@victoria ~]$ psql -a
psql (9.6.3)
Type "help" for help.
postgres=# \q
[postgres@victoria ~]$ psql -af ~/.psqlrc
Luca Looz writes:
> After some tests it seems that this happens when the same row is covered by
> more than 1 update in the same transaction even without any change.
> Is this expected behavior? Why does it happen?
Yes, see comment in RI_FKey_fk_upd_check_required:
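A hedged reproduction of the behavior being described (table names are hypothetical):

```sql
-- Sketch: once a referencing row has been touched by an earlier command in
-- the same transaction, a later no-op update of that row may still queue
-- the foreign-key check trigger, per RI_FKey_fk_upd_check_required.
CREATE TABLE parent (id int PRIMARY KEY);
CREATE TABLE child  (pid int REFERENCES parent(id));
INSERT INTO parent VALUES (1);
INSERT INTO child  VALUES (1);

BEGIN;
UPDATE child SET pid = 1;  -- no value change
UPDATE child SET pid = 1;  -- same row, second command in the transaction
COMMIT;                    -- EXPLAIN ANALYZE on the updates can show a
                           -- "Trigger for constraint" entry for the FK
```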
On 18 July 2017 at 19:02, vstuart wrote:
> My ~/.psqlrc file is ignored by my PostgreSQL installation (v.9.6.3; Arch
> Linux x86_64 platform).
>
> Suggestions?
Do you get anything with "psql -a"?
If not, what do you get when you use "psql -af ~/.psqlrc" ?
Thom
--
Gavin Wahl writes:
> Hello,
>
> I've read the documentation for two-phase commit and it sounds interesting,
> but
> I'm having trouble thinking of how to actually put it into production. How are
> people using it? Do you use an XA transaction manager or something custom
>
Hello.
I am trying to find workaround for cross-column statistics. For example, I
have tags with similarity:
select version();
version
Matthew Byrne writes:
> Are there any plans to support \u0000 in JSONB and, relatedly, UTF code
> point 0 in TEXT?
No. It's basically never going to happen because of the widespread use
of C strings (nul-terminated strings) inside the backend. Making \0 a
legal member of
My ~/.psqlrc file is ignored by my PostgreSQL installation (v.9.6.3; Arch
Linux x86_64 platform).
Suggestions?
--
View this message in context:
http://www.postgresql-archive.org/psqlrc-file-is-ignored-tp5971773.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.
--
Yes, I have estimated pg_xlog, but the server sometimes writes a lot of WAL files.
Is there a robust alternative for replicating a partial database in
PostgreSQL over a WAN?
Or, what's the best way to realign a pglogical replica? pg_dump/pg_restore?
I have to find the best way to replicate only some tables.
I want to create a generic own_postgres.conf and include it in
postgresql.conf.
I asked the question a bit wrong: is there a variable to include the Postgres
version in log_filename?
Best regards
basti
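A sketch of the include approach (the file name own_postgres.conf is the poster's own):

```
# in postgresql.conf
include = 'own_postgres.conf'
```

As far as I know, log_filename only understands strftime %-escapes, so there is no built-in escape for the server version; it would have to be written into the included file by hand.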
On 18.07.2017 at 10:40, Achilleas Mantzios wrote:
> On 18/07/2017 11:36, Walter Nordmann wrote:
>> Maybe I'm
I was analyzing an update function and I noticed "Trigger for
constraint" entries for foreign keys that I wasn't using or referencing.
After some tests it seems that this happens when the same row is covered by
more than 1 update in the same transaction even without any change.
Here is a
Are there any plans to support \u0000 in JSONB and, relatedly, UTF code
point 0 in TEXT? To the best of my knowledge \u0000 is valid in JSON and
code point 0 is valid in UTF-8 but Postgres rejects both, which severely
limits its usefulness in many cases.
I am currently working around the issue
On Wed, Jul 19, 2017 at 7:10 PM, Gavin Wahl wrote:
> I've read the documentation for two-phase commit and it sounds interesting,
> but
> I'm having trouble thinking of how to actually put it into production. How are
> people using it? Do you use an XA transaction manager or
Hello,
I've read the documentation for two-phase commit and it sounds interesting, but
I'm having trouble thinking of how to actually put it into production. How are
people using it? Do you use an XA transaction manager or something custom built?
How does the reconciliation work when a component
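For reference, the SQL side of two-phase commit is small (the identifiers here are hypothetical); the hard part, as the question implies, is the coordinator that records prepared-transaction IDs and resolves them after a failure:

```sql
-- Requires max_prepared_transactions > 0 in postgresql.conf.
BEGIN;
INSERT INTO transfers VALUES (42, 'debit', 100.00);
PREPARE TRANSACTION 'xfer-42';   -- phase 1: durable, but not yet committed

-- Later, possibly from another session or after a restart (phase 2):
COMMIT PREPARED 'xfer-42';
-- or: ROLLBACK PREPARED 'xfer-42';
-- In-doubt transactions are listed in the pg_prepared_xacts view.
```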
Stephen Frost writes:
> * Tom Lane (t...@sss.pgh.pa.us) wrote:
>> Yeah, that's because eval_const_expressions doesn't know how to fold
>> a constant RowExpr to a simple Const. I have a patch laying about
>> someplace to improve that, but I keep forgetting about it until
>>
Tom,
* Tom Lane (t...@sss.pgh.pa.us) wrote:
> Dmitry Lazurkin writes:
> > I am trying to find workaround for cross-column statistics.
> > ...
> > Wrong estimate. The planner doesn't use statistics. In the code I see usage of
> > function scalargtsel which returns default selectivity
Dmitry Lazurkin writes:
> I am trying to find workaround for cross-column statistics.
> ...
> Wrong estimate. The planner doesn't use statistics. In the code I see usage of
> function scalargtsel which returns default selectivity because
> ROW('tag1', 0.9)::tag_sim is not Const.
Yeah,
Hello.
I am trying to find workaround for cross-column statistics. For example,
I have tags with similarity:
select version();
version
On 2017-07-19 13:37, Glen Huang wrote:
Hi,
I'd like to enforce that in a transaction, after a couple inserts &
updates, a particular column has continuous values like 1, 2, 3, and
never any gaps. Is it possible to do?
I gave a concrete example here:
https://stackoverflow.com/questions/45187113
Hi,
I'd like to enforce that in a transaction, after a couple inserts & updates, a
particular column has continuous values like 1, 2, 3, and never any gaps. Is it
possible to do?
I gave a concrete example here: https://stackoverflow.com/questions/45187113
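One way to approach the question above is a deferred constraint trigger that validates the column at commit time; a minimal sketch, assuming a hypothetical table items with a unique integer column pos:

```sql
-- Check at commit that pos holds exactly 1..count(*) with no gaps.
-- Assumes pos values are unique, so min/max/count suffice.
CREATE FUNCTION items_check_no_gaps() RETURNS trigger AS $$
BEGIN
  IF EXISTS (SELECT 1 FROM items
             HAVING count(*) > 0
                AND (min(pos) <> 1 OR max(pos) <> count(*))) THEN
    RAISE EXCEPTION 'items.pos contains gaps';
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;

CREATE CONSTRAINT TRIGGER items_no_gaps
  AFTER INSERT OR UPDATE OR DELETE ON items
  DEFERRABLE INITIALLY DEFERRED
  FOR EACH ROW EXECUTE PROCEDURE items_check_no_gaps();
```

Being deferred, the check only runs once the transaction's inserts and updates are all in place, which matches the "after a couple inserts & updates" requirement.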