> On Jun 27, 2017, at 3:30 PM, Peter Geoghegan wrote:
>
> On Tue, Jun 27, 2017 at 12:40 PM, armand pirvu wrote:
>> so how is it working in fact? Isn't it working like looping in the
>> IVEE.dim_company and, for each company_id, if the record does have a
>> correspondent in csischema.dim_company, then update csischema.dim_company set
Hi,
I've got a question:
birstdb=# \d csischema.dim_company
     Table "csischema.dim_company"
    Column    |  Type   | Modifiers
--------------+---------+-----------
 company_id   | integer | not null
 company_name |
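The "looping" the question describes (for each company_id in IVEE.dim_company, overwrite the matching row in csischema.dim_company) can be expressed as one set-based statement. A sketch, with the SET list limited to the single column visible in the truncated \d output above:

```sql
-- Updates only rows of csischema.dim_company that have a correspondent
-- (same company_id) in IVEE.dim_company; unmatched rows are untouched.
UPDATE csischema.dim_company AS c
SET    company_name = i.company_name
FROM   IVEE.dim_company AS i
WHERE  c.company_id = i.company_id;
```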
On Wed, Feb 1, 2017 at 4:15 PM, Johann Spies wrote:
> How do I formulate the ON CONFLICT DO UPDATE section of this query? When
> I try "set title=q.title", q is unknown. When I try to change 'title' in
> the SELECT part to something else and try "title=ti", I get the message
> that ti cannot be used in this part of the query.
> INSERT INTO wos_2017_1.article
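The reason "q is unknown" is that the SET clause of ON CONFLICT DO UPDATE cannot see the source query's alias; PostgreSQL exposes the row that failed to insert under the special EXCLUDED alias. A minimal sketch, assuming (hypothetically) that article has a unique constraint on an id column and rows come from a staging table:

```sql
INSERT INTO wos_2017_1.article (id, title)
SELECT q.id, q.title
FROM   staging AS q               -- hypothetical source table
ON CONFLICT (id)
DO UPDATE SET title = EXCLUDED.title;  -- EXCLUDED = the proposed (conflicting) row
```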
Justin Pryzby writes:
> I'm implementing unique indices to allow "upsert" (and pg_repack and..), but
> running into a problem when the table has >830 columns (we have some tables
> which are at the 1600 column limit, and have previously worked around that
> limit using
> On Dec 4, 2016, at 9:32 AM, Justin Pryzby wrote:
>
> Our application INSERTs data from external sources, and infrequently UPDATEs
> the previously-inserted data (currently, it first SELECTs to determine whether
> to UPDATE).
>
> I'm implementing unique indices to allow
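The SELECT-then-UPDATE-or-INSERT round trip described above collapses into one statement once a unique index exists for ON CONFLICT to arbitrate on. A sketch with hypothetical names standing in for the application's tables:

```sql
-- Hypothetical table for the described workflow; the unique index is the
-- arbiter that ON CONFLICT requires.
CREATE TABLE readings (source_id int, ts timestamptz, payload text);
CREATE UNIQUE INDEX readings_key ON readings (source_id, ts);

-- One statement: inserts new data, updates previously-inserted data.
INSERT INTO readings (source_id, ts, payload)
VALUES (1, now(), 'new value')
ON CONFLICT (source_id, ts)
DO UPDATE SET payload = EXCLUDED.payload;
```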
Hi Rafal:
On Tue, Jul 21, 2015 at 12:43 PM, Rafal Pietrak ra...@ztk-rp.eu wrote:
On 21.07.2015 at 09:34, Francisco Olarte wrote:
In this case I think you are mixing vouchers with voucher-numbers. IMO
you could get a better design by using an auxiliary table and not
nullifying the number
Hi Rafal:
On Mon, Jul 20, 2015 at 3:33 PM, Rafal Pietrak ra...@ztk-rp.eu wrote:
3. there are methods (like cryptographic random sequence), which
guarantee no conflicts. So one should resort to that.
Regarding the last point. Usually, I implement one-time used vouchers as
rows in a table like:
Francisco,
On 21.07.2015 at 09:34, Francisco Olarte wrote:
Hi Rafal:
On Mon, Jul 20, 2015 at 3:33 PM, Rafal Pietrak ra...@ztk-rp.eu wrote:
Regarding the last point. Usually, I implement one-time used vouchers as
rows in a table like:
CREATE TABLE (voucher int not null, consumed bool, expire timestamp not
null default
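The CREATE TABLE in the quote is truncated in the archive and never shows a table name. A hedged reconstruction, with the name and the defaults assumed purely for illustration:

```sql
CREATE TABLE vouchers (                           -- name assumed
    voucher  int not null UNIQUE,                 -- the one-time code handed out
    consumed bool DEFAULT false,
    expire   timestamp not null DEFAULT now() + interval '30 days'  -- assumed
);
```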
On 21 July 2015 at 11:43, Rafal Pietrak ra...@ztk-rp.eu wrote:
On the other hand, the ON CONFLICT RETRY has a nice feature for an
application programmer (like myself): it relieves us of implementing
the re-issue of an INSERT. One database-schema
designer does that for all of
Hi Igor:
On Mon, Jul 20, 2015 at 4:56 PM, Igor Neyman iney...@perceptron.com wrote:
Well, there is a caveat.
If I create a table and a couple of indexes like this:
..
and populate them:
and then check the size of the indexes:
for select pg_relation_size('U1') I get 2834432
while select
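The size comparison is cut off above; the quoted U1/U2 definitions are not shown, so the following is only a hedged illustration of the general effect being discussed: sequential keys pack B-tree leaf pages densely (the rightmost-page split keeps them nearly full), while randomly ordered keys split pages roughly half-full, inflating the index:

```sql
CREATE TABLE t (seq_val int, rnd_val int);
CREATE INDEX u1 ON t (seq_val);   -- will receive monotonically increasing keys
CREATE INDEX u2 ON t (rnd_val);   -- will receive randomly ordered keys

INSERT INTO t
SELECT g, (random() * 1e9)::int
FROM   generate_series(1, 100000) AS g;

-- u2 typically comes out noticeably larger than u1.
SELECT pg_relation_size('u1') AS seq_index,
       pg_relation_size('u2') AS rnd_index;
```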
Hi Alvaro.
On Mon, Jul 20, 2015 at 4:07 PM, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
Some web research suggests that random sequences are not great for indexes
because of the resultant keyspace fragmentation. I'm assuming that means
a low number of nodes in the btree leaves, so an
On 7/20/2015 7:01 AM, Geoff Winkless wrote:
Some web research suggests that random sequences are not great for
indexes because of the resultant keyspace fragmentation. I'm
assuming that means a low number of nodes in the btree leaves, so an
increase in memory usage for the index?
Hi Daniel:
On Sun, Jul 19, 2015 at 9:03 PM, Daniel Verite dan...@manitou-mail.org wrote:
For SERIAL, it's too obvious what the next one will be,
so malicious people could claim access codes or vouchers
they don't own.
Why don't you use encryption? Specifically only on the external side.
For this purpose, I have seen it recommended to use a UUID instead of a
randomly generated integer. I do this myself for production applications
and over millions of records I have yet to log a conflict. Also, as stated
above, you could create a plpgsql function which would achieve exactly what
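The UUID suggestion sidesteps conflict-retry logic entirely, because collisions are vanishingly unlikely. A sketch (gen_random_uuid() is built in from PostgreSQL 13; earlier versions get it from the pgcrypto extension):

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;     -- needed on pre-13 servers

CREATE TABLE voucher_codes (                 -- hypothetical table name
    code   uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    issued date DEFAULT now()
);

INSERT INTO voucher_codes DEFAULT VALUES RETURNING code;
```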
If I'm not mistaken, the conclusions from the posts in this thread are:
1. recognizing a RETRY action as a separate case of ON CONFLICT
transaction continuation is not generally appreciated.
2. I shouldn't expect any hidden corruption/performance obstacles when
simply re-attempting an INSERT
On 20 July 2015 at 14:33, Rafal Pietrak ra...@ztk-rp.eu wrote:
If I'm not mistaken, the conclusions from posts in this thread are:
3. there are methods (like cryptographic random sequence), which
guarantee no conflicts. So one should resort to that.
Some web research suggests that random
Geoff Winkless wrote:
On 20 July 2015 at 14:33, Rafal Pietrak ra...@ztk-rp.eu wrote:
If I'm not mistaken, the conclusions from posts in this thread are:
3. there are methods (like cryptographic random sequence), which
guarantee no conflicts. So one should resort to that.
Some web
On 20 July 2015 at 15:07, Alvaro Herrera alvhe...@2ndquadrant.com wrote:
Not sure what type of indexes would be affected by that problem, but I
don't think Postgres' btrees would be.
I admit it's not really my area.
Take it up with Drew Blas, I guess :)
Hi,
W dniu 19.07.2015 o 09:33, Charles Clavadetscher pisze:
[---]
2. with the current (as of 9.5) implementation I think I can always ON CONFLICT
DO NOTHING, and retry the INSERT from the application level.
An UPSERT tries an INSERT and, if there is a conflict, does nothing or UPDATEs
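The "DO NOTHING and retry" pattern from point 2 can also live server-side in a plpgsql function; a sketch with assumed names (a vouchers table with a unique voucher column):

```sql
CREATE FUNCTION issue_voucher() RETURNS bigint AS $$
DECLARE
    v bigint;
BEGIN
    LOOP
        v := (random() * 1e9)::bigint;            -- candidate code
        INSERT INTO vouchers (voucher) VALUES (v)
        ON CONFLICT (voucher) DO NOTHING;
        IF FOUND THEN                             -- insert took effect: no conflict
            RETURN v;
        END IF;                                   -- otherwise: collision, retry
    END LOOP;
END;
$$ LANGUAGE plpgsql;
```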
On 19 July 2015 at 09:11, Rafal Pietrak ra...@ztk-rp.eu wrote:
I'm not particularly fond of using functions to access an RDBMS instead
of tables.
And I'm not particularly fond of workarounds.
Use a combination of factors (a sequence ID and the key) for your
authorization. So in the
Hello,
I've just started to read through postgres-9.5 what's new ... before
giving it a try. The insert ... on conflict do update is particularly
attractive to me; but I was wondering why it does not cover the third
usage scenario of action that a programmer may need for a PK conflict
during
Hi,
On 19.07.2015 at 10:27, Geoff Winkless wrote:
On 19 July 2015 at 09:11, Rafal Pietrak ra...@ztk-rp.eu wrote:
I'm not particularly fond of using functions to access an RDBMS instead
of tables.
And I'm not particularly fond of workarounds.
Use
On 19 July 2015 at 11:30, Rafal Pietrak ra...@ztk-rp.eu wrote:
when I have to invent/introduce additional
features/columns/attributes (like a key in addition to a sequence,
which are not required by the design, but necessary for implementation),
it is a workaround (almost by definition).
I'm
Hi,
On 19.07.2015 at 14:10, Geoff Winkless wrote:
On 19 July 2015 at 11:30, Rafal Pietrak ra...@ztk-rp.eu wrote:
when I have to invent/introduce additional
features/columns/attributes (like a key in addition to a sequence),
which are not required by
On 07/19/2015 08:04 AM, Rafal Pietrak wrote:
On 19.07.2015 at 16:33, Adrian Klaver wrote:
On 07/19/2015 06:47 AM, Rafal Pietrak wrote:
Hi,
On 19.07.2015 at 14:10, Geoff Winkless wrote:
On 19 July 2015 at 11:30, Rafal Pietrak ra...@ztk-rp.eu
[---]
Although a random can
On 07/19/2015 06:47 AM, Rafal Pietrak wrote:
Hi,
On 19.07.2015 at 14:10, Geoff Winkless wrote:
On 19 July 2015 at 11:30, Rafal Pietrak ra...@ztk-rp.eu wrote:
when I have to invent/introduce additional
features/columns/attributes (like a key in addition to a
Rafal Pietrak wrote:
CREATE TABLE vouchers (a_voucher bigint PRIMARY KEY default
(random()*10)::bigint, issued date default now(), .);
Generators of truly unique pseudo-random values provide a
better ground for this. Consider for example:
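The example itself is cut off in the archive. One well-known generator of this kind is the pseudo_encrypt Feistel permutation from the PostgreSQL wiki, which maps every int to a distinct, hard-to-guess int, so a plain sequence can drive it with no conflict handling at all (sketched below; this may not be the exact function the message showed):

```sql
CREATE FUNCTION pseudo_encrypt(value int) RETURNS int AS $$
DECLARE
    l1 int; l2 int; r1 int; r2 int; i int := 0;
BEGIN
    l1 := (value >> 16) & 65535;
    r1 := value & 65535;
    WHILE i < 3 LOOP
        l2 := r1;
        r2 := l1 # ((((1366 * r1 + 150889) % 714025) / 714025.0) * 32767)::int;
        l1 := l2; r1 := r2; i := i + 1;
    END LOOP;
    RETURN ((r1 << 16) + l1);   -- a bijection on int: no duplicates, ever
END;
$$ LANGUAGE plpgsql STRICT IMMUTABLE;

-- Usage sketch: SELECT pseudo_encrypt(nextval('voucher_seq')::int);
```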
Melvin Davidson wrote:
Aside from Tom Lane's comments, it seems to me you are reinventing the wheel
by generating random values for keys. Why not just use UUID
http://www.postgresql.org/docs/9.5/static/datatype-uuid.html
or serial
On 19.07.2015 at 16:33, Adrian Klaver wrote:
On 07/19/2015 06:47 AM, Rafal Pietrak wrote:
Hi,
On 19.07.2015 at 14:10, Geoff Winkless wrote:
On 19 July 2015 at 11:30, Rafal Pietrak ra...@ztk-rp.eu
[---]
Although a random can duplicate its previous values, my random(s)
Aside from Tom Lane's comments, it seems to me you are reinventing the
wheel by generating random values for keys. Why not just use UUID
http://www.postgresql.org/docs/9.5/static/datatype-uuid.html
or serial
http://www.postgresql.org/docs/9.5/static/datatype-numeric.html#DATATYPE-SERIAL?
Wouldn't