In fact it was a single delete statement.
From: Vladimir Nicolici
Sent: Tuesday, October 10, 2017 17:30
To: Achilleas Mantzios; pgsql-general@postgresql.org
Subject: RE: [GENERAL] Strange checkpoint behavior - checkpoints take a long time
No, it didn’t. The delete was done in a single transaction.
From: Achilleas Mantzios
Sent: Tuesday, October 10, 2017 17:18
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Strange checkpoint behavior - checkpoints take a long time
Hello Vladimir,
maybe your update triggered auto_vacuum on those tables? The default
autovacuum_freeze_max_age is set at exactly 200,000,000. Did you check your
vacuum stats afterwards (pg_stat_*_tables)?
Can you show the code which performed the deletes?
On 10/10/2017 16:56, Vladimir Nicolici
I experimented some more with the settings this weekend, while doing some large
write operations (deleting 200 million records from a table), and I realized
that the database is capable of generating much more WAL than I estimated.
And it seems that spikes in write activity, when longer than a
Further updates:
Yesterday checkpoints were finishing more or less on time with the
configuration targeting 25 minutes out of the 30-minute interval, taking 26
minutes at most.
So for today I reduced the time reserved for checkpoint writes to 20 minutes
out of 30 minutes, by setting checkpoint_completion_target
Hi,
On 2017-10-06 05:53:39 +0300, Vladimir Nicolici wrote:
> Hello, it’s postgres 9.6.
Consider setting checkpoint_flush_after to 16MB or something large like that.
> I will probably try the compression on Monday or Tuesday, I can only
> experiment with a single set of changes in a day, and I
Sent: Friday, October 6, 2017 04:51
To: Vladimir Nicolici
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Strange checkpoint behavior - checkpoints take a long time
Hi,
On 2017-10-05 22:58:31 +0300, Vladimir Nicolici wrote:
> I changed some configuration parameters during the night to the values I was
> considering yesterday:
>
> - shared_buffers = 144GB #previously 96 GB
> - bgwriter_lru_maxpages = 100 #previously 400
> - checkpoint_timeout = 30min
combination, I will probably set it to something like 0.90
target, so that it distributes the writes over 27 minutes.
Thanks,
Vlad
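For reference, the arithmetic discussed above (completion target × timeout ≈ write window, so 0.9 × 30 min ≈ 27 min) can be applied with ALTER SYSTEM; this is a sketch using the values from the thread, not a general recommendation:

```sql
-- Values from the thread; 0.9 × 30min spreads checkpoint writes over ~27min.
ALTER SYSTEM SET checkpoint_timeout = '30min';
ALTER SYSTEM SET checkpoint_completion_target = 0.9;
SELECT pg_reload_conf();  -- both settings take effect on reload, no restart
```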
From: Igor Polishchuk
Sent: Friday, October 6, 2017 02:56
To: Vladimir Nicolici
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Strange checkpoint behavior - checkpoints take a long time
Vladimir,
Just curious: if your goal is to reduce checkpoint overhead, shouldn’t you
decrease shared_buffers instead of increasing it?
With bigger shared_buffers, you can accumulate more dirty buffers for the
checkpoint to take care of.
I remember in early versions (around 8.4), when
Some further updates about the issue.
I did a bit of benchmarking on the disk system with iozone, and during the
test the SSDs seemed able to easily sustain 200 MB/second of writes each;
they fluctuated between 200 MB/s and 400 MB/s when doing 96 GB of random
writes in a file. That
I have a large database, 1.3 TB, with quite a bit of write activity. The
machine has 2 CPUs x 6 cores x 2 threads (2 x E5-2630 v2 @ 2.60GHz) and 4 x
EVO Pro 2TB SSDs in a RAID 1+0 software RAID configuration, on a SATA 3
controller. The machine has a lot of memory, 384 GB, so it doesn’t do a lot
On Fri, Sep 01, 2017 at 11:08:32PM +0100, Paul Linehan wrote:
> which is fine (note that the field "mary" is sorted correctly) but
> I want "proper" JSON - i.e. with open and close square brackets
> i.e. ([ - ]) before and after the fields!
I don't know what that means. Do you mean that you want
Paul Linehan writes:
> I have a table (fred) that I want to transform into JSON and
> I use the following command (ignore the backslash stuff):
> ...
> which is fine (note that the field "mary" is sorted correctly) but
> I want "proper" JSON - i.e. with open and close square
I have a table (fred) that I want to transform into JSON and
I use the following command (ignore the backslash stuff):
SELECT REGEXP_REPLACE(ROW_TO_JSON(t)::TEXT, '', '\\', 'g')
FROM
(
SELECT * FROM fred
ORDER BY mary, jimmy, paulie
) AS t;
which gives
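The replies (truncated above) point toward aggregation rather than regexp tricks. A minimal sketch, assuming the same table fred with columns mary, jimmy, paulie: json_agg wraps the per-row objects in [ ... ], yielding a valid JSON array:

```sql
-- json_agg produces one JSON array value covering all rows;
-- ORDER BY inside the aggregate keeps the row order deterministic.
SELECT json_agg(row_to_json(t) ORDER BY mary, jimmy, paulie)
FROM fred AS t;
```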
On Tue, Jul 4, 2017 at 10:18 PM, Chris Travers wrote:
> First, I haven't seen major problems of database bloat in a long time which
> is why I find this case strange. I wanted to ask here what may be causing
> it.
>
> Problem:
> ==
> Database is in the 100GB to
On Tue, Jul 4, 2017 at 10:18 PM, Chris Travers
wrote:
>
> Questions
> ===
> I assume that it is the fact that rows update frequently which is the
> problem here? But why doesn't Postgres re-use any of the empty disk pages?
>
Can you install the contrib extension
On Wed, Jul 5, 2017 at 3:51 PM, Bill Moran wrote:
> On Wed, 5 Jul 2017 13:28:29 +0200
> Chris Travers wrote:
>
> > On Wed, Jul 5, 2017 at 1:00 PM, PT wrote:
> >
> > > 2x the working size for a frequently updated table
On Wed, 5 Jul 2017 13:28:29 +0200
Chris Travers wrote:
> On Wed, Jul 5, 2017 at 1:00 PM, PT wrote:
>
> > 2x the working size for a frequently updated table isn't terrible bloat.
> > Or are
> > you saying it grows 2x every 24 hours and keeps
On Wed, Jul 5, 2017 at 7:28 AM, Chris Travers
wrote:
>
>
> On Wed, Jul 5, 2017 at 1:00 PM, PT wrote:
>
>>
>> 2x the working size for a frequently updated table isn't terrible bloat.
>> Or are
>> you saying it grows 2x every 24 hours and keeps
On Wed, Jul 5, 2017 at 1:00 PM, PT wrote:
>
> 2x the working size for a frequently updated table isn't terrible bloat.
> Or are
> you saying it grows 2x every 24 hours and keeps growing? The real question
> is
> how often the table is being vacuumed. How long have you
On Wed, 5 Jul 2017 07:18:03 +0200
Chris Travers wrote:
> Hi;
>
> First, I haven't seen major problems of database bloat in a long time which
> is why I find this case strange. I wanted to ask here what may be causing
> it.
>
> Problem:
> ==
> Database is in
On Wed, Jul 5, 2017 at 7:18 AM, Chris Travers
wrote:
> Hi;
>
> First, I haven't seen major problems of database bloat in a long time
> which is why I find this case strange. I wanted to ask here what may be
> causing it.
>
> Problem:
> ==
> Database is in the
Hi;
First, I haven't seen major problems of database bloat in a long time which
is why I find this case strange. I wanted to ask here what may be causing
it.
Problem:
==
Database is in the 100GB to 200GB size range, running on btrfs (not my
choice) with nodatacow enabled (which I set up
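The "contrib extension" suggested in one reply is presumably pgstattuple; a sketch of how it could be used to measure the bloat (the table name is a placeholder):

```sql
-- pgstattuple scans the table and reports dead-tuple and free-space
-- percentages, which distinguishes reusable space from real growth.
CREATE EXTENSION IF NOT EXISTS pgstattuple;
SELECT * FROM pgstattuple('my_bloated_table');
```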
Got it
Thank you
> On Jun 12, 2017, at 4:16 PM, Adrian Klaver wrote:
>
> On 06/12/2017 02:07 PM, armand pirvu wrote:
>> Hi
>> I was doing a test upgrade from 9.5 to 9.6 and the following lines caught my
>> eye
>> postgres 10967 10911 0 15:59 pts/0 00:00:00
>>
On 06/12/2017 02:07 PM, armand pirvu wrote:
Hi
I was doing a test upgrade from 9.5 to 9.6 and the following lines caught my eye
postgres 10967 10911 0 15:59 pts/0 00:00:00 /usr/pgsql-9.6/bin/pg_upgrade
-d /var/lib/pgsql/9.5/data -D /var/lib/pgsql/9.6/data -b /usr/pgsql-9.5/bin -B
Hi
I was doing a test upgrade from 9.5 to 9.6 and the following lines caught my eye
postgres 10967 10911 0 15:59 pts/0 00:00:00 /usr/pgsql-9.6/bin/pg_upgrade
-d /var/lib/pgsql/9.5/data -D /var/lib/pgsql/9.6/data -b /usr/pgsql-9.5/bin -B
/usr/pgsql-9.6/bin -k -v
postgres 11141 1 0
On 04/23/2017 05:48 PM, Periko Support wrote:
Yes, with our datamax there is no issue.
Once you set up the label in LabelView you choose which printer to use.
That indicates to me it is a printer driver/setup issue and not anything
the database is causing. You might want to verify the e.g. media
settings
Yes, with our datamax there is no issue.
Once you set up the label in LabelView you choose which printer to use.
On Sun, Apr 23, 2017 at 1:27 PM, Adrian Klaver
wrote:
> On 04/23/2017 01:12 PM, Periko Support wrote:
>>
>> We use a label program called LabelView to create our labels,
On 04/23/2017 01:12 PM, Periko Support wrote:
We use a label program called LabelView to create our labels, we read
the record using ODBC.
The label has 8 fields which UPC is what it changes on each one.
You use the same program and setup to print to the DataMax printer and
you do not have
We use a label program called LabelView to create our labels, we read
the record using ODBC.
The label has 8 fields which UPC is what it changes on each one.
Thanks.
On Sat, Apr 22, 2017 at 11:28 AM, Adrian Klaver
wrote:
> On 04/21/2017 05:14 PM, Periko Support wrote:
On 04/21/2017 05:14 PM, Periko Support wrote:
Hi guys.
I would to explain my current strange issue with our label printer
Zebra ZT230 USB.
Windows 8.1 x64, psqlodbc 9.5x86/x64 drivers tested.
The issue is that every time we print labels that read data from the
DB, the print start printing
with pause on
Hi guys.
I would to explain my current strange issue with our label printer
Zebra ZT230 USB.
Windows 8.1 x64, psqlodbc 9.5x86/x64 drivers tested.
The issue is that every time we print labels that read data from the
DB, the print start printing
with pause on every label.
Is like, select data,
On 02/22/2017 09:16 AM, Jerry LeVan wrote:
There is a setting in the Mac System Preferences that will allow you to turn
off smart quotes.
Aah, Apple trying to be 'helpful'. Assumes everyone wants to output
print copy everywhere.
I did this and the program is acting properly once more
> On Feb 22, 2017, at 10:08 AM, Jerry LeVan wrote:
>
>>
>> On Feb 22, 2017, at 9:49 AM, Adrian Klaver wrote:
>>
>> On 02/22/2017 07:32 AM, Jerry LeVan wrote:
>>>
On Feb 22, 2017, at 9:20 AM, Adrian Klaver
> On Feb 22, 2017, at 9:49 AM, Adrian Klaver wrote:
>
> On 02/22/2017 07:32 AM, Jerry LeVan wrote:
>>
>>> On Feb 22, 2017, at 9:20 AM, Adrian Klaver
>>> wrote:
>>>
>>> On 02/22/2017 07:09 AM, Jerry LeVan wrote:
Sorry I accidentally
On 02/22/2017 07:32 AM, Jerry LeVan wrote:
On Feb 22, 2017, at 9:20 AM, Adrian Klaver wrote:
On 02/22/2017 07:09 AM, Jerry LeVan wrote:
Sorry I accidentally posted this to pgsql-general-owners earlier today…
How can this happen ( from the postgresql.log
> On Feb 22, 2017, at 9:20 AM, Adrian Klaver wrote:
>
> On 02/22/2017 07:09 AM, Jerry LeVan wrote:
>> Sorry I accidentally posted this to pgsql-general-owners earlier today…
>>
>> How can this happen ( from the postgresql.log file)
>>
>> LOG: statement:
On 02/22/2017 07:09 AM, Jerry LeVan wrote:
Sorry I accidentally posted this to pgsql-general-owners earlier today…
How can this happen ( from the postgresql.log file)
LOG: statement: select * from knives where manufacturer=‘Boker’
ERROR: column "‘boker’" does not exist at character
Sorry I accidentally posted this to pgsql-general-owners earlier today…
How can this happen ( from the postgresql.log file)
LOG: statement: select * from knives where manufacturer=‘Boker’
ERROR: column "‘boker’" does not exist at character 41
Some background:
I am trying to move from
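The error in the log above is the macOS smart-quotes problem identified later in the thread: ‘Boker’ uses curly quotes, which are not string delimiters, so the parser treats ‘boker’ as a column reference. With straight ASCII quotes the statement parses as intended (table and column names from the log):

```sql
-- Straight single quotes make 'Boker' a string literal, not an identifier.
SELECT * FROM knives WHERE manufacturer = 'Boker';
```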
On Fri, 2016-12-02 at 07:04 -0700, pinker wrote:
>
> Hi!
> I have pgBadger report with strange data about prepared queries I
> cannot
> interpret by myself. If anybody could help me with interpretation,
> would be
> greatly appreciated.
> In first half of the day pgBadger shows there is no
Hi!
I have pgBadger report with strange data about prepared queries I cannot
interpret by myself. If anybody could help me with interpretation, would be
greatly appreciated.
In first half of the day pgBadger shows there is no prepare statements and
all queries are binded - how is that possible if
On Thu, Oct 20, 2016 at 6:27 AM, Bjørn T Johansen wrote:
> On Thu, 20 Oct 2016 14:04:51 +0200
> vinny wrote:
>
>> On 2016-10-20 13:51, Bjørn T Johansen wrote:
>> > I have the following SQL:
>> >
>> > SELECT * from table WHERE date BETWEEN
Bjørn:
On Thu, Oct 20, 2016 at 4:13 PM, Bjørn T Johansen wrote:
> Yes, the field name is actually dato but changed it to the English variant..
I routinely have databases with english column names for nearly
everything except 'fecha', spanish for date, and 'tipo', spanish for
On Thu, 20 Oct 2016 15:16:20 +0200
Francisco Olarte wrote:
> On Thu, Oct 20, 2016 at 1:51 PM, Bjørn T Johansen wrote:
> > I have the following SQL:
> > SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
> > 00:00:00','DD.MM.
On Thu, Oct 20, 2016 at 1:51 PM, Bjørn T Johansen wrote:
> I have the following SQL:
> SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
> 00:00:00','DD.MM. HH24:MI:SS') AND to_timestamp('20.10.2016
> 23:59:59','DD.MM.
> HH24:MI:SS')
> date is of type
Bottom quoting makes it difficult to reply properly, so I reformatted a bit:
On Thu, Oct 20, 2016 at 2:02 PM, William Ivanski
wrote:
> Em qui, 20 de out de 2016 às 09:52, Bjørn T Johansen
> escreveu:
>> date is of type timestamp.
> select * from table
On 2016-10-20 14:27, Bjørn T Johansen wrote:
On Thu, 20 Oct 2016 14:04:51 +0200
vinny wrote:
On 2016-10-20 13:51, Bjørn T Johansen wrote:
> I have the following SQL:
>
> SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
> 00:00:00','DD.MM. HH24:MI:SS') AND
On Thu, 2016-10-20 at 13:51 +0200, Bjørn T Johansen wrote:
> I have the following SQL:
>
> SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
> 00:00:00','DD.MM. HH24:MI:SS') AND to_timestamp('20.10.2016
> 23:59:59','DD.MM.
> HH24:MI:SS')
>
> date is of type timestamp.
>
>
On Thu, 20 Oct 2016 14:04:51 +0200
vinny wrote:
> On 2016-10-20 13:51, Bjørn T Johansen wrote:
> > I have the following SQL:
> >
> > SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
> > 00:00:00','DD.MM. HH24:MI:SS') AND to_timestamp('20.10.2016
> >
Yes, that's what I am doing now but I was just wondering why the other way did
not work...
BTJ
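The cast suggested in the thread works for a date column; for a timestamp column a half-open range is a common alternative, since it needs no format string (the one in the question is missing YYYY) and also catches times after 23:59:59, such as fractional seconds. A sketch using the names from the thread:

```sql
-- Half-open range: includes all of 2016-10-20, excludes midnight of the 21st.
SELECT * FROM some_table
WHERE date >= DATE '2016-10-20'
  AND date <  DATE '2016-10-21';
```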
On Thu, 20 Oct 2016 12:02:14 +
William Ivanski wrote:
> You could try:
>
> select * from table where date = '2016/10/20'::date
>
> Em qui, 20 de out de 2016 às
On 2016-10-20 13:51, Bjørn T Johansen wrote:
I have the following SQL:
SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
00:00:00','DD.MM. HH24:MI:SS') AND to_timestamp('20.10.2016
23:59:59','DD.MM.
HH24:MI:SS')
date is of type timestamp.
I was expecting to get all the
You could try:
select * from table where date = '2016/10/20'::date
Em qui, 20 de out de 2016 às 09:52, Bjørn T Johansen
escreveu:
> I have the following SQL:
>
> SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
> 00:00:00','DD.MM. HH24:MI:SS') AND
I have the following SQL:
SELECT * from table WHERE date BETWEEN to_timestamp('20.10.2016
00:00:00','DD.MM. HH24:MI:SS') AND to_timestamp('20.10.2016
23:59:59','DD.MM.
HH24:MI:SS')
date is of type timestamp.
I was expecting to get all the records that had datepart = 20.10.2016 but I
> Unnest is evaluated before LIMIT
> You need to use derived table
>
> postgres=# select unnest(a) from (select a from x limit 1) si
>
Thank you!
Regards
Johann
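Putting the quoted advice together with the original query: with unnest() in the select list, a LIMIT applies after the set-returning function expands the rows, cutting the array elements down; limiting in a derived table first keeps all elements of that one row's array. A sketch using the names from the thread (wos, tsv):

```sql
-- Limit the rows first, then unnest, so all 193 elements come back.
SELECT unnest(regexp_split_to_array(tsv::text, E'\\s+'))
FROM (SELECT tsv FROM wos LIMIT 1) AS s;
```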
Hi
2016-07-14 14:55 GMT+02:00 Johann Spies :
> When I unnest (regexp_split_to_array) or regexep_split_to_table
>
> I get one value but the length of the array without the unnest is 193.
>
> Why would that be?
>
> wos=# select array_length(regexp_split_to_array(tsv::text,
When I unnest (regexp_split_to_array) or regexep_split_to_table
I get one value but the length of the array without the unnest is 193.
Why would that be?
wos=# select array_length(regexp_split_to_array(tsv::text, E'\\\s+'),1),
unnest(regexp_split_to_array(tsv::text, E'\\\s+'))
wos-# from
On 2016-02-01 12:35:35 -0600, Yu Nie wrote:
> Recently I am working with a large amount of taxis GIS data and had
> encountered
> some weird performance issues. I am hoping someone in this community can help
> me figure it out.
>
> The taxi data were loaded in 5 minute block into a table. I
>>>> shared cache was used for the second query. This does not work however for
>>>> the small table.
>>>>
>>>> Many thanks for your willingness to help!
>>>>
>>>> Best, Marco
>>>>
>>>> On Mon, Feb 1, 20
>>>> wrote:
>>>>
>>>>> Fine. Please rerun both explains and my queries and send ALL output to
>>>>> a file (F8 in PGADMIN Sql). Then attach the file.
>>>>>
>>>>>
>>>>>
>>>>> Sent via t
See attached for the requested results. I ran two queries (each
>>>>> with a different taxiid that is next to each other) for each table. Note
>>>>> that for the large table one is much faster than the other because the
>>>>> shared cache was used for th
please stop top posting, and quoting 100s and 100s of lines of old
dreck. This list uses inline posting, and it's preferred to edit out
any unimportant junk from the quoted postings.
--
john r pierce, recycling bits in santa cruz
--
Sent via pgsql-general mailing list
Came a little late to the thread, see many comments inline below:
On Mon, 1 Feb 2016 13:16:13 -0600
Yu Nie wrote:
> Thanks a lot for your reply. I ran the query you suggested and here are
> the results
>
> Large table:
gmail.com> Cc: Bill Moran
<wmo...@potentialtech.com>, pgsql-general@postgresql.org Subject: Re: [GENERAL]
strange sql behavior
Thanks, Bill and Melvin!
Just some quick note/answers before I absorb all the information provided by
Bill.
1. I don't expect many users running queries
>
>
> Sent via the Samsung Galaxy S® 6, an AT&T 4G LTE smartphone
> Original message
> From: Yu Nie <niey...@gmail.com>
> Date: 2/1/2016 15:17 (GMT-05:00)
> To: melvin6925 <melvin6...@gmail.com>
> Subject: Re: [GENERAL] strange sql behavior
>
Hi Melvin,
Thanks a lot for your reply. I ran the query you suggested and here are
the results
Large table: "public";"data2011_01";784236885;0;0;"";"";"";"2016-01-19
17:31:08-06";156847423
Small table: "public";"data2013_01w";300786444;0;0;"";"";"2016-02-01
08:57:24-06";"2016-02-01
One thing to look at is the last time both tables were vacuumed/analyzed.
SELECT n.nspname,
s.relname,
c.reltuples::bigint,
n_tup_ins,
n_tup_upd,
n_tup_del,
date_trunc('second', last_vacuum) as last_vacuum,
date_trunc('second', last_autovacuum) as
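The query above is cut off in the archive; a self-contained equivalent of this kind of vacuum-stats check, using only the standard pg_stat_user_tables statistics view, might look like:

```sql
-- Simplified stand-in for the truncated query: when each table was last
-- (auto)vacuumed/analyzed, plus write counters since the stats were reset.
SELECT schemaname,
       relname,
       n_tup_ins,
       n_tup_upd,
       n_tup_del,
       date_trunc('second', last_vacuum)     AS last_vacuum,
       date_trunc('second', last_autovacuum) AS last_autovacuum,
       date_trunc('second', last_analyze)    AS last_analyze
FROM pg_stat_user_tables
ORDER BY schemaname, relname;
```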
wmo...@potentialtech.com> Date: 2/1/2016
14:41 (GMT-05:00) To: Yu Nie <niey...@gmail.com> Cc: Melvin Davidson
<melvin6...@gmail.com>, pgsql-general@postgresql.org Subject: Re: [GENERAL]
strange sql behavior
Came a little late to the thread, see many comments inline below:
On Mon, 1
wmo...@potentialtech.com>
> Date: 2/1/2016 14:41 (GMT-05:00)
> To: Yu Nie <niey...@gmail.com>
> Cc: Melvin Davidson <melvin6...@gmail.com>, pgsql-general@postgresql.org
> Subject: Re: [GENERAL] strange sql behavior
>
>
> Came a little late to the thread,
On 02/01/2016 10:35 AM, Yu Nie wrote:
Hi there,
Recently I am working with a large amount of taxis GIS data and had
encountered some weird performance issues. I am hoping someone in this
community can help me figure it out.
The taxi data were loaded in 5 minute block into a table. I have two
Hi Klaver,
Thanks.
1. I don't see order by time makes a difference - in fact, the "analyze"
seems to indicate the sorting is faster for the small table because it uses
less memory.
2. No, the large table has not been clustered. Both tables were created
exactly the same way, loading 5-minute
>>> And were _your_ queries run on the same day at the same time within a
>>> few seconds of each other?
>>>
>>> Sent via the Samsung Galaxy S® 6, an AT&T 4G LTE smartphone
>>> Original message ----
>>> From: Yu Nie <niey...@gmail.com
25 PM, melvin6925 <melvin6...@gmail.com>
>>>> wrote:
>>>>
>>>>> Fine. Please rerun both explains and my queries and send ALL output to
>>>>> a file (F8 in PGADMIN Sql). Then attach the file.
>>>>>
>>>>>
>>>>
Hi there,
Recently I am working with a large amount of taxis GIS data and had
encountered some weird performance issues. I am hoping someone in this
community can help me figure it out.
The taxi data were loaded in 5 minute block into a table. I have two
separate such tables, one stores a
On 01/22/2016 01:41 AM, david.tu...@linuxbox.cz wrote:
Hi,
we have a question about the behavior of SELECT FOR UPDATE. We want to find
the record with open-bounds tstzrange, close it, and insert a new open one.
We use SELECT FOR UPDATE in a function, but sometimes 2 rows are inserted.
I show this on a simple example with
Hi,
we have a question about the behavior of SELECT FOR UPDATE. We want to find
the record with open-bounds tstzrange, close it, and insert a new open one.
We use SELECT FOR UPDATE in a function, but sometimes 2 rows are inserted.
I show this on a simple example with the integer data type. Here is:
--tested on postgresql
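The race described is typical under READ COMMITTED: two transactions both SELECT ... FOR UPDATE the same open row; the second blocks, and once the first commits and closes the row, the second re-checks the predicate, finds no matching row, and proceeds to insert a second "open" row. One common guard is a partial unique index; the names below are hypothetical, since the original example is truncated:

```sql
-- At most one open-ended row per key: the second concurrent transaction
-- gets a unique-violation error instead of silently inserting a duplicate.
CREATE UNIQUE INDEX one_open_row_per_key
    ON history (key_id)
    WHERE upper_inf(valid_range);
```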
Edson F. Lidorio wrote:
> Hello,
> Please excuse my ignorance of SQL and my English.
> I wonder if these procedures are correct or is it a bug?
It's not a bug, but storing the result in a new table is senseless. Why
are you doing that?
Andreas
--
Really, I'm not out to
.org
Subject: [GENERAL] Strange syntax with select
Hello,
Please excuse my ignorance of SQL and my English.
I wonder if these procedures are correct or is it a bug?
I'm trying to put the result of a select within the v_saldo_ini variable,
except I realized that the postgresql created a table with
Edson,
I've attached a script that shows how to use bash to assign a variable from
a SQL statement.
On Fri, Dec 25, 2015 at 12:48 PM, Adrian Klaver
wrote:
> On 12/25/2015 08:26 AM, Edson F. Lidorio wrote:
>
>>
>>
>>
>
>>>
>>> I'm using version below PostgreSQL.
>>>
>>>
> sum
> -----
>  55
> (1 row)
>
> Bye
>
> Charles
>
>
>
> *From:* pgsql-general-ow...@postgresql.org [
> mailto:pgsql-general-ow...@postgresql.org
> <pgsql-general-ow...@postgresql.org>] *On Behalf Of *Edson F. Lidorio
>
On 12/25/2015 08:37 AM, Melvin Davidson wrote:
FYI, it is always wise (and polite) to advise what version of PostgreSQL
you are using and what O/S you are using.
Actually the OP put that at the bottom of the first post:
"I'm using version below PostgreSQL.
PostgreSQL 9.4.5 on
r 2015 14:23
*To:* pgsql-general@postgresql.org
*Subject:* [GENERAL] Strange syntax with select
Hello,
Please excuse my ignorance of SQL and my English.
I wonder if these procedures are correct or is it a bug?
I'm trying to put the result of a select within the v_saldo_ini
variable, except
On 12/25/2015 08:26 AM, Edson F. Lidorio wrote:
On 25-12-2015 13:09, Charles Clavadetscher wrote:
Hello
I am not in clear what your use case is, but you may have a look at that:
http://www.depesz.com/2013/02/25/variables-in-sql-what-how-when/
The bottom line is that in a psql interactive
Hello,
Please excuse my ignorance of SQL and my English.
I wonder if these procedures are correct or is it a bug?
I'm trying to put the result of a select within the v_saldo_ini
variable, except I realized that the postgresql created a table with
v_saldo_ini variable.
See the steps below:
On 12/25/2015 08:26 AM, Edson F. Lidorio wrote:
I'm using version below PostgreSQL.
PostgreSQL 9.4.5 on x86_64-unknown-linux-gnu, compiled by gcc (Debian
4.9.2-10) 4.9.2, 64-bit
--
Edson
via pgAdmin it does not accept this syntax.
Do you have any other way to do it?
Should have added to previous
On 10/23/2015 08:23 AM, Day, David wrote:
One app is connecting through libpqxx; for the other 2, glassfish and the
ejabber chat agent, I am not certain.
It is only on this one machine where I am having a connection issue
with any of these apps.
I can connect via the identical roles that these
Subject: Re: [GENERAL] strange connection problem.
"Day, David" <d...@redcom.com> writes:
> I have a development site ( FreeBSD 10.1 and Postgres 9.3 ) where I can
> connect to the database via psql or pgadminIII both locally and remotely.
> However, all of the loca
"Day, David" writes:
> I have a development site ( FreeBSD 10.1 and Postgres 9.3 ) where I can
> connect to the database via psql or pgadminIII both locally and remotely.
> However, all of the local apps ( 3 different ones ) that would connect to it
> are failing to connect.
On Fri, Oct 23, 2015 at 10:29 AM, Day, David wrote:
> I have a development site ( FreeBSD 10.1 and Postgres 9.3 ) where I can
> connect to the database via psql or pgadminIII both locally and remotely.
>
> However, all of the local apps ( 3 different ones ) that would connect
On 10/23/2015 07:29 AM, Day, David wrote:
I have a development site ( FreeBSD 10.1 and Postgres 9.3 ) where I can
connect to the database via psql or pgadminIII both locally and remotely.
However, all of the local apps ( 3 different ones ) that would connect
to it are failing to connect.
I
host all all 192.168.32.0/19 trust
I don’t believe this to be a postgres problem, but you don’t know what you
don’t know.
From: David G. Johnston [mailto:david.g.johns...@gmail.com]
Sent: Friday, October 23, 2015 10:36 AM
To: Day, David
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERA
I have a development site ( FreeBSD 10.1 and Postgres 9.3 ) where I can
connect to the database via psql or pgadminIII both locally and remotely.
However, all of the local apps ( 3 different ones ) that would connect to it
are failing to connect.
I see no error events in the
On 2015-10-23 08:29, Day, David wrote:
> I have a development site ( FreeBSD 10.1 and Postgres 9.3 ) where I can
> connect to the database via psql or pgadminIII both locally and remotely.
>
> However, all of the local apps ( 3 different ones ) that would connect to it
> are failing to
-general@postgresql.org
*Subject:* Re: [GENERAL] Strange TRIGGER failure with FOR ... IN ... LOOP
... INSERT
Hello
You declare your variable r as of type
application.store_ldap_profile_defaults%rowtype, but then select only 4 of
the 5 fields of the table to put in there. The last one
Hi
-Original Message-
From: pgsql-general-ow...@postgresql.org [mailto:pgsql-general-
ow...@postgresql.org] On Behalf Of Adrian Klaver
Sent: Donnerstag, 27. August 2015 15:41
To: Christopher BROWN br...@reflexe.fr
Cc: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Strange
On 08/27/2015 04:49 AM, Christopher BROWN wrote:
Hello,
I'm new to this list but have been using PostgreSQL for a moment. I've
encountered an error using PostgreSQL 9.4.4 which can be reproduced
using the SQL below.
The trigger init_store_ldap_profiles_trigger fails if the function
plpgsql VOLATILE;
Bye
Charles
From: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Christopher BROWN
Sent: Donnerstag, 27. August 2015 13:50
To: pgsql-general@postgresql.org
Subject: [GENERAL] Strange TRIGGER failure with FOR ... IN ... LOOP
On 08/27/2015 06:33 AM, Christopher BROWN wrote:
Hello Adrian,
Yep, Charles' explanation helped me understand what was going on.
Before that, I was as confused as you were (in your first reply) about
how access_mode could be NULL (with the same reasoning). In any case,
thanks for your links ;
: pgsql-general-ow...@postgresql.org
[mailto:pgsql-general-ow...@postgresql.org] On Behalf Of Charles Clavadetscher
Sent: Donnerstag, 27. August 2015 14:57
To: pgsql-general@postgresql.org
Subject: Re: [GENERAL] Strange TRIGGER failure with FOR ... IN ... LOOP ...
INSERT
Hello
You declare your