On Tue, Oct 17, 2017 at 7:13 PM, Michael Paquier wrote:
> Note that Peter has also worked on providing Debian packages for the
> utility down to 9.4 if I recall correctly, which is nice, but if you
> want the heap checks you will need to compile things yourself. We
>
PostgreSQL VIEWs have a useful feature where INSTEAD OF triggers can be
defined to divert INSERT/DELETE/UPDATE actions into an underlying table
(or other location), creating the effect of a "writeable view" (and I
believe in more recent PostgreSQL versions this is pretty much automatic).
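A minimal sketch of that trigger mechanism (all table, view, and function names here are invented for illustration; and indeed, since 9.3 a simple view like this one is auto-updatable without any trigger):

```sql
-- Hypothetical base table hidden behind a view.
CREATE TABLE base_items (id int PRIMARY KEY, label text);
CREATE VIEW items_v AS SELECT id, label FROM base_items;

-- Trigger function that diverts INSERTs on the view into the base table.
CREATE FUNCTION items_v_insert() RETURNS trigger AS $$
BEGIN
    INSERT INTO base_items (id, label) VALUES (NEW.id, NEW.label);
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- INSTEAD OF fires in place of the action on the view itself.
CREATE TRIGGER items_v_ins
    INSTEAD OF INSERT ON items_v
    FOR EACH ROW EXECUTE PROCEDURE items_v_insert();
```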
Thanks for your response.
We are currently running postgresql-9.4.14.
I see there are some tools to check that indexes/pages are not corrupted.
But is there a faster way to check whether a PGDATA instance is clean?
Thanks.
On Mon, Oct 16, 2017 at 9:18 PM Michael Paquier
Ok I needed a ::timestamptz at time zone 'UTC' and a >= :)
On Tue, Oct 17, 2017 at 2:29 PM, Glenn Pierce wrote:
> and I have a simple query that fails
>
This is not a failure; this is a query that found zero matching records.
>
> Ie
>
> SELECT sensor_id, MAX(ts), date_trunc('day', ts), COALESCE(MAX(value),
> 'NaN')::float FROM
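Reading this together with the "::timestamptz at time zone 'UTC' and a >=" resolution earlier in the digest, the working form of the query was presumably something like the sketch below (the original message is truncated, so the WHERE clause and the literal bound are guesses):

```sql
-- Sketch only: per-sensor, per-day aggregate with a UTC-normalised bound.
SELECT sensor_id,
       date_trunc('day', ts) AS day,
       COALESCE(MAX(value), 'NaN')::float AS max_value
FROM sensor_values_days
WHERE ts >= '2017-10-01'::timestamptz AT TIME ZONE 'UTC'
GROUP BY sensor_id, date_trunc('day', ts);
```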
Hi so I have a simple table as

\d sensor_values_days;
     Table "public.sensor_values_days"
 Column |           Type           | Modifiers
--------+--------------------------+-----------
 ts     | timestamp with time zone | not null
 value  |
On 10/17/2017 10:39 AM, Dillon Tang wrote:
***Must sit onsite in Cypress, CA or Eden Prairie, MN***
This is the wrong list. Please use pgsql-jobs.
Thank you,
JD
--
Command Prompt, Inc. || http://the.postgres.company/ || @cmdpromptinc
PostgreSQL Centered full stack support, consulting and
***Must sit onsite in Cypress, CA or Eden Prairie, MN***
What is the specific title of the position?
Senior PostgreSQL Database Administrator Consultant
What Project/Projects will the candidate be working on while on assignment?
The candidate will be working on the EDSS to Documentum migration
Ron Johnson writes:
> Where can I look to see (roughly) how much more RAM/CPU/disk is needed
> when moving from 8.4 to 9.2?
It's entirely possible you'll need *less*, as you'll be absorbing the
benefit of several years' worth of performance improvements. But this
is such a
Where can I look to see (roughly) how much more RAM/CPU/disk is needed when
moving from 8.4 to 9.2?
Thanks
--
World Peace Through Nuclear Pacification
--
Sent via pgsql-general mailing list (pgsql-general@postgresql.org)
To make changes to your subscription:
On 17 October 2017 at 11:59, Laurent Laborde wrote:
> What's the point of the Seagate Archive now?
> IronWolf, for the same public price, has better performance (obviously)
> and, more surprisingly, a better MTBF.
>
I have no real insight into whether Seagate are still
Friendly greetings!
I remember an interesting talk from Seagate at PGCon 2015 about SMR disk
technology, and I use them for archive & backup (personal usage).
However, the highest capacity on the Seagate Archive product line (the one
using SMR) is 8TB.
Seagate has an 8TB IronWolf product at
It is not psql but libpq that reads the .pgpass file; as far as I know
there is no way of preventing it unless you build your own customized
version of libpq that does not have that option. But there is a workaround
to prevent libpq from using the default .pgpass file: set the 'PGPASSFILE'
environment variable.
On Tue, Oct 17, 2017 at 09:06:59AM +0300, Allan Kamau wrote:
> Is there a way to instruct psql not to try reading ~/.pgpass file?
https://www.postgresql.org/docs/current/static/libpq-envars.html
PGPASSFILE behaves the same as the passfile connection parameter.
passfile
Specifies the name of the file used to store passwords.
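A sketch of that workaround (host, user, and database names are made up; /dev/null simply serves as an always-empty password file):

```shell
# Point libpq at an empty password file so ~/.pgpass is never consulted.
export PGPASSFILE=/dev/null
# psql -h db.example.com -U appuser mydb   # hypothetical connection attempt
echo "libpq will look for passwords in: $PGPASSFILE"
```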
rverghese wrote on 11.10.2017 at 20:38:
> You mean at the user permissions level? Yes, I could, but would mean doing so
> table by table, which is not our current structure. I guess there is nothing
> at the database level.
Not at the database level, but at the schema level:
You can revoke
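The message is cut off, but a schema-level revoke along these lines is presumably what was meant (schema and role names invented for illustration; note that ALL TABLES IN SCHEMA only covers tables that already exist):

```sql
-- Stop the role creating new objects in the schema...
REVOKE CREATE ON SCHEMA public FROM appuser;
-- ...and remove write access to everything already in it.
REVOKE INSERT, UPDATE, DELETE, TRUNCATE
    ON ALL TABLES IN SCHEMA public FROM appuser;
```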
Hi,
I am executing many "COPY" commands via psql serially from a bash script
from a compute node that accesses a mounted home directory located on a
remote server.
After about a thousand or so executions of the "COPY" command I get the
error "psql: could not get home directory to locate password
On 11 October 2017 at 20:38, rverghese wrote:
> I guess there is nothing at the database level.
Although not safe (as the user can reset this parameter), you could set
default_transaction_read_only for the application user.
postgres=# ALTER USER jdoe IN DATABASE postgres SET
default_transaction_read_only = on;
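A sketch of both halves of that point, using the role and database names from the quoted example (the second statement shows why it is advisory only):

```sql
-- Default every transaction for this user in this database to read-only.
ALTER USER jdoe IN DATABASE postgres
    SET default_transaction_read_only = on;

-- ...but any session can simply switch it back off, so this is not a
-- security boundary:
SET default_transaction_read_only = off;
```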