Not to get off topic, can you authenticate database users via Kerberos?
Thanks,
~Ben
On Wed, Mar 7, 2018 at 10:19 AM, Stephen Frost wrote:
> Greetings,
>
> * Bjørn T Johansen (b...@havleik.no) wrote:
> > Is it possible to use one authentication method as default, like LDAP,
> and if the user is
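To the Kerberos question: yes, PostgreSQL supports Kerberos through the GSSAPI authentication method in pg_hba.conf. A hypothetical sketch (realm, base DN, and addresses are placeholders; note that auth methods are selected per pg_hba.conf line, not as a per-user fallback chain):

```
# Kerberos (GSSAPI) for one address range
host  all  all  10.0.0.0/8   gss   include_realm=0 krb_realm=EXAMPLE.COM
# LDAP for another
host  all  all  192.168.0.0/16  ldap  ldapserver=ldap.example.com ldapbasedn="dc=example,dc=com"
```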
The short answer I will provide from my experience is that you can't do it.
Your DBA will have access to just about anything across all tables and
databases.
The longer answer is the one others have pointed out. If a DBA should be
restricted from tables, they probably shouldn't be your DBA. You
Oh, this is an easy one. There are simpler solutions for PITR. Also, PITR
is a very specific use case for a database, if it even uses one. Generally
speaking, you would not want to keep encrypted data within a database.
There simply isn't a need for it. Just use a file or a folder. You can't do
any
I have been thinking about how to apply NLP work to logs. The problem is
that logs are structured data. It would depend on what you are looking for,
what your outcomes are, and how you want to generate it. The other problem
is that your problems are going to be very rare events. Rare event
detectio
The subject says it all. I am using a very old database that I can't
upgrade. Do I have to manage it with pgadmin3 or can I use pgadmin4?
Thanks,
~Ben
it really isn't
supported at all. Either is fine, as I can use pgadmin3 if I have to, but I
would prefer at least one piece of software to have support rather than none.
Thanks,
~Ben
On Mon, Aug 5, 2019, 4:48 PM Adrian Klaver
wrote:
> On 8/5/19 1:15 PM, Benedict Holland wrote:
> > The subject s
We can't upgrade because of business reasons, but that is very high on our
agenda to fix. To me, there is a huge difference between "unsupported" and
"won't work". I don't expect support for this, but are there significant changes
between 8.4 and 9.2 that would prevent pgadmin4 from working?
Thanks,
~Ben
All stored procedures run in their own transaction. My guess is that when
you turn off autocommit, you are not committing something after your
transaction ends. Also, I have never seen a rollback at the start of a
proc. A pure hypothetical is that it is doing nothing or definitely not
what you thin
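For context on how procedures interact with transactions: since PostgreSQL 11, a procedure invoked with CALL can manage its own transactions. A minimal sketch (the table and procedure names are made up for illustration):

```sql
-- Hypothetical sketch (PostgreSQL 11+): a procedure committing inside itself
CREATE PROCEDURE log_event(msg text)
LANGUAGE plpgsql
AS $$
BEGIN
    INSERT INTO event_log (message) VALUES (msg);
    COMMIT;  -- allowed in a CALL context; plain functions cannot do this
END;
$$;

CALL log_event('hello');
```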
I think a check will also work but I second triggers.
Thanks,
~Ben
On Wed, Aug 7, 2019, 2:21 PM Rob Sargent wrote:
>
> On 8/7/19 12:07 PM, stan wrote:
> > Have table that contains employee keys, paired up with work type keys
> > (both foreign keys) and a 3rd column that you enter a billing
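A minimal sketch of both options for a table like the one described (all table and column names here are hypothetical): a CHECK handles single-row validation, while a trigger can consult other tables.

```sql
-- CHECK for a simple per-row rule
CREATE TABLE billing_rate (
    employee_id  integer NOT NULL REFERENCES employee (id),
    work_type_id integer NOT NULL REFERENCES work_type (id),
    rate         numeric NOT NULL CHECK (rate > 0),
    PRIMARY KEY (employee_id, work_type_id)
);

-- Trigger for validation that needs to look at another table
CREATE FUNCTION check_rate() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    IF NEW.rate > (SELECT max_rate FROM work_type WHERE id = NEW.work_type_id) THEN
        RAISE EXCEPTION 'rate exceeds cap for work type %', NEW.work_type_id;
    END IF;
    RETURN NEW;
END;
$$;

CREATE TRIGGER billing_rate_check
    BEFORE INSERT OR UPDATE ON billing_rate
    FOR EACH ROW EXECUTE FUNCTION check_rate();
```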
Does the problem go away if you install pg11? Are the drives you are
getting your logs from encrypted?
Thanks,
~Ben
On Fri, Aug 9, 2019, 3:17 AM Luca Ferrari wrote:
> On Fri, Aug 9, 2019 at 8:46 AM Ramesh Maddi
> wrote:
> > ./postgresql-9.6.6-1-linux-x64.run --installer-language en
> --service
It looks like your encoding is correct; you are getting letters. If your
encoding were wrong, you would end up with a lot of strange characters.
If this is tied to one client, it sounds like an encryption issue:
mounted drives for logging that the client can't decrypt.
Thanks,
~Ben
On F
You want Alembic and an afternoon of python writing. You just described an
ORM.
On Sun, Dec 13, 2020, 12:53 PM Tomas Vondra
wrote:
> On 12/13/20 6:34 PM, Adrian Klaver wrote:
> > On 12/12/20 6:58 PM, Tim Uckun wrote:
> >> I want to dump my postgres schema to disk in neat directories like
> >> pg
I mean, you want to dump a schema into a directory. Alembic, SQLAlchemy, and
pathlib will do that in a few minutes. If you want to sync changes, then
write Alembic change scripts and reflect those changes in the ORM; Alembic
does that. Modifying the ORM and reflecting those changes is a terrible
idea
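The pathlib half of that workflow is trivial. A small sketch, assuming you already have the DDL text per table in hand (in practice it would come from SQLAlchemy/Alembic metadata; the function name and schema dict are made up):

```python
from pathlib import Path

def dump_schema(schema: dict[str, str], root: str) -> list[Path]:
    """Write one DDL file per table under root/<table>.sql.

    `schema` maps table names to their CREATE TABLE text. Assumption:
    you already extracted the DDL strings (e.g. from ORM metadata).
    """
    base = Path(root)
    base.mkdir(parents=True, exist_ok=True)
    written = []
    for table, ddl in sorted(schema.items()):
        path = base / f"{table}.sql"
        path.write_text(ddl + "\n")  # one neat file per table
        written.append(path)
    return written
```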
You want to do NLP in Postgres? I would say that you would need a tool like
OpenNLP to get your tokens and phrases, then run a fuzzy matching algorithm.
Unless Postgres has NLP capabilities, but I am not sure I would use them.
You actually want something fairly complex.
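As an illustration of just the fuzzy-matching step (the tokenization would come from an NLP tool; here plain strings stand in, and the threshold is an arbitrary assumption):

```python
from difflib import SequenceMatcher

def fuzzy_match(phrase: str, candidates: list[str], threshold: float = 0.6) -> list[str]:
    """Return candidates similar to `phrase`, best match first.

    Stand-in for the fuzzy-matching step described above; real phrase
    extraction would happen upstream in an NLP pipeline.
    """
    scored = [(SequenceMatcher(None, phrase.lower(), c.lower()).ratio(), c)
              for c in candidates]
    return [c for score, c in sorted(scored, reverse=True) if score >= threshold]
```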
Thanks,
Ben
On Sun, Jan 17,
Pg_dump does what you want but you could also set up replication.
Thanks,
Ben
On Thu, Jan 21, 2021, 8:42 AM Ray O'Donnell wrote:
> On 21/01/2021 13:13, sivapostg...@yahoo.com wrote:
> > Hello,
> >
> > I'm from SQL Server now developing my current project using PG.
> >
> > In our earlier project
Pg_dump and pg_restore are what you want. They create sql files that you
pass around.
But seriously, if you have two servers running, replicate one, or have your
application point to a server address that you can direct via DNS to
whatever active one you want. Postgres makes replication simple. The
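The pg_dump/pg_restore route looks like this (database names are placeholders; the target of pg_restore must already exist):

```
pg_dump -Fc mydb -f mydb.dump        # custom-format archive
pg_restore -d mydb_copy mydb.dump    # restore into an existing database
pg_dump mydb -f mydb.sql             # or plain SQL, restored with: psql -f mydb.sql
```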
I mean... the best? You just laid out a bunch of steps to define a process.
I will tell you that this sounds like a bad process and that there are
easier ways to make changes, like inside of a transaction, or backing up a
database before making changes, or creating a table replication job, making
chan
ocess that the OP is comfortable with and a lot
of people agreed to, it will work. Sometimes it is easier to simply
replicate the existing bad process that a team agrees to rather than making
a better process.
Thanks,
Ben
On Fri, Jan 22, 2021, 9:21 AM Rory Campbell-Lange
wrote:
> On 22/
ectory on your dev server,
> custom postgresql.conf (that among other things uses a different port
> number) and pg_hba.conf files,
>
> TBH, tarring data/ isn't really necessary.
>
> Happiness Always
> BKR Sivaprakash
>
> On Friday, 22 January, 2021, 09:28:13 pm IST, Rory C
SQL Server is a $10k-to-$1-million (or more) application that
is specifically optimized for Windows and has limited to no support
anywhere else. Postgres is free and, from my experience, comes within 5% of
any other DBMS. Inserting 1 row at a time with autocommit on will be a bit
slow b
see the timings, tomorrow.
>
>
> On Wednesday, 17 February, 2021, 07:29:29 pm IST, Benedict Holland <
> benedict.m.holl...@gmail.com> wrote:
>
>
> Sql server is a 10k dollar to 1 million dollar application (or more) that
> is specifically optimized for windows and had l
For work planning, create a temporary table, copy from a generated data
set, and test it out. It really depends. Normally, you don't iterate in SQL
since rows are assumed to be independent. You think about SQL in terms of
sets and let application code handle the data transfers. It also really
depen
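A minimal sketch of that test-it-out workflow, with made-up table and column names, using generate_series for the generated data set:

```sql
-- Generate disposable test data in a temp table
CREATE TEMPORARY TABLE work_plan AS
SELECT g AS task_id,
       (g % 5) + 1 AS employee_id,
       date '2021-01-01' + (g % 30) AS planned_date
FROM generate_series(1, 1000) AS g;

-- Think in sets: one statement over all qualifying rows, no loop
UPDATE work_plan
SET planned_date = planned_date + 7
WHERE employee_id = 3;
```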
It's a very weird concern for me. I have never liked that justification, as
we convert 1:1 SAS to Python. If you use Django, converting it to Flask is
really hard. If you use PostgreSQL, converting it to Oracle is really hard.
I love stored procedures and triggers. Many of my colleagues don't
under
You have to create the database to restore to. Then pg_restore should work.
Thanks,
Ben
On Tue, Jul 19, 2022, 12:44 PM Ronald Haynes wrote:
> Thanks Ray, running
>
> pSql -f backup-file.sql
>
>
>
> psql: error: FATAL: database "rhaynes74" does not exist
>
> Which seems odd since rhaynes74 is a
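Concretely, the sequence would be something like this (using the database and file names from the thread):

```
createdb rhaynes74
psql -d rhaynes74 -f backup-file.sql
# or, if the dump is in custom format:
# pg_restore -d rhaynes74 backup-file.dump
```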
This seems like a bad use of a stored procedure. Why wouldn't you spin up
40 clients with a table name and run it across 40 connections? But also, I
don't like loops in stored procedures. Working with loops in a set based
system hurts me but it's personal preference.
Like, I could write this in py
What would you be missing? You can encrypt databases. You can encrypt the
S3 buckets using KMS. You can govern access via SSH auth. When you do
backups, you can encrypt the tar.gz files, or whatever format, and store them
on S3. Same with the WAL files. The fact that Oracle charges for this is a
joke.
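As an illustration of that backup-and-encrypt flow (database, directory, and bucket names are placeholders; the parallel `-j` jobs require the directory format):

```
pg_dump -Fd mydb -j 4 -f mydb_dir              # parallel directory-format dump
tar -czf mydb.tar.gz mydb_dir
gpg --symmetric --cipher-algo AES256 -o mydb.tar.gz.gpg mydb.tar.gz
aws s3 cp mydb.tar.gz.gpg s3://my-backup-bucket/
```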
t up a credit card database. I assume those are locked down way
more than I ever will lock down a system.
On Wed, Dec 21, 2022, 6:13 PM Ron wrote:
> And encrypting a tar.gz file presumes a pretty small database. (The
> --jobs= option was added to pg_dump/pg_restore for just that reason.)
>
You could always create a UUID matching table for anything displayed to
users and keep a private ID for anything internal. From my practical
standpoint, one is 8 bytes, the other is 16 or 32. Any database
implementation should guarantee a unique value. I have had cases where it
didn't work but those
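A sketch of that split (table and column names are hypothetical; gen_random_uuid() is built in from PostgreSQL 13, or comes from pgcrypto before that):

```sql
-- Internal bigint key for joins, public UUID for anything shown to users
CREATE TABLE account (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,  -- 8 bytes, internal
    public_id uuid NOT NULL DEFAULT gen_random_uuid() UNIQUE,   -- 16 bytes, external
    name      text NOT NULL
);
```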
Why is it a terrible idea? I have been using them for years without a
single problem. I don't rely on them for create order. "Terrible" seems a bit
extreme.
Thanks,
Ben
On Sat, Jan 28, 2023, 3:39 PM Erik Wienhold wrote:
> > On 27/01/2023 01:48 CET Ron wrote:
> >
> > On 1/26/23 15:55, Erik Wienhol
Well... until two processes generate an identical UUID. That happened to me
several times. It's rare but when that happens, oh boy that is a mess to
figure out.
Thanks,
Ben
On Thu, Feb 2, 2023, 10:17 AM Miles Elam wrote:
> On Wed, Feb 1, 2023 at 10:48 AM Kirk Wolak wrote:
>
>>
>>
>> On Wed, Fe
No idea at all. We had the data for the insert and had to insert it again.
It was extremely confusing but oh boy did it wreck our systems.
Thanks,
Ben
On Thu, Feb 2, 2023, 6:17 PM Ron wrote:
> On 2/2/23 17:11, Peter J. Holzer wrote:
> > On 2023-02-02 10:22:09 -0500, Benedict Holl
That is the answer. PostgreSQL can upsert easily via triggers and ON
CONFLICT.
Thanks,
Ben
On Thu, Apr 6, 2023, 5:01 PM Adrian Klaver
wrote:
> On 4/5/23 23:21, Louis Tian wrote:
> > This is a question/feature request.
> >
>
> > Given the definition of upsert, I'd expect an upsert command to do
This went on too long. ON CONFLICT is the solution. It has been since
9.5. I have run it in a production stored proc without a single
problem.
This is an actual and literal solution.
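For reference, the ON CONFLICT form being discussed looks like this (table and columns are hypothetical):

```sql
INSERT INTO counters (key, value)
VALUES ('hits', 1)
ON CONFLICT (key)
DO UPDATE SET value = counters.value + EXCLUDED.value;
```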
Thanks,
Ben
On Sat, Apr 8, 2023, 5:51 PM Adrian Klaver
wrote:
> On 4/6/23 17:49, Louis Tian wrote:
> >
For documents that long, I would seriously consider using large objects and
referencing them with their OIDs. Text fields get put in a special location
within the database. It's similar (possibly identical) to using large
objects. Also, you can potentially compress them to save space on write and
read.
Yeah. For ease of use, out-of-the-box solutions that will just work: large
objects. You might know them as BLOBs in other SQL varieties. If you are
dealing with that much data, I'm going to assume that storage isn't really
your concern. I wouldn't even waste time compressing. I use them frequently
t
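A sketch of the large-object approach (table name and file path are hypothetical; lo_import runs server-side, so the path must be readable by the server process):

```sql
CREATE TABLE document (
    id      bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    content oid  -- OID of the large object
);

-- Import a file as a large object and keep its OID
INSERT INTO document (content)
VALUES (lo_import('/tmp/report.pdf'));

-- Read it back as bytea
SELECT lo_get(content) FROM document WHERE id = 1;
```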
MySQL isn't ACID. PostgreSQL is a full-featured database that doesn't allow
injection. It is very safe and secure. Also, PostgreSQL has a
much better user management system and database/table-level access ACLs.
Basically, you just asked a bunch of people who have used PostgreSQL over
In MySQL, when you create a table in a transaction, it will implicitly commit
the transaction and prevent a rollback. MySQL is not ACID.
Thanks,
~Ben
On Tue, Mar 12, 2019 at 11:44 AM Ron wrote:
> The Percona fork of MySQL makes active-active clustering very simple to
> set up.
>
> On 3/12/19 9:10
I am not saying it is not well documented. I am saying that it isn't ACID
compliant, which it isn't, as they document.
It comes up far more often than I would like, particularly with migrations
and schema changes. It is one of the huge reasons I actively refuse to work
with MySQL. I have never upg
I store large models in the database because I need to have historical
data to compare to. That said, I could probably also automate a git repo,
but that would be just that much more work, and git with binary files really
doesn't make sense. Storage is really cheap and I assume the database
stores byt
Hi All,
I am fairly sure this question has many answers, but here it goes:
I need a tool that can track schema changes in a PostgreSQL database, write
scripts to alter the tables, and store those changes in git. Are there
tools that exist that can do this?
BTW, I know if I use a framework with some for
~Ben
On Thu, May 16, 2019 at 1:06 PM Mark Fletcher wrote:
> On Thu, May 16, 2019 at 9:41 AM Benedict Holland <
> benedict.m.holl...@gmail.com> wrote:
>
>>
>> I need a tool that can track schema changes in a postgesql database,
>> write scripts to alter the tables
Hello all,
I really can't figure this one out. I am trying to get debugging working on a
PostgreSQL 10 database on Ubuntu. I installed the proper package,
postgresql-10-pldebugger. When I go to create the extension, I get the
error:
create extension pldbgapi
ERROR: could not open extension control f
This also sounds like a fairly advanced setup and a corporate environment.
Postgresql offers paid support and you probably want that.
Thanks,
Ben
On Tue, Jul 11, 2023, 4:33 PM Adrian Klaver
wrote:
> On 7/11/23 14:45, Johnathan Tiamoh wrote:
> > Hello Adrian
> >
> > 1) PgBouncer version.
> >
> >
Install docker and pull dpage/pgadmin4.
Seriously. It just works. All of the time, every time, with updates. Docker
is amazing.
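For anyone following along, the container setup is roughly this (ports, names, and credentials are placeholders; the image names are the official ones):

```
docker run -d --name pg -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:16
docker run -d --name pga \
    -e PGADMIN_DEFAULT_EMAIL=admin@example.com \
    -e PGADMIN_DEFAULT_PASSWORD=secret \
    -p 8080:80 dpage/pgadmin4
```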
On Fri, Mar 26, 2021, 2:55 PM Adrian Klaver
wrote:
> On 3/26/21 11:00 AM, Carlos Montenegro wrote:
> > Hello Adrian.
> > First I installed postgresql version 12.6 then
The containers for Postgres and pgadmin4 are incredible. This entire
thread, worrying about configuration settings and options, all
disappears. You just run the container. There are tutorials to help you get
it set up with nginx. It just works. The developers and maintainers have
done an incred
Why not use the docker container?
Thanks,
Ben
On Fri, Apr 2, 2021, 8:25 PM rob stone wrote:
> Hello,
>
> On Fri, 2021-04-02 at 22:01 +, Allie Crawford wrote:
> > Hi,
> > I am new in postgresql and I am trying to install PostgreSQL on Linux
> > server but instead of using the default locatio
I love how we would admonish SQL but love Lisp. There isn't a perfect
language. SQL is fine. C is damn good. C++ is impossible, Java is C++ but
simple, Python is a C wrapper. God help us if we had settled on Fortran. We
would still have single-core processors. Lisp at least allowed
multithreading, but i
In my opinion, ext4 will solve any and all problems without requiring a very
deep understanding of file system architecture. In short, I would stick with
ext4 unless you have a good reason not to. Maybe there is one. I have done
this a long time and never thought twice about which file system should
support
Honestly, I think if you are at the point in performance tuning where you care
about a file system, then you are in the upper 5% or higher. I am trying to
get my users to not do 100k individual inserts in Django rather than 1
insert using SQL. As for backups, I don't care how long they take. I have
separate
Right... which is why you do pg_basebackup infrequently. It also captures
WALs while conducting the backup. The tradeoff is that if you have a huge
number of WAL files, then running a restore can be quite time-consuming.
There isn't really a clear win here, though. You trade off a long-running
backup tha
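The pg_basebackup invocation under discussion would look roughly like this (the target directory is a placeholder):

```
# Tar-format, compressed base backup with WAL streamed alongside it
pg_basebackup -D /backups/base -Ft -z -X stream -P
```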
Check out rolling updates with Kubernetes, and yes, a containerized
PostgreSQL environment is likely the best option. The really nice bit about
containers is that you don't actually care about underlying hardware. You
need a Docker daemon. K8s manages clusters, and it sounds like your use case
would b
I would recommend storing a link to the file rather than the file itself.
Other than that, use BLOBs. I would probably recommend not storing any
binary objects in a database for a variety of reasons, but if you have to,
then bytea will work, assuming they are smaller than 1 GB.
Thanks,
- Ben
On Wed,
Pg_dump or do a wal archive.
Thanks,
Ben
On Fri, Feb 4, 2022, 10:49 PM Ron wrote:
> On 2/4/22 9:14 PM, sivapostg...@yahoo.com wrote:
>
> Hello,
> Got few (currently 3 will increase as days go) database in A2 hosting
> unmanaged vps server, running Ubuntu server 18 lts
>
> Planning to have an ad
This is a strange post. Why is SQL bad, and how do you reconcile that with it
managing 99%+ of all data? It's so bad that we have systems that plug into
SQL to query data outside of tables, like Athena or Excel.
Why are you not using pgadmin4? Yes, psql as a command line isn't great for
humans. It's s
So to summarize: people are bad programmers who refuse to learn SQL, so SQL
is the problem? Come on. You cannot bring that to a Postgres mailing list.
Look, it's not perfect. It's a pain. It is hard to generate queries (oh my
God, why are you doing this?) and it's hard to work with. You are describing