Thanks Peter, we will look into it. -- Khushboo
On Tue, Jan 18, 2022 at 4:30 AM Peter <p...@citylink.dinoex.sub.org> wrote:
>
> Folks,
>
> I am looking at change 72f3730c34ed8f741dfee880551c30632dfec263
>
>
> --- a/web/pgadmin/utils/driver/psycopg2/connection.py
> +++ b/web/pgadmin/utils/driver/psycopg2/connection.py
> @@ -313,6 +318,13 @@ class Connection(BaseConnection):
>              os.environ['PGAPPNAME'] = '{0} - {1}'.format(
>                  config.APP_NAME, conn_id)
>
> +        if config.SERVER_MODE and \
> +            session['_auth_source_manager_obj']['current_source'] == \
> +                KERBEROS and 'KRB5CCNAME' in session \
> +                and manager.kerberos_conn:
> +            lock.acquire()
> +            environ['KRB5CCNAME'] = session['KRB5CCNAME']
> +
>          pg_conn = psycopg2.connect(
>              host=manager.local_bind_host if manager.use_ssh_tunnel
>              else manager.host,
>
>
> So you do locking here. And you do the locking only when we have a
> KRB5CCNAME.
>
> Now imagine we allow both auth styles, INTERNAL *and* Kerberos.
> If I get this right, then users with INTERNAL auth will *not* have
> their connects locked.
>
> But we have only *one* environment; there can be only one. And it is
> effective for the entire process, for all threads, for all sessions,
> for all users at the same time.
>
> So when an INTERNAL auth user connects to the database from another
> thread within that time slice, they will happily use the Kerberos
> credentials from the environment, which do not belong to them.
>
> So if I get this right, then we must lock *every* connect from
> everybody, at every place throughout the entire application.
>
>
> When I did this as a proof of concept with 4.19, I was absolutely not
> sure whether I had found all the connects throughout the application -
> but at least I found more than you. ;)
>
> There are at least two more in that same file, psycopg2/connection.py,
> rather near the end. They concern querying and cancelling transactions
> and async stuff - I didn't fully understand it. But I would assume
> that we need the same gimmick there as well.
>
>
> Then there is this one:
>
> --- a/web/pgadmin/misc/bgprocess/processes.py
> +++ b/web/pgadmin/misc/bgprocess/processes.py
> @@ -278,13 +279,16 @@ class BatchProcess(object):
>          env['PROCID'] = self.id
>          env['OUTDIR'] = self.log_dir
>          env['PGA_BGP_FOREGROUND'] = "1"
> +        if config.SERVER_MODE and session and \
> +            session['_auth_source_manager_obj']['current_source'] == \
> +                KERBEROS:
> +            env['KRB5CCNAME'] = session['KRB5CCNAME']
>
>          if self.env:
>              env.update(self.env)
>
>
> This should be a background process with a separate PID that belongs to
> us, so there is no need to lock that one. But if what I said above holds
> true, then the point where we copy the environment must probably still
> be locked:
>
>
> --- pgadmin/misc/bgprocess/processes.py.orig    2020-03-19 18:26:42.102998018 +0100
> +++ pgadmin/misc/bgprocess/processes.py 2020-03-20 02:22:54.678435000 +0100
> @@ -338,7 +341,14 @@
>          )
>
>          # Make a copy of environment, and add new variables to support
> -        env = os.environ.copy()
> +
> +        pgconn_lock.acquire()
> +        env = os.environ.copy()
> +        pgconn_lock.release()
> +        if request.environ.get("REMOTE_USER") is not None:
> +            env['KRB5CCNAME'] = os.getenv('PGADMIN4_KRB5CCPATH') + \
> +                request.environ.get("REMOTE_USER")
> +
>          env['PROCID'] = self.id
>          env['OUTDIR'] = self.log_dir
>          env['PGA_BGP_FOREGROUND'] = "1"
>
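
A minimal sketch of the "lock every connect" scheme Peter describes above, assuming one process-wide lock; the names connect_lock and locked_connect(), and the keyword arguments forwarded to psycopg2.connect(), are placeholders rather than pgAdmin's actual API. The point is that *every* connect takes the lock, Kerberos or not, and the environment is restored before the lock is released:

# Sketch only, not the actual pgAdmin code: connect_lock, locked_connect()
# and the keyword arguments passed through to psycopg2.connect() are assumed
# names for illustration.

import os
import threading

import psycopg2

connect_lock = threading.Lock()  # one lock for the whole process


def locked_connect(krb5ccname=None, **connect_kwargs):
    """Serialize every psycopg2.connect() around the os.environ window.

    os.environ is process-global, so a KRB5CCNAME set for one session is
    visible to every other thread until the connect has finished.  Taking
    the lock for *all* users, not just Kerberos ones, keeps an INTERNAL-auth
    connect from picking up someone else's credential cache.
    """
    with connect_lock:
        saved = os.environ.pop('KRB5CCNAME', None)
        try:
            if krb5ccname is not None:
                os.environ['KRB5CCNAME'] = krb5ccname
            return psycopg2.connect(**connect_kwargs)
        finally:
            # Restore the environment before the next thread takes the lock.
            if saved is not None:
                os.environ['KRB5CCNAME'] = saved
            else:
                os.environ.pop('KRB5CCNAME', None)

A Kerberos call site would pass the session's cache path, e.g. locked_connect(krb5ccname=session['KRB5CCNAME'], host=..., dbname=...); an INTERNAL-auth call site would pass no krb5ccname and would still be serialized through the same lock.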
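
The background-process side could follow the same pattern: the snapshot of os.environ is taken under that same lock, and the per-user KRB5CCNAME then goes only into the child's private copy, so the child itself needs no locking. Again only a sketch; spawn_bg_process() and the cc_dir layout are assumed names, and connect_lock refers to the lock in the sketch above, not to anything in pgAdmin.

# Sketch only: reuses connect_lock from the previous snippet; spawn_bg_process()
# and cc_dir are assumptions, not pgAdmin's real BatchProcess code.

import os
import subprocess


def spawn_bg_process(args, remote_user=None, cc_dir='/tmp/pgadmin4-krb5cc'):
    """Start a background process with its own snapshot of the environment."""
    with connect_lock:
        # Take the snapshot while no connect is mutating os.environ.
        env = os.environ.copy()
    if remote_user is not None:
        # The per-user credential cache goes only into the child's private
        # copy, so it cannot leak into other threads of this process.
        env['KRB5CCNAME'] = os.path.join(cc_dir, remote_user)
    env['PGA_BGP_FOREGROUND'] = "1"
    return subprocess.Popen(args, env=env)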