James Busser wrote:
> Locally there have been some lapses in government information
> management, e.g. some media (that happened to contain intact health
> information) got sold, also there was a break-in at a local public
> health office and its computer (which contained a database) was stolen.

Can you point me at any published details of the latter? Something
similar happened here over a decade ago and it has been largely
forgotten, which is a shame, because it was a salutary lesson.

> Which reminded me to clarify:
>
> - if a server that is hosting gnumed's postgres databases is taken into
> a computer shop for service, and the server's hard drive(s) are booted
> from another system on which the user has root access, will that root
> access grant them permission to all of the drive's data?

Yes, unless the drive is encrypted. And if it is encrypted, you need to
make sure the encryption key is not stored on the drive itself (as MS
Windows 2000 and maybe 2003 do by default for their encrypted file
systems...). Generally the encryption key needs to be entered at boot
time on the console, or read from removable media if you want unattended
boot capability - but the latter means that the encryption key disc will
still be in the server when the burglar nicks it in the middle of the
night.

> - if access to the postgres data is controlled by entries in the
> pg_hba.conf and pg_ident.conf files, and if their entries are
> modifiable per above (or if the user can just install their own copy
> of postgres and configure it as they like), would that provide a means
> to defeat any userids and passwords that had been created for the
> gnumed databases?

Yes, any PG security can be completely bypassed if you have access to
the config files.

> - are the gnumed databases only as secure as the users' passwords, so
> that if it is known that doctor edgar smith is a member of a
> surgery/practice, and it is guessed that a userid esmith might be
> configured into gnumed, does read-access to the entire data set depend
> on esmith having created an adequately strong password (not esmith)?

That is true of anything that relies on username/password
authorisation - it is only as secure as the password. Apart from easily
guessable passwords, the other threat is passwords stolen by keyloggers
or other malware running on the client workstation.

> - - is it built-in or easily added to GNUmed to be able to specify
> minimum requirements for a valid password? Presumably these are stored
> encrypted so that while an administrator could over-write a password,
> they could not know the actual password that had been used?

PG stores passwords in salted hash form (there is a config option for
what sort of hash, I think - MD5 is the default, which is fine), but it
would be up to GNUmed to enforce rules on passwords (assuming GNUmed
supervises password set-up and changing).

> - if a backup dump were used to recreate a GNUmed database, would the
> dumps contain userids and encrypted password values, and would these
> credentials (and the associated security) be restored as part of
> getting the database back up?

That depends on what switches are given to pg_dump when the dump is
created, but user details and credentials can be included in a dump.
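By way of illustration (the database name here is made up - substitute
the real GNUmed database): with the standard PostgreSQL tools, a
per-database dump carries no user accounts at all, because roles and
their password hashes are cluster-wide objects that only pg_dumpall
emits:

    # schema and data for one database - no roles, no password hashes
    pg_dump -Fc -f gnumed.dump gnumed_v2

    # role definitions, including the md5 password hashes, are
    # cluster-wide "globals" and need their own dump (-g = globals only)
    pg_dumpall -g > globals.sql

So whether the userids and password hashes come back on restore depends
on whether that globals dump is restored alongside the database dump.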
> - what about dumps? Presumably these are simply structured data,
> unencrypted, so anyone who gets hold of a dump could freely extract
> from it. Certainly the personally identifying information could be
> useful and the linked information reassembled by recreating the
> database per above. Thus, prior to burning dump files to disc, each
> dump file should be encrypted, presumably using a special key for this
> purpose, and using it consistently so that the poor people who must
> legitimately restore from backup can do so?

Yes, you really should encrypt database dumps before storing them on
removable media, which is particularly vulnerable to loss or theft (eg
from briefcases left in cars etc). Using a public/private key pair with
gpg (GnuPG) works well because you don't need to store the private key
on the server where the db dumps will be encrypted, just the public key.

> - how should subscribers to gnotary coordinate their processes? They
> should presumably
> - - write the dump into a backup directory
> - - hash the dump and write a copy of the hash (maybe in the backup
> directory?)
> - - send the hash to gnotary
> - - save into the backup directory a copy of the returned, signed,
> timestamped message
> - - encrypt the dump
> - - burn backup directory contents to CD
> (or two CDs if duplicate copies are meant to be kept)
> - - clear out (or nest more deeply) the contents of the backup
> directory

That is essentially the sequence (sketched as a script below). A really
belt-and-braces approach would also automate a test restore of at least
one of the dumps from the removable media from time to time - perhaps
weekly. Really, such test restores should be done on a separate computer
with a different CD-ROM drive.
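In script form the nightly steps might look something like this - all
the paths, the database name, the key address and, in particular, the
way the hash is submitted are illustrative and would need to be checked
against the actual gnotary interface:

    #!/bin/sh
    # nightly backup sketch: dump, hash, notarise, encrypt, burn
    BACKUP_DIR=/var/backups/gnumed
    STAMP=`date +%Y-%m-%d`
    DUMP="$BACKUP_DIR/gnumed-$STAMP.dump"

    # 1. write the dump into the backup directory
    pg_dump -Fc -f "$DUMP" gnumed_v2

    # 2. hash the dump and keep a copy of the hash alongside it
    sha1sum "$DUMP" > "$DUMP.sha1"

    # 3. send the hash to the notary; the signed, timestamped reply
    #    must be saved back into $BACKUP_DIR when it arrives
    mail -s "gnotary digest $STAMP" notary@example.org < "$DUMP.sha1"

    # 4. encrypt with the practice's *public* key (assumed already
    #    imported and trusted on this server), then discard the
    #    plaintext so only the encrypted copy gets burnt
    gpg --encrypt --recipient backup@practice.example "$DUMP"
    rm "$DUMP"

    # 5. burn the backup directory to CD (device names vary by system)
    mkisofs -r -o /tmp/backup.iso "$BACKUP_DIR"
    cdrecord dev=/dev/cdrw /tmp/backup.iso

The final clear-out of the backup directory, and filing gnotary's signed
reply when it comes back, are left out because both depend on local
arrangements.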
We have automated this to a degree - we copy the encrypted db dumps
produced nightly from our main PG database, which is on a server in a
data centre, to a local server, where the dump is burnt to CD-ROM. Every
day an admin person changes the disc in the local server for a new
CD-ROM and puts the previous one in our safe.

In my office I have an old laptop with Fedora Core on it. Once a week, I
take one of the archive CD-ROMs, put it in the laptop, and load the
private key from a memory stick which is stored in a different safe (and
there are redundant copies of the private key on all sorts of media,
including paper, in safes in multiple locations, to guard against loss
of the private key). The laptop then labours all day decrypting the dump
and restoring it using pg_restore, and then runs some summary statistics
on the restored database as an integrity check (although if pg_restore
doesn't report errors, it is pretty safe to assume that the restore
worked OK) - the results are stored and used to compare against summary
stats from future restore tests. It takes a few seconds to glance at the
results to confirm that the restore from CD-ROM worked OK. The DB
directory on the laptop is then deleted, and the laptop goes back in the
safe until next week.
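In command form that weekly check is roughly the following (the file
names, the scratch database and the statistics query are all
illustrative):

    # import the private key from the memory stick for this session
    gpg --import /media/stick/backup-private-key.asc

    # decrypt the dump off the archive CD-ROM and restore it into a
    # scratch database
    gpg --decrypt /media/cdrom/gnumed-dump.gpg > restored.dump
    createdb restore_test
    pg_restore -d restore_test restored.dump

    # crude integrity check: summary stats to file away and compare
    # against the figures from earlier weekly restores
    psql -d restore_test -c "SELECT count(*) FROM some_clinical_table;"

    # clean up so no patient data lingers on the laptop
    dropdb restore_test
    rm restored.dump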
All the foregoing is being documented, at which stage I hope to be able
to hand over the entire process to admin staff, with clear instructions
about reporting errors. Nevertheless, it only takes a few minutes of my
time each week. It took a contractor about a day to set all this up for
us - not rocket science, but he did it much quicker than I could have.

Tim C

> PS are these questions too "general purpose" for a gnumed-devel list?
> Maybe in a perfect world they would be better-suited to a -user list,
> but OK for now?

_______________________________________________
Gnumed-devel mailing list
[email protected]
http://lists.gnu.org/mailman/listinfo/gnumed-devel