+1 to keeping it private unless someone has had a chance to scrub sensitive
data.
*Harold Dost* | @hdost
On Mon, Nov 16, 2020 at 8:25 PM Kevin Fenzi wrote:
> On Mon, Nov 16, 2020 at 11:14:39AM -0800, Adam Williamson wrote:
> > On Mon, 2020-11-16 at 11:
On Mon, Nov 16, 2020 at 11:14:39AM -0800, Adam Williamson wrote:
> On Mon, 2020-11-16 at 11:01 -0800, Kevin Fenzi wrote:
> > On Mon, Nov 16, 2020 at 12:58:12PM -0500, Matthew Miller wrote:
> > > This is a long shot, I know. When we converted to MediaWiki from MoinMoin
> > > in
> > > 2008, pages we
On Mon, 2020-11-16 at 11:01 -0800, Kevin Fenzi wrote:
> On Mon, Nov 16, 2020 at 12:58:12PM -0500, Matthew Miller wrote:
> > This is a long shot, I know. When we converted to MediaWiki from MoinMoin in
> > 2008, pages were imported, but history was lost. I often like to go
> > spelunking in the past
On Mon, Nov 16, 2020 at 12:58:12PM -0500, Matthew Miller wrote:
> This is a long shot, I know. When we converted to MediaWiki from MoinMoin in
> 2008, pages were imported, but history was lost. I often like to go
> spelunking in the past, and it's often frustrating to hit this barrier.
>
> As I re
On Mon, 16 Nov 2020 at 12:58, Matthew Miller
wrote:
> This is a long shot, I know. When we converted to MediaWiki from MoinMoin
> in
> 2008, pages were imported, but history was lost. I often like to go
> spelunking in the past, and it's often frustrating to hit this barrier.
>
> As I recall, Moi
This is a long shot, I know. When we converted to MediaWiki from MoinMoin in
2008, pages were imported, but history was lost. I often like to go
spelunking in the past, and it's often frustrating to hit this barrier.
As I recall, MoinMoin stored everything in flat files, so it wouldn't even
really
+1
-re
On 3/6/19 5:23 PM, ke...@scrye.com wrote:
From: Kevin Fenzi
Per averi, cloud.gnome.org is to be removed.
This host has no impact on fedora releases and shouldn't freeze.
Signed-off-by: Kevin Fenzi
---
inventory/group_vars/gnome-backups | 1 +
roles/gnome_backups/files/back
+1
On Wed, 6 Mar 2019 at 17:24, wrote:
>
> From: Kevin Fenzi
>
> Per averi, cloud.gnome.org is to be removed.
> This host has no impact on fedora releases and shouldn't freeze.
>
> Signed-off-by: Kevin Fenzi
> ---
> inventory/group_vars/gnome-backups |
From: Kevin Fenzi
Per averi, cloud.gnome.org is to be removed.
This host has no impact on fedora releases and shouldn't freeze.
Signed-off-by: Kevin Fenzi
---
inventory/group_vars/gnome-backups | 1 +
roles/gnome_backups/files/backup.sh | 1 -
roles/gnome_backups/files/ssh_confi
On 10/11/18 9:52 PM, Randy Barlow wrote:
> On Wed, 2018-10-10 at 00:17 +, ke...@scrye.com wrote:
>> +/usr/bin/pg_dump --exclude-table-data users --exclude-table-data
>> tokens --exclude-table-data 'social*' --exclude-table-data sessions
>> -C $DB | /usr/bin/pxz -
On Wed, 2018-10-10 at 00:17 +, ke...@scrye.com wrote:
> +/usr/bin/pg_dump --exclude-table-data users --exclude-table-data
> tokens --exclude-table-data 'social*' --exclude-table-data sessions
> -C $DB | /usr/bin/pxz -T4 > /backups/$DB-public-$(date +%F).dump.xz
It
> +++ b/roles/postgresql_server/files/backup-database.anitya
> @@ -10,7 +10,7 @@ DB=anitya
> # Make it use a limited number of threads because pxz will use all the
> # cpus which causes pg_dump to starve which causes...
>
> -/usr/bin/pg_dump -T users -T tokens -T 'social*' -T
> +++ b/roles/postgresql_server/files/backup-database.anitya
> @@ -10,7 +10,7 @@ DB=anitya
> # Make it use a limited number of threads because pxz will use all the
> # cpus which causes pg_dump to starve which causes...
>
> -/usr/bin/pg_dump -T users -T tokens -T 'social*'
because pxz will use all the
# cpus which causes pg_dump to starve which causes...
-/usr/bin/pg_dump -T users -T tokens -T 'social*' -T sessions -C $DB |
/usr/bin/pxz -T4 > /backups/$DB-public-$(date +%F).dump.xz
+/usr/bin/pg_dump --exclude-table-data users --exclude-table-data toke
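For readers skimming the hunks above: the change swaps pg_dump's -T/--exclude-table, which drops the named tables from the dump entirely, for --exclude-table-data, which keeps the table definitions but omits their rows, so a restore still recreates the (empty) users/tokens/sessions tables. Roughly, using the names from the diff:

  DB=anitya

  # Old form: the excluded tables disappear from the public dump completely
  /usr/bin/pg_dump -T users -T tokens -T 'social*' -T sessions -C $DB \
      | /usr/bin/pxz -T4 > /backups/$DB-public-$(date +%F).dump.xz

  # New form: table schemas stay in the dump, only the sensitive rows are
  # left out, so restoring it yields a working (if empty) set of tables
  /usr/bin/pg_dump --exclude-table-data users --exclude-table-data tokens \
      --exclude-table-data 'social*' --exclude-table-data sessions \
      -C $DB | /usr/bin/pxz -T4 > /backups/$DB-public-$(date +%F).dump.xz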
setting up a piwik instance to be used on our
>> websites. However, this instance lives in the cloud and thus can't hit
>> the VPN.
>>
>> A question I ran into and was told to email the list about and/or
>> bring up at the meeting, is: What should our backups st
instance lives in the cloud and thus can't hit
> the VPN.
>
> A question I ran into and was told to email the list about and/or
> bring up at the meeting, is: What should our backups story look like
> for such cases? Mostly, we have a MySQL database on this node that
> wil
es in the cloud and thus can't hit
> the VPN.
>
> A question I ran into and was told to email the list about and/or bring
> up at the meeting, is: What should our backups story look like for such
> cases? Mostly, we have a MySQL database on this node that will get
> really
into and was told to email the list about and/or bring
up at the meeting, is: What should our backups story look like for such
cases? Mostly, we have a MySQL database on this node that will get
really big fairly quickly (right now /var/lib/mysql is about 1.1GB and
piwik only has one site added, and
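A common answer for nodes that cannot reach the VPN is to have the backup server pull over plain ssh instead; a purely illustrative sketch, with the host name, key path and directories as placeholders rather than what was actually deployed:

  # On the backup server: pull the node's dump directory over public ssh
  rsync -a --delete -e 'ssh -i /root/.ssh/backup_id_rsa' \
      root@piwik01.cloud.example.org:/srv/backup/ /backups/piwik01/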
On Wed, 20 Apr 2016 16:43:56 +
Stephen John Smoogen wrote:
> On 20 April 2016 at 16:37, Kevin Fenzi wrote:
> > A bit before the freeze, the pkgs02 backups of /srv failed.
> >
> > This is the largest and most annoying backup we have since it's tons
> > and
On Wed, Apr 20, 2016 at 10:37:44AM -0600, Kevin Fenzi wrote:
> A bit before the freeze, the pkgs02 backups of /srv failed.
>
> This is the largest and most annoying backup we have since it's tons and
> tons and tons of git repos with small files in it. After the freeze, I
> w
+1 because I forgot.
On 20 April 2016 at 16:43, Stephen John Smoogen wrote:
> On 20 April 2016 at 16:37, Kevin Fenzi wrote:
>> A bit before the freeze, the pkgs02 backups of /srv failed.
>>
>> This is the largest and most annoying backup we have since it's tons and
>
On 20 April 2016 at 16:37, Kevin Fenzi wrote:
> A bit before the freeze, the pkgs02 backups of /srv failed.
>
> This is the largest and most annoying backup we have since it's tons and
> tons and tons of git repos with small files in it. After the freeze, I
> wonder if it would
A bit before the freeze, the pkgs02 backups of /srv failed.
This is the largest and most annoying backup we have since it's tons and
tons and tons of git repos with small files in it. After the freeze, I
wonder if it wouldn't be worthwhile to run a 'git gc' on all our pkgs
g
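The suggestion amounts to repacking each repository so the backup sees a few packfiles instead of thousands of loose objects; roughly something like the sketch below, where the exact layout under /srv is an assumption:

  # Repack every bare git repo under /srv so the backup run copies a handful
  # of packfiles per repo rather than huge numbers of small object files
  find /srv -type d -name '*.git' -prune -print | while read -r repo; do
      git --git-dir="$repo" gc --quiet
  done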
+1. Does not look like it would cause problems and can be backed out.
On 14 September 2015 at 12:56, Valentin Gologuzov wrote:
> Hi everyone,
> Copr has got a new component: dist-git. It's used to store users' SRPMs and
> we would like to back it up.
>
> +1s?
>
> diff -
On Mon, 14 Sep 2015 20:56:28 +0200
Valentin Gologuzov wrote:
> Hi everyone,
> Copr has got a new component: dist-git. It's used to store users'
> SRPMs and we would like to back it up.
>
> +1s?
>
> diff --git a/inventory/backups b/inventory/backups
> index e1
Hi everyone,
Copr has got a new component: dist-git. It's used to store users' SRPMs
and we would like to back it up.
+1s?
diff --git a/inventory/backups b/inventory/backups
index e1693d5..0a1753b 100644
--- a/inventory/backups
+++ b/inventory/backups
@@ -20,6 +20,7 @@ db-koji01
Applied, thanks.
kevin
>
> +1s? I would apply this, run the backup playbook and then watch
> tomorrow's backups to confirm they work. If not we could revert this.
>
+1 please apply.
--
Stephen J Smoogen.
his issue.
>
> +1s? I would apply this, run the backup playbook and then watch
> tomorrow's backups to confirm they work. If not we could revert this.
+1 from me
It's a local, easily revertible change and the backups aren't happening
as it is - I don't see much risk
row's backups to confirm they work. If not we could revert this.
kevin
--
diff --git a/playbooks/rdiff-backup.yml b/playbooks/rdiff-backup.yml
index 9accda6..fd9dec7 100644
--- a/playbooks/rdiff-backup.yml
+++ b/playbooks/rdiff-backup.yml
@@ -20,11 +20,11 @@
tasks:
- name: run rdiff-back
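For context, the playbook being patched drives rdiff-backup runs against each backed-up host; the general shape of such a run (host, paths and retention value below are assumptions, not the playbook's actual settings) is:

  # Incremental pull of a host's /srv into the local backup tree
  rdiff-backup --print-statistics \
      root@somehost.example.org::/srv /fedora_backups/somehost/srv

  # Separately, expire increments older than the retention window
  rdiff-backup --remove-older-than 30D --force /fedora_backups/somehost/srv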
Pushed, thanks.
kevin
On Tue, Jul 28, 2015 at 03:30:45PM -0600, Kevin Fenzi wrote:
> qadevel.qa.fedoraproject.org was out of backups waiting for a firewall
> change so the backup01 machine could reach it. This has now been
> completed, so I'd like to readd it to backups.
>
> +1s?
>
> diff -
+1
With kind regards,
Patrick Uiterwijk
Fedora Infra
- Original Message -
> qadevel.qa.fedoraproject.org was out of backups waiting for a firewall
> change so the backup01 machine could reach it. This has now been
> completed, so I'd like to readd it to backups.
>
>
qadevel.qa.fedoraproject.org was out of backups waiting for a firewall
change so the backup01 machine could reach it. This has now been
completed, so I'd like to readd it to backups.
+1s?
diff --git a/inventory/backups b/inventory/backups
index ed92428..e1693d5 100644
--- a/inventory/ba
On Mon, Oct 27, 2014 at 08:22:15AM -0600, Kevin Fenzi wrote:
> I'd like to add the production taskotron01.qa to backups per
> https://fedorahosted.org/fedora-infrastructure/ticket/4560
>
> We couldn't add this before because it needed a RHIT firewall change to
>
I'd like to add the production taskotron01.qa to backups per
https://fedorahosted.org/fedora-infrastructure/ticket/4560
We couldn't add this before because it needed a RHIT firewall change to
allow backup03 to talk to taskotron01.qa. This has been completed, so
it should just be a
2013/12/13 Andrea Veri
> 2013/12/13 Kevin Fenzi
>
>> Some minor nits:
>>
>> EXCLUDE_LIST='/etc/rsyncd/backup.exclude'
>>
>> Doesn't seem to be defined anywhere? Should that be in the users
>> directory and defined?
>
>
> Theoretically that entry will be read by rsync when the connection on the
>
2013/12/13 Kevin Fenzi
> Some minor nits:
>
> EXCLUDE_LIST='/etc/rsyncd/backup.exclude'
>
> Doesn't seem to be defined anywhere? Should that be in the users
> directory and defined?
Theoretically that entry will be read by rsync when the connection on the
relevant machine is accepted and i
Some minor nits:
EXCLUDE_LIST='/etc/rsyncd/backup.exclude'
Doesn't seem to be defined anywhere? Should that be in the users
directory and defined?
'recurse=yes' when adding the directories doesn't seem needed?
$item should be '{{ item }}' :)
(but I can clean that up later when I fix the rest
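On the EXCLUDE_LIST nit: in a setup like this the file is normally just a list of rsync exclude patterns fed in via --exclude-from, so it has to exist wherever the rsync actually reads it. A sketch only; the file contents, module name and paths below are illustrative assumptions:

  # /etc/rsyncd/backup.exclude -- one rsync pattern per line (illustrative)
  #   *.tmp
  #   cache/
  #   lost+found/

  EXCLUDE_LIST='/etc/rsyncd/backup.exclude'
  rsync -a --delete --exclude-from="$EXCLUDE_LIST" \
      rsync://gnome-host.example.org/backups/ /fedora_backups/gnome/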
Team (thanks Kevin and Stephen) is willing to host our backups on one of
>> the backups nodes. (backup03)
>>
>> Given that no backups have been taken for more than 10 days, I'm hereby
>> asking for a freeze break to finally set up a nightly rsync run from
>> b
On 12 December 2013 15:59, Andrea Veri wrote:
> Hi,
>
> as we discussed earlier today in #fedora-noc, the GNOME Project
> currently does not have a backup plan in place and the Fedora Infrastructure
> Team (thanks Kevin and Stephen) is willing to host our backups on one of
>
Hi,
as we discussed earlier today in #fedora-noc, the GNOME Project
currently does not have a backup plan in place and the Fedora Infrastructure
Team (thanks Kevin and Stephen) is willing to host our backups on one of
the backups nodes. (backup03)
Given that no backups have been taken for more than
On Fri, 1 Nov 2013 15:53:54 -0600
Tim Flink wrote:
With that +2, I've committed and pushed the change.
Thanks,
Tim
> I've recently set up a phabricator instance to trial for qa work. Even
> though it's a dev host, we're still using it as our bug tracker etc.
> and I'd like to have it backed up
sure, +1
kevin
Seems fair to me. I say +1
On Fri, Nov 1, 2013 at 10:53 PM, Tim Flink wrote:
> I've recently set up a phabricator instance to trial for qa work. Even
> though it's a dev host, we're still using it as our bug tracker etc.
> and I'd like to have it backed up.
>
> This patch adds qadevel.cloud.fed
I've recently set up a phabricator instance to trial for qa work. Even
though it's a dev host, we're still using it as our bug tracker etc.
and I'd like to have it backed up.
This patch adds qadevel.cloud.fedoraproject.org as a host to be backed
up and adds /srv/backup (where the mysqldump goes)
a
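The pattern in that last line is worth spelling out: the database is dumped into a path (/srv/backup) that the file-level backup already covers, so the backup server needs no database-aware tooling at all. A minimal cron sketch, with the schedule and file naming as assumptions:

  # /etc/cron.d/qadevel-db-dump (illustrative)
  # Nightly dump into /srv/backup so the regular backup run picks it up
  30 2 * * * root mysqldump --single-transaction --all-databases | gzip > /srv/backup/qadevel-$(date +\%F).sql.gz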
On Fri, 17 May 2013 11:10:03 -0600
Stephen John Smoogen wrote:
> On 17 May 2013 10:36, Kevin Fenzi wrote:
>
> > Greetings.
> >
> > Backups of lockbox01 are unfortunately backing up .snapshot
> > directories and their contents. This pulls in a ton of stuff and
>
On 17 May 2013 10:36, Kevin Fenzi wrote:
> Greetings.
>
> Backups of lockbox01 are unfortunately backing up .snapshot directories
> and their contents. This pulls in a ton of stuff and makes backups take
> forever and contain things we don't want.
>
> This one line ch
On Fri, 17 May 2013 10:36:21 -0600
Kevin Fenzi wrote:
> Greetings.
>
> Backups of lockbox01 are unfortunately backing up .snapshot
> directories and their contents. This pulls in a ton of stuff and
> makes backups take forever and contain things we don't want.
>
> T
Greetings.
Backups of lockbox01 are unfortunately backing up .snapshot directories
and their contents. This pulls in a ton of stuff and makes backups take
forever and contain things we don't want.
This one line change should hopefully make it exclude .snapshot dirs.
(Apparently it ne
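The one-line change boils down to an extra exclude pattern on the backup run so the .snapshot trees are skipped; in an rdiff-backup-style setup (as used elsewhere in this digest) it would look roughly like this, with the exact pattern and paths as assumptions since the hunk itself is cut off above:

  # Skip .snapshot directories at any depth when backing up lockbox01
  rdiff-backup --exclude '**/.snapshot' \
      root@lockbox01.example.org::/ /fedora_backups/lockbox01/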
On Thu, 9 May 2013 15:35:44 -0600
Kevin Fenzi wrote:
> Another fun mail to start some discussion. ;)
>
> The topic this time is backups.
>
> Currently, we do backups two ways:
>
> 1. We have a backup server in phx2 with a tape drive running bacula.
> It backs up
Another fun mail to start some discussion. ;)
The topic this time is backups.
Currently, we do backups two ways:
1. We have a backup server in phx2 with a tape drive running bacula. It
backs up machines and spools the backups to tape. It backs up the
following:
ask01
bastion01
bastion02
On 18 March 2013 19:25, seth vidal wrote:
> After a super-fun-time debacle restoring a single file today I'd like
> to talk about our backups a bit.
>
> Right now our backups are:
>
> - bacula to a few central servers and then off to tape.
>
> That seems like it is
On Mon, 18 Mar 2013 21:25:23 -0400
seth vidal wrote:
> After a super-fun-time debacle restoring a single file today I'd like
> to talk about our backups a bit.
>
> Right now our backups are:
>
> - bacula to a few central servers and then off to tape.
(we also have dis
After a super-fun-time debacle restoring a single file today I'd like
to talk about our backups a bit.
Right now our backups are:
- bacula to a few central servers and then off to tape.
That seems like it is not scaling super-duper well for our size of disk
storage. It also seems like it
Applied. Thanks.
kevin
On 8 May 2012 08:42, Kevin Fenzi wrote:
> After the move the other week to the hosted01/02 cluster, I switched
> backups to backup hosted02. However, we backed out this migration and
> went back to hosted03 as the active node.
>
> I'd like to switch our backups back to hosted0
On 05/08/2012 10:42 AM, Kevin Fenzi wrote:
> After the move the other week to the hosted01/02 cluster, I switched
> backups to backup hosted02. However, we backed out this migration and
> went back to hosted03 as the active node.
>
> I'd like to switch our backups back t
After the move the other week to the hosted01/02 cluster, I switched
backups to backup hosted02. However, we backed out this migration and
went back to hosted03 as the active node.
I'd like to switch our backups back to hosted03 so we are actually
backing up the thing that our users are
On Thu, 2011-10-27 at 12:23 -0400, seth vidal wrote:
> On Thu, 2011-10-27 at 09:21 -0700, Toshio Kuratomi wrote:
> > +1, not having backups for hosted is a little scary :-)
> >
> > If it can't handle the load we can revert and try something else.
>
>
> +1
On Thu, 2011-10-27 at 09:21 -0700, Toshio Kuratomi wrote:
> +1, not having backups for hosted is a little scary :-)
>
> If it can't handle the load we can revert and try something else.
+1
But watch it close and if it starts to overload - let's punt and do the
sb05
+1, not having backups for hosted is a little scary :-)
If it can't handle the load we can revert and try something else.
-Toshio
On Thu, Oct 27, 2011 at 10:18:12AM -0600, Kevin Fenzi wrote:
> In the past, hosted01 has rsync'd its content to hosted02, and then
> hoste
In the past, hosted01 has rsync'd its content to hosted02, and then
hosted02 gets backed up to backup01. However, we are currently having
an odd network issue on serverbeach05/hosted02, causing it to not have
external connectivity periodically, which means backups to backup01
fail. ;(
On Wednesday 24 March 2010 12:08:00 pm Toshio Kuratomi wrote:
> On Wed, Mar 24, 2010 at 10:54:07AM -0500, Mike McGrath wrote:
> > ---
> >
> > configs/db/backup-dbs |2 ++
> > 1 files changed, 2 insertions(+), 0 deletions(-)
> >
> > diff --git a/configs/db/backup-dbs b/configs/db/backup-dbs
>
On Wed, 24 Mar 2010, Toshio Kuratomi wrote:
> On Wed, Mar 24, 2010 at 10:54:07AM -0500, Mike McGrath wrote:
> > ---
> > configs/db/backup-dbs |2 ++
> > 1 files changed, 2 insertions(+), 0 deletions(-)
> >
> > diff --git a/configs/db/backup-dbs b/configs/db/backup-dbs
> > index 83ed6c8..7af66
On Wed, Mar 24, 2010 at 10:54:07AM -0500, Mike McGrath wrote:
> ---
> configs/db/backup-dbs |2 ++
> 1 files changed, 2 insertions(+), 0 deletions(-)
>
> diff --git a/configs/db/backup-dbs b/configs/db/backup-dbs
> index 83ed6c8..7af66fb 100755
> --- a/configs/db/backup-dbs
> +++ b/configs/db
On Wed, 24 Mar 2010, Jesse Keating wrote:
> On Wed, 2010-03-24 at 10:54 -0500, Mike McGrath wrote:
> > # Sync out
> > for host in db01 db02 db03; do
> > +# Sleep if any other rsyncs are already
> > +while ssh $host "pgrep rsync" | grep -q [0-9]; do sleep 10; done
>
> Seems like a pretty
On Wed, 2010-03-24 at 10:54 -0500, Mike McGrath wrote:
> # Sync out
> for host in db01 db02 db03; do
> +# Sleep if any other rsyncs are already
> +while ssh $host "pgrep rsync" | grep -q [0-9]; do sleep 10; done
Seems like a pretty wide net, what if somebody is rsyncing something
else t
---
configs/db/backup-dbs |2 ++
1 files changed, 2 insertions(+), 0 deletions(-)
diff --git a/configs/db/backup-dbs b/configs/db/backup-dbs
index 83ed6c8..7af66fb 100755
--- a/configs/db/backup-dbs
+++ b/configs/db/backup-dbs
@@ -20,6 +20,8 @@ mv $DEST/$HOSTNAME.new $DEST/$HOSTNAME
# Sync
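On the 'wide net' concern quoted above: pgrep rsync matches any rsync on the box. If that ever bites, one narrower option is to match on the command line instead; a sketch only, since the pattern below is an assumption about what the backup rsync command lines actually contain:

  # Wait only for rsyncs that look like backup syncs, not any rsync at all.
  # The [r] keeps pgrep -f from matching this check's own command line.
  for host in db01 db02 db03; do
      while ssh "$host" "pgrep -f '[r]sync .*backup'" | grep -q '[0-9]'; do
          sleep 10
      done
      # ...then run this host's sync as before
  done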