Re: [Nix-dev] Going through hell with systemd timers

2016-01-28 Thread zimbatm
We really need to get rid of that SSL_CERT_FILE environment variable...
related issue: https://github.com/NixOS/nixpkgs/issues/8486
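A minimal sketch of the per-service workaround discussed in this thread (the `myservice` name is a placeholder):

```nix
# Forward the CA bundle path that NixOS already sets system-wide into
# one unit's environment ("myservice" is a placeholder name).
systemd.services.myservice.environment = {
  inherit (config.environment.variables) SSL_CERT_FILE;
};
```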

On Thu, 28 Jan 2016 at 15:06 4levels <4lev...@gmail.com> wrote:

> Dear Guillaume,
>
> you saved my day (and sleepless last night)!
> This line worked all the magic for me:
>
> environment = {
>   inherit (config.environment.variables) SSL_CERT_FILE;
> };
>
> Make sure to pass through our office whenever you're near Gent, Belgium,
> we're keeping a bottle of champagne chilled for you ;-)
>
> And again nix-dev proves to be the most valuable resource in our NixOS /
> NixOps experience!
>
> Kind regards to you all, and as before, keep up the amazing work and great
> attitude..
>
>
> Erik
>
> On Thu, Jan 28, 2016 at 11:19 AM Guillaume Maudoux (Layus) <
> layus...@gmail.com> wrote:
>
>> Hi,
>>
>> My experience with urlwatch was that the SSL_CERT_FILE env var was
>> missing.
>> This may also be your issue if you are using the network.
>>
>> It is however always possible to run the service manually, and see the
>> logs.
>> A service declaration with a startAt attribute creates two units, <name>.service
>> and <name>.timer.
>> You can start your service with # systemctl start <name>.service and
>> see the logs in journald.
>> (No need to wait for the timer.)
>>
>> I made urlwatch work with the following snippet:
>>
>>   systemd.services.urlwatch = rec {
>> description = "Run urlwatch (${startAt})";
>> startAt = "hourly";
>> environment = {
>>   inherit (config.environment.variables) SSL_CERT_FILE;
>> };
>>
>> serviceConfig = {
>>   User = "layus"; # should use a user unit...
>>   ExecStart = "${urlwatch}/bin/urlwatch -v";
>> };
>>   };
>>
>> For debugging, I used:
>> # systemctl start urlwatch.service : Start the service once.
>> $ systemctl status urlwatch.service -l -n 1000 : See the systemd logs
>> for the last run, up to 1000 full lines.
>> # journalctl -xef --unit=urlwatch : Print all the logs, -f follows the
>> output in real time.
>>
>> A very simple trick is indeed to dump the environment at the start of the
>> script.
>>
>> Layus.
>>
>> On 28/01/16 10:59, 4levels wrote:
>>
>> Hi Zimbatm (is that your name :-)
>>
>> I'm currently trying to debug by using printenv to view the differences
>> in both environments.  I'm not yet familiar with using nix-shell :-s
>> But since I cannot even deploy or rebuild switch anymore (see my other
>> email to this list) I'm pretty stuck.
>>
>> I'm currently repartitioning and reinstalling the Vultr machines (*sigh*)
>> to see if that brings any changes..
>>
>> Thank you for your pointer to the environment vars!
>>
>> Erik
>>
>> On Thu, Jan 28, 2016 at 10:56 AM zimbatm <zimb...@zimbatm.com> wrote:
>>
>>> One common error with system services is a missing environment variable.
>>> When testing from your shell you will have $HOME set, for example.
>>>
>>> On Thu, 28 Jan 2016 09:43 4levels <4lev...@gmail.com>
>>> wrote:
>>>
 Hi Exi,

 thank you for your reply.

This is the timer config I'm using (note that I'm starting this every
5 minutes to troubleshoot; it is supposed to run every 2 hours or so)

 backup = {
   description = "Backup service";
   after = [ "network.target" "mysql.target" ];
   path = [ pkgs.procps pkgs.gawk pkgs.nettools pkgs.mysql pkgs.php 
 pkgs.duplicity pkgs.postfix ];
   script =
   ''
   ./s3Backup.sh
   '';
   startAt = "*-*-* *:0/5:00";



 And the contents of the s3Backup.sh script:

 s3Backup = name:
   ''
 #!${pkgs.bash}/bin/bash

 ${builtins.readFile ./src/envrc}

 # Your GPG key
 GPG_KEY=

# export PATH="$PATH:/var/setuid-wrappers:/run/current-system/sw/bin:/run/current-system/sw/sbin"

 # Set up some variables for logging
 LOGFILE="/var/lib/projects/${name}/log/duplicity-backup.log"
DAILYLOGFILE="/var/lib/projects/${name}/log/duplicity-backup.daily.log"
FULLBACKLOGFILE="/var/lib/projects/${name}/log/duplicity-backup.full.log"
 HOST=`hostname`
 DATE=`date +%Y-%m-%d`
 MAILADDR="d...@domain.com"
 TODAY=$(date +%d%m%Y)

 # The S3 destination followed by bucket name
DEST="s3://s3.amazonaws.com/projects-backup-eu-west/${name}"

 is_running=$(ps -ef | grep duplicity  | grep python | wc -l)

 if [ ! -f $FULLBACKLOGFILE ]; then
   touch $FULLBACKLOGFILE
 fi

 if [ $is_running -eq 0 ]; then
   # Clear the old daily log file
   cat /dev/null > ''${DAILYLOGFILE}

   # Trace function for logging, don't change this
   trace () {
 stamp=`date +%Y-%m-%d_%H:%M:%S`
 echo "$stamp: $*" >> ''${DAILYLOGFILE}
   }

   # 

Re: [Nix-dev] Going through hell with systemd timers

2016-01-28 Thread exi
Hi Erik,

does duplicity use an ssh connection? Does it depend on your ssh
passphrase to be present? Do you use a ssh agent?

"BackendException" from the traceback looks more like a connection issue
than a nix issue.
Which user is running the timer command?
Could you post your timer config?

Regards,

exi

On 28.01.2016 09:07, 4levels wrote:
> Hi Nix-Devs,
>
> yesterday I came to a point of really wanting to break something out
> of sheer frustration over failing systemd timer calls.
>
> I've set up a duplicity backup script over s3 that works flawlessly
> when invoked from terminal, but fails miserably when being called from
> a timer.
>
> I've tried everything I know, including but not limited to adding my
> full user $PATH to the script, adding all possible related packages to
> the path directive, .. nothing seems to work.
>
> The duplicity error is very vague (BackendException) and when adding
> maximum verbosity to the duplicity call ( -v9 ) I do get some error
> which seems to be related to a very old duplicity bug.  Since
> duplicity uses python (the version I could trace seems to be 2.7) with
> python-boto for the s3 backend - the issue seems to be related to
> this, but I can't figure out what could be the reason since all
> required packages are installed and operational from the commandline.
>
> Has anyone experience with running python-based code in systemd timer
> calls (without being bitten)?
>
> On top of that, GitHub went down for a couple of hours last night and
> to make things even worse, NixOps cannot finish a deploy on any of the
> 5 machines I'm managing with it anymore, with a vague error message:
>
> v-ams02...> updating GRUB 2 menu...
> v-ams02...> Died at
> /nix/var/nix/profiles/system/bin/switch-to-configuration line 264.
> v-ams02...> error: unable to activate new configuration
>
> Kind regards.
>
> Erik 
>
> Duplicity error with maximum verbosity:
> Backend error detail: Traceback (most recent call last):
>   File
> "/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/bin/.duplicity-wrapped",
> line 1519, in <module>
> with_tempdir(main)
>   File
> "/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/bin/.duplicity-wrapped",
> line 1513, in with_tempdir
> fn()
>   File
> "/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/bin/.duplicity-wrapped",
> line 1354, in main
> action = commandline.ProcessCommandLine(sys.argv[1:])
>   File
> "/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/lib/python2.7/site-packages/duplicity/commandline.py",
> line 1070, in ProcessCommandLine
> backup, local_pathname = set_backend(args[0], args[1])
>   File
> "/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/lib/python2.7/site-packages/duplicity/commandline.py",
> line 961, in set_backend
> globals.backend = backend.get_backend(bend)
>   File
> "/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/lib/python2.7/site-packages/duplicity/backend.py",
> line 223, in get_backend
> obj = get_backend_object(url_string)
>   File
> "/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/lib/python2.7/site-packages/duplicity/backend.py",
> line 209, in get_backend_object
> return factory(pu)
>   File
> "/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/lib/python2.7/site-packages/duplicity/backends/_boto_single.py",
> line 161, in __init__
> self.resetConnection()
>   File
> "/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/lib/python2.7/site-packages/duplicity/backends/_boto_single.py",
> line 187, in resetConnection
> raise BackendException(err.message)
>


___
nix-dev mailing list
nix-dev@lists.science.uu.nl
http://lists.science.uu.nl/mailman/listinfo/nix-dev


[Nix-dev] Going through hell with systemd timers

2016-01-28 Thread 4levels
Hi Nix-Devs,

yesterday I came to a point of really wanting to break something out of
sheer frustration over failing systemd timer calls.

I've set up a duplicity backup script over s3 that works flawlessly when
invoked from terminal, but fails miserably when being called from a timer.

I've tried everything I know, including but not limited to adding my full
user $PATH to the script, adding all possible related packages to the path
directive, .. nothing seems to work.

The duplicity error is very vague (BackendException) and when adding
maximum verbosity to the duplicity call ( -v9 ) I do get some error which
seems to be related to a very old duplicity bug.  Since duplicity uses
python (the version I could trace seems to be 2.7) with python-boto for the
s3 backend - the issue seems to be related to this, but I can't figure out
what could be the reason since all required packages are installed and
operational from the commandline.

Has anyone experience with running python-based code in systemd timer calls
(without being bitten)?

On top of that, GitHub went down for a couple of hours last night and to
make things even worse, NixOps cannot finish a deploy on any of the 5
machines I'm managing with it anymore, with a vague error message:

v-ams02...> updating GRUB 2 menu...
v-ams02...> Died at
/nix/var/nix/profiles/system/bin/switch-to-configuration line 264.
v-ams02...> error: unable to activate new configuration

Kind regards.

Erik

Duplicity error with maximum verbosity:
Backend error detail: Traceback (most recent call last):
  File
"/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/bin/.duplicity-wrapped",
line 1519, in <module>
with_tempdir(main)
  File
"/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/bin/.duplicity-wrapped",
line 1513, in with_tempdir
fn()
  File
"/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/bin/.duplicity-wrapped",
line 1354, in main
action = commandline.ProcessCommandLine(sys.argv[1:])
  File
"/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/lib/python2.7/site-packages/duplicity/commandline.py",
line 1070, in ProcessCommandLine
backup, local_pathname = set_backend(args[0], args[1])
  File
"/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/lib/python2.7/site-packages/duplicity/commandline.py",
line 961, in set_backend
globals.backend = backend.get_backend(bend)
  File
"/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/lib/python2.7/site-packages/duplicity/backend.py",
line 223, in get_backend
obj = get_backend_object(url_string)
  File
"/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/lib/python2.7/site-packages/duplicity/backend.py",
line 209, in get_backend_object
return factory(pu)
  File
"/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/lib/python2.7/site-packages/duplicity/backends/_boto_single.py",
line 161, in __init__
self.resetConnection()
  File
"/nix/store/ap2bv0p5m8napigg7f6yciap4nm61ap8-duplicity-0.7.02/lib/python2.7/site-packages/duplicity/backends/_boto_single.py",
line 187, in resetConnection
raise BackendException(err.message)


Re: [Nix-dev] Going through hell with systemd timers

2016-01-28 Thread 4levels
Hi Exi,

thank you for your reply.

This is the timer config I'm using (note that I'm starting this every 5
minutes to troubleshoot; it is supposed to run every 2 hours or so)

backup = {
  description = "Backup service";
  after = [ "network.target" "mysql.target" ];
  path = [ pkgs.procps pkgs.gawk pkgs.nettools pkgs.mysql pkgs.php
pkgs.duplicity pkgs.postfix ];
  script =
  ''
  ./s3Backup.sh
  '';
  startAt = "*-*-* *:0/5:00";


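For completeness, the full unit with the intended two-hour cadence might look roughly like this (a sketch only; calendar syntax per systemd.time(7)):

```nix
backup = {
  description = "Backup service";
  after = [ "network.target" "mysql.target" ];
  path = [ pkgs.procps pkgs.gawk pkgs.nettools pkgs.mysql pkgs.php
           pkgs.duplicity pkgs.postfix ];
  script = ''
    ./s3Backup.sh
  '';
  # "*-*-* 0/2:00:00" fires every two hours; the five-minute debug
  # cadence is "*-*-* *:0/5:00".
  startAt = "*-*-* 0/2:00:00";
};
```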

And the contents of the s3Backup.sh script:

s3Backup = name:
  ''
#!${pkgs.bash}/bin/bash

${builtins.readFile ./src/envrc}

# Your GPG key
GPG_KEY=

# export PATH="$PATH:/var/setuid-wrappers:/run/current-system/sw/bin:/run/current-system/sw/sbin"

# Set up some variables for logging
LOGFILE="/var/lib/projects/${name}/log/duplicity-backup.log"
DAILYLOGFILE="/var/lib/projects/${name}/log/duplicity-backup.daily.log"
FULLBACKLOGFILE="/var/lib/projects/${name}/log/duplicity-backup.full.log"
HOST=`hostname`
DATE=`date +%Y-%m-%d`
MAILADDR="d...@domain.com"
TODAY=$(date +%d%m%Y)

# The S3 destination followed by bucket name
DEST="s3://s3.amazonaws.com/projects-backup-eu-west/${name}"

is_running=$(ps -ef | grep duplicity  | grep python | wc -l)

if [ ! -f $FULLBACKLOGFILE ]; then
  touch $FULLBACKLOGFILE
fi

if [ $is_running -eq 0 ]; then
  # Clear the old daily log file
  cat /dev/null > ''${DAILYLOGFILE}

  # Trace function for logging, don't change this
  trace () {
stamp=`date +%Y-%m-%d_%H:%M:%S`
echo "$stamp: $*" >> ''${DAILYLOGFILE}
  }

  # Dump $PATH
  trace "Current PATH: $PATH"

  # How long to keep backups for
  OLDER_THAN="1M"

  # The source of your backup
  SOURCE=/var/lib/projects/${name}

  FULL=
  tail -1 ''${FULLBACKLOGFILE} | grep ''${TODAY} > /dev/null
  if [ $? -ne 0 -a $(date +%d) -eq 1 ]; then
FULL=full
  fi;

  trace "Backup for local filesystem started"

  trace "... removing old backups"

  duplicity remove-older-than ''${OLDER_THAN} ''${DEST}
--s3-use-new-style >> ''${DAILYLOGFILE} 2>&1

  trace "... backing up filesystem"

  duplicity \
''${FULL} \
-v9 \
--s3-use-new-style --s3-european-buckets --no-encryption \
--include=/var/lib/projects/${name}/data/backup \
--exclude=/** \
--allow-source-mismatch \
''${SOURCE} ''${DEST} >> ''${DAILYLOGFILE} 2>&1

  trace "Backup for local filesystem complete"
  trace ""

  # Send the daily log file by email
  BACKUPSTATUS=`cat "$DAILYLOGFILE" | grep Errors | awk '{ print $2 }'`
  if [ "$BACKUPSTATUS" != "0" ]; then
echo -e "Subject: Duplicity Backup Log for $HOST - $DATE -
${name}\n\n$(cat $DAILYLOGFILE)" | sendmail $MAILADDR
  elif [ "$FULL" = "full" ]; then
echo "$(date +%d%m%Y_%T) Full Back Done" >> $FULLBACKLOGFILE
  fi

  # Append the daily log file to the main log file
  cat "$DAILYLOGFILE" >> $LOGFILE

fi

unset AWS_ACCESS_KEY_ID
unset AWS_SECRET_ACCESS_KEY
unset PASSPHRASE
  '';



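As an aside, the `ps -ef | grep duplicity | grep python` instance check can misfire on unrelated processes; a sketch of a sturdier guard using flock(1), with an illustrative lock path:

```shell
#!/usr/bin/env bash
# Single-instance guard: take an exclusive, non-blocking lock on a file.
# The lock is held for the life of fd 9 and released when the script exits.
LOCKFILE="${TMPDIR:-/tmp}/s3backup.lock"   # illustrative path
exec 9>"$LOCKFILE"
if ! flock -n 9; then
  echo "another backup is still running, skipping"
  exit 0
fi
echo "lock acquired, running backup"
# ... the duplicity invocation would go here ...
```

A second copy of the script started while the first holds the lock takes the `flock -n` failure branch and exits immediately.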
On Thu, Jan 28, 2016 at 10:34 AM exi  wrote:

> Hi Erik,
>
> does duplicity use an ssh connection? Does it depend on your ssh
> passphrase to be present? Do you use a ssh agent?
>
> "BackendException" from the traceback looks more like a connection issue
> than a nix issue.
> Which user is running the timer command?
> Could you post your timer config?
>
> Regards,
>
> exi
>
>
> On 28.01.2016 09:07, 4levels wrote:
>
> Hi Nix-Devs,
>
> yesterday I came to a point of really wanting to break something out of
> sheer frustration over failing systemd timer calls.
>
> I've set up a duplicity backup script over s3 that works flawlessly when
> invoked from terminal, but fails miserably when being called from a timer.
>
> I've tried everything I know, including but not limited to adding my full
> user $PATH to the script, adding all possible related packages to the path
> directive, .. nothing seems to work.
>
> The duplicity error is very vague (BackendException) and when adding
> maximum verbosity to the duplicity call ( -v9 ) I do get some error which
> seems to be related to a very old duplicity bug.  Since duplicity uses
> python (the version I could trace seems to be 2.7) with python-boto for the
> s3 backend - the issue seems to be related to this, but I can't figure out
> what could be the reason since all required packages are installed and
> operational from the commandline.
>
> Has anyone experience with running python-based code in systemd timer
> calls (without being bitten)?
>
> On top of that, GitHub went down for a couple of hours last night and to
> make things even worse, NixOps cannot finish a deploy on any of the 5
> machines I'm managing with it anymore, with a vague error message:
>
> v-ams02...> updating GRUB 2 menu...
> v-ams02...> Died at
> 

Re: [Nix-dev] Going through hell with systemd timers

2016-01-28 Thread 4levels
Hi Zimbatm (is that your name :-)

I'm currently trying to debug by using printenv to view the differences in
both environments.  I'm not yet familiar with using nix-shell :-s
But since I cannot even deploy or rebuild switch anymore (see my other
email to this list) I'm pretty stuck.

I'm currently repartitioning and reinstalling the Vultr machines (*sigh*)
to see if that brings any changes..

Thank you for your pointer to the environment vars!

Erik

On Thu, Jan 28, 2016 at 10:56 AM zimbatm  wrote:

> One common error with system services is a missing environment variable.
> When testing from your shell you will have $HOME set, for example.
>
> On Thu, 28 Jan 2016 09:43 4levels <4lev...@gmail.com> wrote:
>
>> Hi Exi,
>>
>> thank you for your reply.
>>
>> This is the timer config I'm using (note that I'm starting this every 5
>> minutes to troubleshoot; it is supposed to run every 2 hours or so)
>>
>> backup = {
>>   description = "Backup service";
>>   after = [ "network.target" "mysql.target" ];
>>   path = [ pkgs.procps pkgs.gawk pkgs.nettools pkgs.mysql pkgs.php 
>> pkgs.duplicity pkgs.postfix ];
>>   script =
>>   ''
>>   ./s3Backup.sh
>>   '';
>>   startAt = "*-*-* *:0/5:00";
>>
>>
>>
>> And the contents of the s3Backup.sh script:
>>
>> s3Backup = name:
>>   ''
>> #!${pkgs.bash}/bin/bash
>>
>> ${builtins.readFile ./src/envrc}
>>
>> # Your GPG key
>> GPG_KEY=
>>
>> # export PATH="$PATH:/var/setuid-wrappers:/run/current-system/sw/bin:/run/current-system/sw/sbin"
>>
>> # Set up some variables for logging
>> LOGFILE="/var/lib/projects/${name}/log/duplicity-backup.log"
>> DAILYLOGFILE="/var/lib/projects/${name}/log/duplicity-backup.daily.log"
>> FULLBACKLOGFILE="/var/lib/projects/${name}/log/duplicity-backup.full.log"
>> HOST=`hostname`
>> DATE=`date +%Y-%m-%d`
>> MAILADDR="d...@domain.com"
>> TODAY=$(date +%d%m%Y)
>>
>> # The S3 destination followed by bucket name
>> DEST="s3://s3.amazonaws.com/projects-backup-eu-west/${name}"
>>
>> is_running=$(ps -ef | grep duplicity  | grep python | wc -l)
>>
>> if [ ! -f $FULLBACKLOGFILE ]; then
>>   touch $FULLBACKLOGFILE
>> fi
>>
>> if [ $is_running -eq 0 ]; then
>>   # Clear the old daily log file
>>   cat /dev/null > ''${DAILYLOGFILE}
>>
>>   # Trace function for logging, don't change this
>>   trace () {
>> stamp=`date +%Y-%m-%d_%H:%M:%S`
>> echo "$stamp: $*" >> ''${DAILYLOGFILE}
>>   }
>>
>>   # Dump $PATH
>>   trace "Current PATH: $PATH"
>>
>>   # How long to keep backups for
>>   OLDER_THAN="1M"
>>
>>   # The source of your backup
>>   SOURCE=/var/lib/projects/${name}
>>
>>   FULL=
>>   tail -1 ''${FULLBACKLOGFILE} | grep ''${TODAY} > /dev/null
>>   if [ $? -ne 0 -a $(date +%d) -eq 1 ]; then
>> FULL=full
>>   fi;
>>
>>   trace "Backup for local filesystem started"
>>
>>   trace "... removing old backups"
>>
>>   duplicity remove-older-than ''${OLDER_THAN} ''${DEST} 
>> --s3-use-new-style >> ''${DAILYLOGFILE} 2>&1
>>
>>   trace "... backing up filesystem"
>>
>>   duplicity \
>> ''${FULL} \
>> -v9 \
>> --s3-use-new-style --s3-european-buckets --no-encryption \
>> --include=/var/lib/projects/${name}/data/backup \
>> --exclude=/** \
>> --allow-source-mismatch \
>> ''${SOURCE} ''${DEST} >> ''${DAILYLOGFILE} 2>&1
>>
>>   trace "Backup for local filesystem complete"
>>   trace ""
>>
>>   # Send the daily log file by email
>>   BACKUPSTATUS=`cat "$DAILYLOGFILE" | grep Errors | awk '{ print $2 }'`
>>   if [ "$BACKUPSTATUS" != "0" ]; then
>> echo -e "Subject: Duplicity Backup Log for $HOST - $DATE - 
>> ${name}\n\n$(cat $DAILYLOGFILE)" | sendmail $MAILADDR
>>   elif [ "$FULL" = "full" ]; then
>> echo "$(date +%d%m%Y_%T) Full Back Done" >> $FULLBACKLOGFILE
>>   fi
>>
>>   # Append the daily log file to the main log file
>>   cat "$DAILYLOGFILE" >> $LOGFILE
>>
>> fi
>>
>> unset AWS_ACCESS_KEY_ID
>> unset AWS_SECRET_ACCESS_KEY
>> unset PASSPHRASE
>>   '';
>>
>>
>>
>> On Thu, Jan 28, 2016 at 10:34 AM exi  wrote:
>>
>>> Hi Erik,
>>>
>>> does duplicity use an ssh connection? Does it depend on your ssh
>>> passphrase to be present? Do you use a ssh agent?
>>>
>>> "BackendException" from the traceback looks more like a connection issue
>>> than a nix issue.
>>> Which user is running the timer command?
>>> Could you post your timer config?
>>>
>>> Regards,
>>>
>>> exi
>>>
>>>
>>> On 28.01.2016 09:07, 4levels wrote:
>>>
>>> Hi Nix-Devs,
>>>
>>> yesterday I came to a point of really wanting to break something out of
>>> sheer frustration over failing systemd timer calls.
>>>
>>> I've set up a duplicity backup script over s3 that works 

Re: [Nix-dev] Going through hell with systemd timers

2016-01-28 Thread Guillaume Maudoux (Layus)
Hi,

My experience with urlwatch was that the SSL_CERT_FILE env var was missing.
This may also be your issue if you are using the network.

It is however always possible to run the service manually, and see the logs.
A service declaration with a startAt attribute creates two units,
<name>.service and <name>.timer.
You can start your service with # systemctl start <name>.service
and see the logs in journald.
(No need to wait for the timer.)

I made urlwatch work with the following snippet:

  systemd.services.urlwatch = rec {
    description = "Run urlwatch (${startAt})";
    startAt = "hourly";
    environment = {
      inherit (config.environment.variables) SSL_CERT_FILE;
    };
    serviceConfig = {
      User = "layus"; # should use a user unit...
      ExecStart = "${urlwatch}/bin/urlwatch -v";
    };
  };

For debugging, I used:
# systemctl start urlwatch.service : Start the service once.
$ systemctl status urlwatch.service -l -n 1000 : See the systemd logs
for the last run, up to 1000 full lines.
# journalctl -xef --unit=urlwatch : Print all the logs; -f follows the
output in real time.

A very simple trick is indeed to dump the environment at the start of
the script.
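Concretely, something along these lines at the top of the unit's script (the dump path is illustrative):

```shell
#!/usr/bin/env bash
# Write the environment the unit actually sees to a file so it can be
# diffed against `printenv` from an interactive shell (path illustrative).
ENVDUMP="${TMPDIR:-/tmp}/service-env.txt"
env | sort > "$ENVDUMP"
echo "dumped environment to $ENVDUMP"
```

Comparing that file with the output of `printenv | sort` from a login shell quickly shows which variables (PATH, HOME, SSL_CERT_FILE, ...) the unit is missing.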

Layus.

On 28/01/16 10:59, 4levels wrote:

> Hi Zimbatm (is that your name :-)
>
> I'm currently trying to debug by using printenv to view the
> differences in both environments.  I'm not yet familiar with using
> nix-shell :-s
> But since I cannot even deploy or rebuild switch anymore (see my other
> email to this list) I'm pretty stuck.
>
> I'm currently repartitioning and reinstalling the Vultr machines
> (*sigh*) to see if that brings any changes..
>
> Thank you for your pointer to the environment vars!
>
> Erik
>
> On Thu, Jan 28, 2016 at 10:56 AM zimbatm wrote:
>
> One common error with system services is a missing environment
> variable. When testing from your shell you will have $HOME set, for
> example.
>
>
> On Thu, 28 Jan 2016 09:43 4levels <4lev...@gmail.com> wrote:
>
> Hi Exi,
>
> thank you for your reply.
>
> This is the timer config I'm using (note that I'm starting
> this every 5 minutes to troubleshoot; it is supposed to run
> every 2 hours or so)
>
> backup = {
>   description = "Backup service";
>   after = [ "network.target" "mysql.target" ];
>   path = [ pkgs.procps pkgs.gawk pkgs.nettools pkgs.mysql pkgs.php 
> pkgs.duplicity pkgs.postfix ];
>   script =
>   ''
>   ./s3Backup.sh
>   '';
>   startAt = "*-*-* *:0/5:00";
>
>
>
> And the contents of the s3Backup.sh script:
>
> s3Backup = name:
>   ''
> #!${pkgs.bash}/bin/bash
>
> ${builtins.readFile ./src/envrc}
>
> # Your GPG key
> GPG_KEY=
>
> # export PATH="$PATH:/var/setuid-wrappers:/run/current-system/sw/bin:/run/current-system/sw/sbin"
>
> # Set up some variables for logging
> LOGFILE="/var/lib/projects/${name}/log/duplicity-backup.log"
> DAILYLOGFILE="/var/lib/projects/${name}/log/duplicity-backup.daily.log"
> FULLBACKLOGFILE="/var/lib/projects/${name}/log/duplicity-backup.full.log"
> HOST=`hostname`
> DATE=`date +%Y-%m-%d`
> MAILADDR="d...@domain.com"
> TODAY=$(date +%d%m%Y)
>
> # The S3 destination followed by bucket name
> DEST="s3://s3.amazonaws.com/projects-backup-eu-west/${name}"
>
> is_running=$(ps -ef | grep duplicity  | grep python | wc -l)
>
> if [ ! -f $FULLBACKLOGFILE ]; then
>   touch $FULLBACKLOGFILE
> fi
>
> if [ $is_running -eq 0 ]; then
>   # Clear the old daily log file
>   cat /dev/null > ''${DAILYLOGFILE}
>
>   # Trace function for logging, don't change this
>   trace () {
> stamp=`date +%Y-%m-%d_%H:%M:%S`
> echo "$stamp: $*" >> ''${DAILYLOGFILE}
>   }
>
>   # Dump $PATH
>   trace "Current PATH: $PATH"
>
>   # How long to keep backups for
>   OLDER_THAN="1M"
>
>   # The source of your backup
>   SOURCE=/var/lib/projects/${name}
>
>   FULL=
>   tail -1 ''${FULLBACKLOGFILE} | grep ''${TODAY} > /dev/null
>   if [ $? -ne 0 -a $(date +%d) -eq 1 ]; then
> FULL=full
>   fi;
>
>   trace "Backup for local filesystem started"
>
>   trace "... removing old backups"
>
>   duplicity remove-older-than ''${OLDER_THAN} ''${DEST} 
> --s3-use-new-style >> ''${DAILYLOGFILE} 2>&1
>
>   trace "... backing up filesystem"
>
>   duplicity \
> 

Re: [Nix-dev] Going through hell with systemd timers

2016-01-28 Thread zimbatm
One common error with system services is a missing environment variable.
When testing from your shell you will have $HOME set, for example.

On Thu, 28 Jan 2016 09:43 4levels <4lev...@gmail.com> wrote:

> Hi Exi,
>
> thank you for your reply.
>
> This is the timer config I'm using (note that I'm starting this every 5
> minutes to troubleshoot; it is supposed to run every 2 hours or so)
>
> backup = {
>   description = "Backup service";
>   after = [ "network.target" "mysql.target" ];
>   path = [ pkgs.procps pkgs.gawk pkgs.nettools pkgs.mysql pkgs.php 
> pkgs.duplicity pkgs.postfix ];
>   script =
>   ''
>   ./s3Backup.sh
>   '';
>   startAt = "*-*-* *:0/5:00";
>
>
>
> And the contents of the s3Backup.sh script:
>
> s3Backup = name:
>   ''
> #!${pkgs.bash}/bin/bash
>
> ${builtins.readFile ./src/envrc}
>
> # Your GPG key
> GPG_KEY=
>
> # export PATH="$PATH:/var/setuid-wrappers:/run/current-system/sw/bin:/run/current-system/sw/sbin"
>
> # Set up some variables for logging
> LOGFILE="/var/lib/projects/${name}/log/duplicity-backup.log"
> DAILYLOGFILE="/var/lib/projects/${name}/log/duplicity-backup.daily.log"
> FULLBACKLOGFILE="/var/lib/projects/${name}/log/duplicity-backup.full.log"
> HOST=`hostname`
> DATE=`date +%Y-%m-%d`
> MAILADDR="d...@domain.com"
> TODAY=$(date +%d%m%Y)
>
> # The S3 destination followed by bucket name
> DEST="s3://s3.amazonaws.com/projects-backup-eu-west/${name}"
>
> is_running=$(ps -ef | grep duplicity  | grep python | wc -l)
>
> if [ ! -f $FULLBACKLOGFILE ]; then
>   touch $FULLBACKLOGFILE
> fi
>
> if [ $is_running -eq 0 ]; then
>   # Clear the old daily log file
>   cat /dev/null > ''${DAILYLOGFILE}
>
>   # Trace function for logging, don't change this
>   trace () {
> stamp=`date +%Y-%m-%d_%H:%M:%S`
> echo "$stamp: $*" >> ''${DAILYLOGFILE}
>   }
>
>   # Dump $PATH
>   trace "Current PATH: $PATH"
>
>   # How long to keep backups for
>   OLDER_THAN="1M"
>
>   # The source of your backup
>   SOURCE=/var/lib/projects/${name}
>
>   FULL=
>   tail -1 ''${FULLBACKLOGFILE} | grep ''${TODAY} > /dev/null
>   if [ $? -ne 0 -a $(date +%d) -eq 1 ]; then
> FULL=full
>   fi;
>
>   trace "Backup for local filesystem started"
>
>   trace "... removing old backups"
>
>   duplicity remove-older-than ''${OLDER_THAN} ''${DEST} 
> --s3-use-new-style >> ''${DAILYLOGFILE} 2>&1
>
>   trace "... backing up filesystem"
>
>   duplicity \
> ''${FULL} \
> -v9 \
> --s3-use-new-style --s3-european-buckets --no-encryption \
> --include=/var/lib/projects/${name}/data/backup \
> --exclude=/** \
> --allow-source-mismatch \
> ''${SOURCE} ''${DEST} >> ''${DAILYLOGFILE} 2>&1
>
>   trace "Backup for local filesystem complete"
>   trace ""
>
>   # Send the daily log file by email
>   BACKUPSTATUS=`cat "$DAILYLOGFILE" | grep Errors | awk '{ print $2 }'`
>   if [ "$BACKUPSTATUS" != "0" ]; then
> echo -e "Subject: Duplicity Backup Log for $HOST - $DATE - 
> ${name}\n\n$(cat $DAILYLOGFILE)" | sendmail $MAILADDR
>   elif [ "$FULL" = "full" ]; then
> echo "$(date +%d%m%Y_%T) Full Back Done" >> $FULLBACKLOGFILE
>   fi
>
>   # Append the daily log file to the main log file
>   cat "$DAILYLOGFILE" >> $LOGFILE
>
> fi
>
> unset AWS_ACCESS_KEY_ID
> unset AWS_SECRET_ACCESS_KEY
> unset PASSPHRASE
>   '';
>
>
>
> On Thu, Jan 28, 2016 at 10:34 AM exi  wrote:
>
>> Hi Erik,
>>
>> does duplicity use an ssh connection? Does it depend on your ssh
>> passphrase to be present? Do you use a ssh agent?
>>
>> "BackendException" from the traceback looks more like a connection issue
>> than a nix issue.
>> Which user is running the timer command?
>> Could you post your timer config?
>>
>> Regards,
>>
>> exi
>>
>>
>> On 28.01.2016 09:07, 4levels wrote:
>>
>> Hi Nix-Devs,
>>
>> yesterday I came to a point of really wanting to break something out of
>> sheer frustration over failing systemd timer calls.
>>
>> I've set up a duplicity backup script over s3 that works flawlessly when
>> invoked from terminal, but fails miserably when being called from a timer.
>>
>> I've tried everything I know, including but not limited to adding my full
>> user $PATH to the script, adding all possible related packages to the path
>> directive, .. nothing seems to work.
>>
>> The duplicity error is very vague (BackendException) and when adding
>> maximum verbosity to the duplicity call ( -v9 ) I do get some error which
>> seems to be related to a very old duplicity bug.  Since duplicity uses
>> python (the version I could trace seems to be 2.7) with python-boto for the
>> s3 backend - the issue seems to be related to this, but I can't figure out
>> 

Re: [Nix-dev] Going through hell with systemd timers

2016-01-28 Thread 4levels
Dear Guillaume,

you saved my day (and sleepless last night)!
This line worked all the magic for me:

environment = {
  inherit (config.environment.variables) SSL_CERT_FILE;
};
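(One way to confirm the unit really receives the variable is to inspect it; a sketch, assuming the service is named `backup`:)

```shell
# Show the Environment= setting systemd applies to the unit
# ("backup" is an assumed unit name).
systemctl show backup.service --property=Environment
```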

Make sure to pass through our office whenever you're near Gent, Belgium,
we're keeping a bottle of champagne chilled for you ;-)

And again nix-dev proves to be the most valuable resource in our NixOS /
NixOps experience!

Kind regards to you all, and as before, keep up the amazing work and great
attitude..


Erik

On Thu, Jan 28, 2016 at 11:19 AM Guillaume Maudoux (Layus) <
layus...@gmail.com> wrote:

> Hi,
>
> My experience with urlwatch was that the SSL_CERT_FILE env var was missing.
> This may also be your issue if you are using the network.
>
> It is however always possible to run the service manually, and see the
> logs.
> A service declaration with a startAt attribute creates two units, <name>.service
> and <name>.timer.
> You can start your service with # systemctl start <name>.service and
> see the logs in journald.
> (No need to wait for the timer.)
>
> I made urlwatch work with the following snippet:
>
>   systemd.services.urlwatch = rec {
> description = "Run urlwatch (${startAt})";
> startAt = "hourly";
> environment = {
>   inherit (config.environment.variables) SSL_CERT_FILE;
> };
>
> serviceConfig = {
>   User = "layus"; # should use a user unit...
>   ExecStart = "${urlwatch}/bin/urlwatch -v";
> };
>   };
>
> For debugging, I used:
> # systemctl start urlwatch.service : Start the service once.
> $ systemctl status urlwatch.service -l -n 1000 : See the systemd logs for
> the last run, up to 1000 full lines.
> # journalctl -xef --unit=urlwatch : Print all the logs, -f follows the
> output in real time.
>
> A very simple trick is indeed to dump the environment at the start of the
> script.
>
> Layus.
>
> On 28/01/16 10:59, 4levels wrote:
>
> Hi Zimbatm (is that your name :-)
>
> I'm currently trying to debug by using printenv to view the differences in
> both environments.  I'm not yet familiar with using nix-shell :-s
> But since I cannot even deploy or rebuild switch anymore (see my other
> email to this list) I'm pretty stuck.
>
> I'm currently repartitioning and reinstalling the Vultr machines (*sigh*)
> to see if that brings any changes..
>
> Thank you for your pointer to the environment vars!
>
> Erik
>
> On Thu, Jan 28, 2016 at 10:56 AM zimbatm <zimb...@zimbatm.com> wrote:
>
>> One common error with system services is a missing environment variable.
>> When testing from your shell you will have $HOME set, for example.
>>
>> On Thu, 28 Jan 2016 09:43 4levels <4lev...@gmail.com>
>> wrote:
>>
>>> Hi Exi,
>>>
>>> thank you for your reply.
>>>
>>> This is the timer config I'm using (note that I'm starting this every 5
>>> minutes to troubleshoot; it is supposed to run every 2 hours or so)
>>>
>>> backup = {
>>>   description = "Backup service";
>>>   after = [ "network.target" "mysql.target" ];
>>>   path = [ pkgs.procps pkgs.gawk pkgs.nettools pkgs.mysql pkgs.php 
>>> pkgs.duplicity pkgs.postfix ];
>>>   script =
>>>   ''
>>>   ./s3Backup.sh
>>>   '';
>>>   startAt = "*-*-* *:0/5:00";
>>>
>>>
>>>
>>> And the contents of the s3Backup.sh script:
>>>
>>> s3Backup = name:
>>>   ''
>>> #!${pkgs.bash}/bin/bash
>>>
>>> ${builtins.readFile ./src/envrc}
>>>
>>> # Your GPG key
>>> GPG_KEY=
>>>
>>> # export PATH="$PATH:/var/setuid-wrappers:/run/current-system/sw/bin:/run/current-system/sw/sbin"
>>>
>>> # Set up some variables for logging
>>> LOGFILE="/var/lib/projects/${name}/log/duplicity-backup.log"
>>> DAILYLOGFILE="/var/lib/projects/${name}/log/duplicity-backup.daily.log"
>>> FULLBACKLOGFILE="/var/lib/projects/${name}/log/duplicity-backup.full.log"
>>> HOST=`hostname`
>>> DATE=`date +%Y-%m-%d`
>>> MAILADDR="d...@domain.com"
>>> TODAY=$(date +%d%m%Y)
>>>
>>> # The S3 destination followed by bucket name
>>> DEST="s3://s3.amazonaws.com/projects-backup-eu-west/${name}"
>>>
>>> is_running=$(ps -ef | grep duplicity  | grep python | wc -l)
>>>
>>> if [ ! -f $FULLBACKLOGFILE ]; then
>>>   touch $FULLBACKLOGFILE
>>> fi
>>>
>>> if [ $is_running -eq 0 ]; then
>>>   # Clear the old daily log file
>>>   cat /dev/null > ''${DAILYLOGFILE}
>>>
>>>   # Trace function for logging, don't change this
>>>   trace () {
>>> stamp=`date +%Y-%m-%d_%H:%M:%S`
>>> echo "$stamp: $*" >> ''${DAILYLOGFILE}
>>>   }
>>>
>>>   # Dump $PATH
>>>   trace "Current PATH: $PATH"
>>>
>>>   # How long to keep backups for
>>>   OLDER_THAN="1M"
>>>
>>>   # The source of your backup
>>>   SOURCE=/var/lib/projects/${name}
>>>
>>>   FULL=
>>>   tail -1 ''${FULLBACKLOGFILE} | grep ''${TODAY} > /dev/null
>>>   if [ $? -ne 0 -a $(date +%d) -eq 1 ]; then
>>> FULL=full