Thus said David Mason on Fri, 03 Oct 2014 12:49:17 -0400:

>  3) It seems like a lot more overhead, compared to a local run of fossil

I'm not sure why you need to parse anything. Here is a low-overhead
script that detects updates to a remote repository:

#!/bin/sh
# Fetch the trunk check-in feed and compare it against the last copy we
# saved; exit 0 only when the feed has changed (i.e. new check-ins).
OLD=$HOME/old.rss
NEW=$HOME/new.rss
touch "$OLD"
# Drop the pubDate elements so the feed only differs when the set of
# check-ins actually changes.
curl -s 'http://www.fossil-scm.org/index.html/timeline.rss?y=ci&tag=trunk' |
  sed -e '/pubDate/d' > "$NEW"
diff "$OLD" "$NEW" >/dev/null || {
  cp -f "$NEW" "$OLD"
  echo new check-ins on trunk
  exit 0
}
echo no new check-ins on trunk
exit 1

But this raises the question: if you're just trying to determine when
to update, why not just run ``fossil update'' on a schedule? You
mentioned that fossil knows when it has ``work to do'', but I'm not
sure I understand exactly what you mean by that. Fossil certainly
knows when it should transfer content from the remote side to the
local side and vice versa. It also knows when it should merge files
into your working checkout (because ``fossil update'' causes this to
happen).
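
If scheduling is all you want, a plain cron entry will do it. This is
only a sketch; the path $HOME/myproject is a placeholder for wherever
your working checkout actually lives:

# run hourly: pull and merge any new trunk work into the checkout
0 * * * *  cd $HOME/myproject && fossil update >/dev/null 2>&1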

Here was the original example:

fossil update -q && fossil update 2>&1 | mail -s 'Fossil update' m...@he.re

Are you simply looking for a way to be notified via email when there are
changes that have been updated into your working checkout? If so, I
think the above script could be substituted for ``fossil update -q'':

shouldiupdate.sh && fossil update 2>&1 | mail -s 'Fossil update' m...@he.re
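
That line drops straight into cron as well. Here the script location
and the checkout path are just placeholders, and the address is the
one from your example:

# run hourly: mail only when the feed shows new trunk check-ins
0 * * * *  cd $HOME/myproject && $HOME/bin/shouldiupdate.sh && fossil update 2>&1 | mail -s 'Fossil update' m...@he.re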

Andy
--
TAI64 timestamp: 40000000542ee560