Bug#737121: ikiwiki: [PATCH] Implement configuration option to set the user agent string for outbound HTTP requests

2014-01-31 Thread Tuomas Jormola
Hi,

On Thu, Jan 30, 2014 at 02:47:41PM -0400, Joey Hess wrote:
 This looks good, but have you checked that all of the places
 $config{useragent} is passed to do the right thing when it is undef?
Yes I have.

Case 1: IkiWiki::useragent() using LWP::UserAgent->new().
The $agent variable is taken from the option hash passed to LWP::UserAgent->new():
https://metacpan.org/source/GAAS/libwww-perl-6.05/lib/LWP/UserAgent.pm#L28
The value of $agent is checked with defined(); if it is undef, the default
user agent string is used, as if no agent key had been passed to
LWP::UserAgent->new() at all:
https://metacpan.org/source/GAAS/libwww-perl-6.05/lib/LWP/UserAgent.pm#L112

Case 2: Where LWPx::ParanoidAgent->new() is being used.
LWPx::ParanoidAgent is a subclass of LWP::UserAgent, so the option hash
passed to LWPx::ParanoidAgent->new() is handled exactly as in LWP::UserAgent->new():
https://metacpan.org/source/SAXJAZMAN/LWPx-ParanoidAgent-1.10/lib/LWPx/ParanoidAgent.pm#L22
https://metacpan.org/source/SAXJAZMAN/LWPx-ParanoidAgent-1.10/lib/LWPx/ParanoidAgent.pm#L30

On a related note, should the cookie jar option also be passed when using
LWPx::ParanoidAgent->new()?

So I think it's ok.
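For illustration, the relevant behaviour can be sketched like this (a simplified paraphrase of the LWP::UserAgent constructor logic linked above, not the actual library code; the default version string is just an example):

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Simplified paraphrase of LWP::UserAgent->new()'s handling of the
# agent option: an explicit undef behaves exactly like omitting the key.
sub pick_agent {
	my %opts = @_;
	my $agent = delete $opts{agent};
	# defined() check, as in LWP::UserAgent: undef falls through
	# to the class default.
	$agent = "libwww-perl/6.05" unless defined $agent;
	return $agent;
}

print pick_agent(agent => undef), "\n";         # class default
print pick_agent(), "\n";                       # class default
print pick_agent(agent => "MyWiki/1.0"), "\n";  # custom string
```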

br,
Tuomas

 
 -- 
 see shy jo




signature.asc
Description: Digital signature


Bug#737121: ikiwiki: [PATCH] Implement configuration option to set the user agent string for outbound HTTP requests

2014-01-30 Thread Tuomas Jormola
Package: ikiwiki
Version: 3.20140125
Severity: wishlist

By default, LWP::UserAgent, which IkiWiki uses to perform outbound HTTP
requests, sends "libwww-perl/<version>" as the User-Agent header of each
request. Some blogging platforms have blacklisted this user agent and won't
serve any content to clients sending it. With the new IkiWiki configuration
option useragent it is now possible to define a custom string to use as the
value of the User-Agent header.
---
 IkiWiki.pm   |9 +
 IkiWiki/Plugin/openid.pm |2 +-
 IkiWiki/Plugin/pinger.pm |2 +-
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/IkiWiki.pm b/IkiWiki.pm
index b7080bb..eb48096 100644
--- a/IkiWiki.pm
+++ b/IkiWiki.pm
@@ -527,6 +527,14 @@ sub getsetup () {
 		safe => 0, # hooks into perl module internals
 		rebuild => 0,
 	},
+	useragent => {
+		type => "string",
+		default => undef,
+		example => "Wget/1.13.4 (linux-gnu)",
+		description => "set custom user agent string for outbound HTTP requests e.g. when fetching aggregated RSS feeds",
+		safe => 0,
+		rebuild => 0,
+	},
 }
 
 sub defaultconfig () {
@@ -2301,6 +2309,7 @@ sub useragent () {
 	return LWP::UserAgent->new(
 		cookie_jar => $config{cookiejar},
 		env_proxy => 1, # respect proxy env vars
+		agent => $config{useragent},
 	);
 }
 
diff --git a/IkiWiki/Plugin/openid.pm b/IkiWiki/Plugin/openid.pm
index d369e30..3b96e4b 100644
--- a/IkiWiki/Plugin/openid.pm
+++ b/IkiWiki/Plugin/openid.pm
@@ -238,7 +238,7 @@ sub getobj ($$) {
 	my $ua;
 	eval q{use LWPx::ParanoidAgent};
 	if (! $@) {
-		$ua=LWPx::ParanoidAgent->new;
+		$ua=LWPx::ParanoidAgent->new(agent => $config{useragent});
 	}
 	else {
 		$ua=useragent();
diff --git a/IkiWiki/Plugin/pinger.pm b/IkiWiki/Plugin/pinger.pm
index fb0f3ba..b2d54af 100644
--- a/IkiWiki/Plugin/pinger.pm
+++ b/IkiWiki/Plugin/pinger.pm
@@ -72,7 +72,7 @@ sub ping {
 	my $ua;
 	eval q{use LWPx::ParanoidAgent};
 	if (!$@) {
-		$ua=LWPx::ParanoidAgent->new;
+		$ua=LWPx::ParanoidAgent->new(agent => $config{useragent});
 	}
 	else {
 		eval q{use LWP};
-- 
1.7.9.5
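With the patch applied, the option would be set in the setup file like this (the value is just an example; leaving the option unset keeps the stock libwww-perl agent):

```perl
# ikiwiki.setup fragment (hypothetical wiki configuration)
useragent => 'Wget/1.13.4 (linux-gnu)',
```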


-- 
To UNSUBSCRIBE, email to debian-bugs-dist-requ...@lists.debian.org
with a subject of unsubscribe. Trouble? Contact listmas...@lists.debian.org



Bug#734529: libbluray: Please update to 0.5.0

2014-01-07 Thread Tuomas Jormola
Package: libbluray
Version: 1:0.4.0-1
Severity: wishlist

Hi!

Could you upload the latest upstream 0.5.0, please?
http://git.videolan.org/?p=libbluray.git;a=blob_plain;f=ChangeLog;hb=8711586f343390f44f6d6d36b725f558a4e7c43e

br,
Tuomas Jormola





Bug#724669: gst-plugins-bad1.0: Please tighten build dependency on libopenal-dev

2013-09-26 Thread Tuomas Jormola
Source: gst-plugins-bad1.0
Version: 1.2.0-1
Severity: minor

Hi,

I was backporting the gstreamer 1.2.0 packages from Debian/sid to
Ubuntu/precise and noticed that the build of OpenAL support in
gst-plugins-bad1.0 failed because the stock libopenal-dev was too old,
but this wasn't caught by dpkg-checkbuilddeps. Please make the build
dependency versioned, libopenal-dev (>= 1.14), since that's what the
configure script checks for. Thanks
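The change would amount to something like this in debian/control (surrounding build-dependencies abbreviated, exact version string per the configure check):

```
Build-Depends: ...,
               libopenal-dev (>= 1.14),
               ...
```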



-- System Information:
Debian Release: wheezy/sid
  APT prefers precise-updates
  APT policy: (900, 'precise-updates'), (900, 'precise-security'), (500, 'precise')
Architecture: amd64 (x86_64)

Kernel: Linux 3.2.0-52-generic (SMP w/4 CPU cores)
Locale: LANG=fi_FI.UTF-8, LC_CTYPE=fi_FI.UTF-8 (charmap=UTF-8)
Shell: /bin/sh linked to /bin/bash





Bug#611068: ikiwiki: Create index pages for empty directories in underlays

2011-01-25 Thread Tuomas Jormola
Package: ikiwiki
Version: 3.20101023
Severity: wishlist


Hi,

Is the creation of index pages by the autoindex plugin limited to files under
srcdir for some justified reason? Currently autoindex doesn't create index
pages for empty directories in underlays. If index pages aren't supposed to
be created for empty directories under underlay dirs, why iterate over the
underlay dirs in the first place (the for loop at line 41 of autoindex.pm)?
The attached patch corrects this. Please consider applying it if this is OK.

br,
Tuomas Jormola
From 5d18914e77d852fc10c09f9903dba61a050ddeac Mon Sep 17 00:00:00 2001
From: Tuomas Jormola t...@solitudo.net
Date: Tue, 25 Jan 2011 11:46:57 +0200
Subject: [PATCH] Allow creation of index pages outside of srcdir

After this change autoindex creates index pages also for empty directories
included in underlays.
---
 IkiWiki/Plugin/autoindex.pm |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/IkiWiki/Plugin/autoindex.pm b/IkiWiki/Plugin/autoindex.pm
index 11595e2..f45ab9e 100644
--- a/IkiWiki/Plugin/autoindex.pm
+++ b/IkiWiki/Plugin/autoindex.pm
@@ -57,7 +57,7 @@ sub refresh () {
 			if (! -d _) {
 				$pages{pagename($f)}=1;
 			}
-			elsif ($dir eq $config{srcdir}) {
+			else {
 				$dirs{$f}=1;
 			}
 		}
-- 
1.7.0.4



Bug#611068: ikiwiki: Create index pages for empty directories in underlays

2011-01-25 Thread Tuomas Jormola
On Tue, Jan 25, 2011 at 01:55:10PM -0400, Joey Hess wrote:
 Tuomas Jormola wrote:
  Is creation of index pages by the autoindex plugin limited only to files
  under srcdir by some justified reason? Currently autoindex doesn't create
  index pages for empty directories of underlays.
 
 autoindex does not create index pages for empty directories *anywhere*. 
 The directory has to contain some files for an index page to be created
 for it.
Sorry, I didn't mean to refer to empty directories but to directories
containing subdirectories, non-index pages, or other files, i.e. directories
needing an automatically generated index page. Bad wording on my part.

 It seems wrong for a missing file in an underlay to result in
 every wiki using that underlay having an index page added. Ikiwiki's own
 basewiki has a wikiicons subdirectory without an index page. And
 ikiwiki cannot modify the underlay to have autoindex add the file to it.
Consider the following example. It models my setup where I encountered this
issue.

The file $srcdir/software/foo/index.mdwn
is the index/home page for a piece of software called Foo stored in the wiki.
This page says something about Foo but is not part of the Foo distribution.

The Foo software is located in another Git repo that is totally independent of
the wiki repo. If you clone it, you'll get the following kind of structure.

1) README for Foo in MarkDown format
   $foorepodir/README.mdwn
2) Sources for Foo. It's not really relevant what you have here.
   $foorepodir/src/*
3) API docs autogenerated from the sources in MarkDown format
   $foorepodir/api/ikiwiki/*/*.mdwn (e.g.
   $foorepodir/api/ikiwiki/someclass/index.mdwn)

Now I want to have Foo docs in MarkDown as part of the wiki, but I don't want to
check them in the wiki repo and maintain these autogenerated files manually for
the wiki. Instead, I want to use an underlay to get always up-to-date versions
directly from the Foo Git repo.

So the Git repo $wikirepodir for the wiki is structured like this (for the
relevant parts)
1) $srcdir is $wikirepodir/site, i.e. I have
   $wikirepodir/site/software/foo/index.mdwn
2) In ikiwiki.setup, the dirs $wikirepodir/underlays/* are added to the list
   $config{add_underlays}
3) The Foo Git repo is added as a submodule to the wiki Git repo. The submodule
   dir is $wikirepodir/underlays/foo/software/foo, so you would have the file
   $wikirepodir/underlays/foo/software/foo/README.mdwn etc. as described above.
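In ikiwiki.setup terms, the layout above corresponds to something like this (all paths are placeholders for the actual $wikirepodir locations):

```perl
# ikiwiki.setup fragment for the example layout (hypothetical paths)
srcdir => '/path/to/wikirepo/site',
add_underlays => [
	'/path/to/wikirepo/underlays/foo',
],
```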

Now when the wiki is built, I get exactly what I want.

$destdir/software/foo/index.html
$destdir/software/foo/README/index.html
$destdir/software/foo/src/*
$destdir/software/foo/api/ikiwiki/**/index.html

Except that since autoindex doesn't work for underlays, I don't get e.g.
$destdir/software/foo/api/index.html or $destdir/software/foo/src/index.html.
Also, the parentlinks section on pages generated from the underlay MarkDown
sources (e.g. $destdir/software/foo/api/ikiwiki/someclass/index.html) is
broken, since the wiki doesn't contain a parent page software/foo/api: there
is no such page in the Foo Git repo used as the underlay, nor in the wiki Git
repo, because autoindex doesn't currently create it.

But if IkiWiki were modified with the patch attached to this bug report, these
index pages would be created by the autoindex plugin and committed to the wiki
Git repo, i.e. $wikirepodir/software/foo/src/index.mdwn and
$wikirepodir/software/foo/api/index.mdwn. In other words, the directory
structure from the underlay would be copied into the wiki Git repo along with
the autogenerated index pages. As you said, it's impossible to alter the
underlay directories from within IkiWiki, of course.

I hope this clarifies the issue, which may well be a non-issue from your point
of view. But maybe this kind of index generation for underlays could be made a
configurable option of the autoindex plugin? It should certainly be disabled
by default so as not to introduce possibly unwanted side effects on upgrade.

  If index pages aren't supposed to be created for empty directories under
  underlay dirs, why iterate over the underlay dirs in the first place (the
  for loop at line 41 of autoindex.pm)?
 
 Because it needs to know if a given index page is provided by an
 underlay, so as to avoid replacing such a page with one it creates.
Ah, I see. Of course.

Br,
Tuomas

 
 -- 
 see shy jo






Bug#603026: mpd: Please enable pipe output plugin

2010-11-10 Thread Tuomas Jormola
Package: mpd
Version: 0.15.4-1ubuntu3
Severity: wishlist

Hi,

Please enable the pipe output plugin (--enable-pipe-output). I'd like to
forward the raw PCM audio stream from mpd to liquidsoap, encode it to
Ogg/FLAC, and then push the FLAC stream to Icecast2 for client consumption.
With the pipe output plugin the first step should be pretty easy to do.






Bug#602012: git: Use author date instead of commiter date when resolving timestamps of files, patch included

2010-10-31 Thread Tuomas Jormola
Package: ikiwiki
Version: 3.20101023
Severity: minor


Hi,

Git stores two different timestamps: the committer date and the author date.
The committer date represents the time the commit object was created, and the
author date the time the change was originally authored. Usually the two are
the same, but they differ if the history of the repository is rewritten using
git rebase: after the rebase point, every rewritten commit gets the current
date as its committer date.

How does this affect IkiWiki? Currently, when using git as the RCS backend,
IkiWiki uses the committer date as the timestamp of each file. If a rebase
happens in the IkiWiki repo, every file touched after the rebase point gets a
ctime equal to the date the rebase occurred, and thus e.g. the last-change
information printed at the bottom of each page in the default template is
lost. This is wrong, since the content of the commits remains unchanged. The
author date should be used instead: it stays stable even when the commit that
introduced the change is rewritten, and it always shows when the content of a
file was actually changed, which is what IkiWiki should be concerned with,
not the commit date. The default git log and git whatchanged outputs also
display only the author date, not the committer date. It also looks like the
author date is already used correctly in the code dealing with recentchanges
in the Git plugin.

This change is very simple to make: just change the log format pattern to use
%at instead of %ct. Patch included. Thanks.
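The difference is easy to demonstrate in a scratch repository (assumes git is installed; the dates are arbitrary examples):

```shell
# Create a throwaway repo with distinct author and committer dates.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
echo hi > page.mdwn
git add page.mdwn
GIT_AUTHOR_DATE='2010-01-01 00:00:00 +0000' \
GIT_COMMITTER_DATE='2010-06-01 00:00:00 +0000' \
git -c user.name=test -c user.email=test@example.com commit -q -m test

git log -1 --pretty=format:%at; echo   # author date (epoch):    1262304000
git log -1 --pretty=format:%ct; echo   # committer date (epoch): 1275350400
```

A rebase rewrites %ct to the time of the rebase, while %at stays put.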
From 5c8451fc9b70f3651e805ff082abfe718b81767f Mon Sep 17 00:00:00 2001
From: Tuomas Jormola t...@solitudo.net
Date: Sun, 31 Oct 2010 19:53:06 +0200
Subject: Use author date instead of commit date


Signed-off-by: Tuomas Jormola t...@solitudo.net
---
 IkiWiki/Plugin/git.pm |2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/IkiWiki/Plugin/git.pm b/IkiWiki/Plugin/git.pm
index f5101d9..3fa29b2 100644
--- a/IkiWiki/Plugin/git.pm
+++ b/IkiWiki/Plugin/git.pm
@@ -688,7 +688,7 @@ sub findtimes ($$) {
 	if (! keys %time_cache) {
 		my $date;
 		foreach my $line (run_or_die('git', 'log',
-				'--pretty=format:%ct',
+				'--pretty=format:%at',
 				'--name-only', '--relative')) {
 			if (! defined $date && $line =~ /^(\d+)$/) {
 				$date=$line;
-- 
1.7.0.4



Bug#601912: ikiwiki: htmltidy plugin: missing checkconfig hook registration, patch included

2010-10-30 Thread Tuomas Jormola
Package: ikiwiki
Version: 3.20101023
Severity: minor


The htmltidy plugin is missing the checkconfig hook registration. Because of
this, ikiwiki prints numerous warnings if the setup file does not contain the
'htmltidy' option. The problem is easy to fix; someone just forgot to add the
hook() call.
From 9d1681dfd04b72725c52b65de58e8dd5d82ec3f5 Mon Sep 17 00:00:00 2001
From: Tuomas Jormola t...@solitudo.net
Date: Sun, 31 Oct 2010 03:02:38 +0200
Subject: Added missing hook registration for checkconfig


Signed-off-by: Tuomas Jormola t...@solitudo.net
---
 IkiWiki/Plugin/htmltidy.pm |1 +
 1 files changed, 1 insertions(+), 0 deletions(-)

diff --git a/IkiWiki/Plugin/htmltidy.pm b/IkiWiki/Plugin/htmltidy.pm
index 185d01d..1108aeb 100644
--- a/IkiWiki/Plugin/htmltidy.pm
+++ b/IkiWiki/Plugin/htmltidy.pm
@@ -15,6 +15,7 @@ use IPC::Open2;
 sub import {
 	hook(type => "getsetup", id => "tidy", call => \&getsetup);
 	hook(type => "sanitize", id => "tidy", call => \&sanitize);
+	hook(type => "checkconfig", id => "tidy", call => \&checkconfig);
 }
 
 sub getsetup () {
-- 
1.7.0.4



Bug#601915: ikiwiki: cgi plugin: run format hooks in CGI mode, patch included

2010-10-30 Thread Tuomas Jormola
Package: ikiwiki
Version: 3.20101023
Severity: wishlist


Hi,

Is there any particular reason why format hooks are not run against the
content generated by the IkiWiki CGI? For instance, I'd like to include some
JavaScript code fetched from an external JavaScript file in the CGI content
as well, without forking any templates just for this cause. So I wrote a
plugin that rewrites the HTML content to include proper script elements in
the document. It works great for static files, but not for the content output
by IkiWiki in CGI mode.

If there are no obstacles to this idea that I'm not aware of, please consider
merging the attached patch, which enables format hooks for the CGI. Thanks.
From 5943cc18abf19b85e91fa7ad20ebd7a9f7375413 Mon Sep 17 00:00:00 2001
From: Tuomas Jormola t...@solitudo.net
Date: Sun, 31 Oct 2010 03:36:20 +0200
Subject: Run format hooks in CGI mode


Signed-off-by: Tuomas Jormola t...@solitudo.net
---
 IkiWiki/CGI.pm |9 -
 1 files changed, 8 insertions(+), 1 deletions(-)

diff --git a/IkiWiki/CGI.pm b/IkiWiki/CGI.pm
index f2a32a9..b0fd762 100644
--- a/IkiWiki/CGI.pm
+++ b/IkiWiki/CGI.pm
@@ -46,7 +46,14 @@ sub showform (;@) {
 	my $cgi=shift;
 
 	printheader($session);
-	print misctemplate($form->title, $form->render(submit => $buttons), @_);
+	my $content = misctemplate($form->title, $form->render(submit => $buttons), @_);
+	run_hooks(format => sub {
+		$content=shift->(
+			page => undef,
+			content => $content,
+		);
+	});
+	print $content;
 }
 
 sub redirect ($$) {
-- 
1.7.0.4



Bug#598415: Duplicated script/ element for ikiwiki.js

2010-09-28 Thread Tuomas Jormola
Package: ikiwiki
Version: 3.20100815
Severity: minor


Hi,

If both the relativedate and toggle plugins are enabled and features of both
plugins are used on a wiki page, a script tag for ikiwiki.js is rendered
twice in the generated HTML for the page.

It looks like both plugins inject the script tags directly into the HTML
using a format() hook. Of course, neither knows whether another plugin has
already rewritten the HTML document with the same content it is about to
write, so a duplicate script tag is added to the document.

I guess a more elegant solution would be for IkiWiki to provide a framework
for this kind of thing. Plugins would register the external resources they
require (i.e. JavaScript files and style sheets). IkiWiki would guarantee
that each named resource, identified by its type (JS or CSS) and file name,
is rendered inside the head tag only once, using the correct tag for the
resource type (<script> for JS and <link> for CSS). Besides the Perl code
that handles page rendering, this would probably require some changes to the
default templates, and that might break some custom templates, though.
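Such a framework could look roughly like this (a hypothetical sketch only: none of these functions exist in IkiWiki, and all the names are made up for illustration):

```perl
use strict;
use warnings;

# Registered resources, keyed by "type/name" so duplicate registrations
# from different plugins collapse into one entry.
my %resources;

sub register_resource {
	my ($type, $name) = @_;   # $type is "js" or "css"
	$resources{"$type/$name"} = 1;
}

# Render each registered resource exactly once; the page template would
# emit the result inside <head>.
sub resource_tags {
	my @tags;
	foreach my $key (sort keys %resources) {
		my ($type, $name) = split m{/}, $key, 2;
		push @tags, $type eq 'js'
			? qq{<script type="text/javascript" src="$name"></script>}
			: qq{<link rel="stylesheet" type="text/css" href="$name" />};
	}
	return join "\n", @tags;
}

# Both relativedate and toggle could then call:
register_resource('js', 'ikiwiki.js');
register_resource('js', 'ikiwiki.js');   # second registration is a no-op
print resource_tags(), "\n";             # one script tag, not two
```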

Disclaimer: personally, I'd love this kind of feature for other reasons too. I
want to include some third-party JavaScript code on every page of my IkiWiki
installation, and I don't want to create and maintain a fork of the upstream
page template just to insert a couple of script tags in the head. Currently I
solve this dilemma with an ugly hack involving dynamic JavaScript loading
inside a script tag embedded in sidebar.mdwn. I'd prefer to write an IkiWiki
plugin that tells IkiWiki to include the proper script tags for my JavaScript
files in the head of the default IkiWiki templates, using the method described
earlier. I don't want to write a plugin that does ugly format() tricks like
relativedate and toggle currently do, either ;)

Regards,
Tuomas Jormola






Bug#582390: gitweb.js not included in the gitweb package

2010-05-20 Thread Tuomas Jormola
Package: gitweb
Version: 1:1.7.0.4-1
Severity: normal


Hi,

I'm filing this report from an Ubuntu lucid system, but it appears that the
current version (1:1.7.1-1) of the package in Debian/sid also suffers from
this bug, see http://packages.debian.org/sid/all/gitweb/filelist

The gitweb package does not include the file gitweb.js from the upstream
sources, but a reference to the JavaScript file is included in the HTML
emitted by gitweb:

<script type="text/javascript" src="gitweb.js"></script>

The gitweb Debian package should install the JavaScript file and make its
location configurable in /etc/gitweb.conf. Thanks.
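If the file were shipped, its location could then be pointed at from /etc/gitweb.conf via gitweb's $javascript configuration variable (the value here is an assumption about where the package would install it):

```perl
# /etc/gitweb.conf sketch: $javascript is the URI of gitweb.js,
# relative to the generated pages unless given as an absolute path.
our $javascript = "gitweb.js";
```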

-- System Information:
Debian Release: squeeze/sid
  APT prefers lucid-updates
  APT policy: (999, 'lucid-updates'), (999, 'lucid-security'), (999, 'lucid')
Architecture: amd64 (x86_64)

Kernel: Linux 2.6.32-22-server (SMP w/4 CPU cores)
Locale: LANG=C, lc_ctype=fi...@euro (charmap=ISO-8859-15)
Shell: /bin/sh linked to /bin/bash

Versions of packages gitweb depends on:
ii  git-core 1:1.7.0.4-1 fast, scalable, distributed revisi
ii  perl 5.10.1-8ubuntu2 Larry Wall's Practical Extraction 

gitweb recommends no packages.

Versions of packages gitweb suggests:
pn  git-doc   none (no description available)

-- debconf-show failed






Bug#536197: apticron: Update mailx dependency

2009-07-08 Thread Tuomas Jormola
Package: apticron
Version: 1.1.32
Severity: minor


Hi,

Now that apticron also works correctly with heirloom-mailx (Bug#530347), you
could update the mailx dependency. Please replace the dependency

Depends: bsd-mailx | mailutils, apt (>= 0.6.8), ucf (>= 0.28), ${misc:Depends}

either with the explicit

Depends: bsd-mailx | heirloom-mailx | mailutils, apt (>= 0.6.8), ucf (>= 0.28), ${misc:Depends}

or just

Depends: mailx, apt (>= 0.6.8), ucf (>= 0.28), ${misc:Depends}

The latter is of course cleaner, but it might theoretically cause breakage if
another, incompatible mailx implementation enters Debian in the future.






Bug#512293: Invalid depend: sun-java6-jre-headless

2009-01-19 Thread Tuomas Jormola
Package: ca-certificates-java
Version: 20081028

Hi,

ca-certificates-java has an alternative dependency on sun-java6-jre-headless.
However, this package does not exist: there is no headless version of the
non-free Sun JRE package. ca-certificates-java should depend on sun-java6-jre
instead of sun-java6-jre-headless.

-- 
Tuomas Jormola t...@solitudo.net




Bug#511300: Acknowledgement (Support locking so that concurrent instances of the same backup action not possible)

2009-01-13 Thread Tuomas Jormola
On Mon, Jan 12, 2009 at 10:10:46PM +0100, intrigeri wrote:
 Tuomas Jormola wrote (12 Jan 2009 15:01:03 GMT) :
  On Mon, Jan 12, 2009 at 01:17:59PM +0100, intrigeri wrote:
  As pointed out in your comments, the checkpidalive portability needs
  to be fixed; do you intend to do so at some point?
  Quick check on Linux/AIX/Solaris/Mac OS X systems would suggest that
  ps -A | awk '{print $1}' would give you the list of all the pids running
  on the system. On HP-UX, it's ps -e. So maybe something like this could
  be done (untested, and we should check at least how *BSD ps behaves)
 
  function checkpidalive() {
      local pid=$1
      [ -z "$pid" ] && return 2
      [ -d "/proc/$pid" ] && return 0
      local psargs
      local uname=`uname`
      case $uname in
          HP-UX) psargs=-e ;;
          *) psargs=-A ;;
      esac
      ps $psargs | awk '{print $1}' | grep -q "^${pid}$"
      return $?
  }
 
 Seems fine to me at first glance, but... have you checked how
 other programs do this? I bet there is a robust, long-used piece of
 code somewhere that does exactly this and takes care of the usual
 weird corner cases.
In shell, I guess you're pretty much limited to whatever commands you have at
your disposal. ps is the standard utility for getting information about
running processes, so I don't see that there are many other options...

-- 
Tuomas Jormola t...@solitudo.net




Bug#511299: [PATCH] support manually started backup actions

2009-01-12 Thread Tuomas Jormola
Hi,

On Mon, Jan 12, 2009 at 03:10:04PM +0100, intrigeri wrote:
 intrigeri wrote (12 Jan 2009 12:12:37 GMT) :
  BTW, your patch had inverted logics: as documented, the isnow
  function returns 1 on success (weird, sure, but it works...)
 
 Sorry, your patch was right, my mistake.
Yeah, it's a bit weird. I struggled with whether to use just true and false
as return values, in which case the natural way would've been to return 0 for
true and 1 for false, since it's bash code. But then I decided to add a third
possible return value for the error case, so a simple "if foo; then ..."
wouldn't work anymore anyway. But feel free to refactor :)

-- 
Tuomas Jormola t...@solitudo.net




Bug#511300: Acknowledgement (Support locking so that concurrent instances of the same backup action not possible)

2009-01-12 Thread Tuomas Jormola
On Mon, Jan 12, 2009 at 01:17:59PM +0100, intrigeri wrote:
 Hello,
 
 Tuomas Jormola wrote (09 Jan 2009 11:33:15 GMT) :
  Naive sample implementation of this feature, not tested at all.
  Maybe better use as inspiration rather than concrete
  implementation...
 
 This seems like a good start to implement an important missing
 feature. Great :)
 
 As pointed out in your comments, the checkpidalive portability needs
 to be fixed; do you intend to do so at some point?
Quick check on Linux/AIX/Solaris/Mac OS X systems would suggest that
ps -A | awk '{print $1}' would give you the list of all the pids running
on the system. On HP-UX, it's ps -e. So maybe something like this could
be done (untested, and we should check at least how *BSD ps behaves)

function checkpidalive() {
    local pid=$1
    [ -z "$pid" ] && return 2
    [ -d "/proc/$pid" ] && return 0
    local psargs
    local uname=`uname`
    case $uname in
        HP-UX) psargs=-e ;;
        *) psargs=-A ;;
    esac
    ps $psargs | awk '{print $1}' | grep -q "^${pid}$"
    return $?
}

 Also, why is the hostname needed in the lock file name?
Well, I thought the lock directory might be shared among different hosts that
all run rdiff-backup, and there might be similarly named configs on those
hosts. But listening to the voice of reason, there are too many ifs and too
much unnecessary complexity involved. Just get rid of the hostname in the
file name, it's not needed at all :)

-- 
Tuomas Jormola t...@solitudo.net




Bug#511299: [PATCH] support manually started backup actions

2009-01-12 Thread Tuomas Jormola
Uh, just forget the last message, I was referring to checkpidalive() of
the locking patch I sent...




Bug#494676: Version 6.9

2009-01-09 Thread Tuomas Jormola
And now version 6.9 is available:
http://sourceforge.net/project/showfiles.php?group_id=13764&package_id=11481&release_id=649937
Please upload it to experimental, at least.




Bug#511299: [PATCH] support manually started backup actions

2009-01-09 Thread Tuomas Jormola
Package: backupninja
Version: 0.9.6-4
Severity: wishlist
Tags: patch

Hi,

Currently backupninja schedules all the backup action jobs configured in the
/etc/backup.d directory. However, I'd like to keep configuration for some
hosts in that directory but not try to back them up automatically each day,
since the devices are seldom connected to the network. Instead, I'd like to
run these actions by launching backupninja on the backup server manually with
the --run command line argument when I know the devices are ready for backup.
The attached patch (not very well tested, though) implements the scheduling
value

when = manual

which causes backupninja to skip such configurations during normal scheduled
runs.

Regards,

-- 
Tuomas Jormola t...@solitudo.net
diff -ur backupninja-0.9.5.orig/man/backup.d.5 backupninja-0.9.5/man/backup.d.5
--- backupninja-0.9.5.orig/man/backup.d.5	2006-04-04 20:29:40.0 +0300
+++ backupninja-0.9.5/man/backup.d.5	2009-01-09 12:35:23.225766134 +0200
@@ -67,7 +67,7 @@
 
 .SH SCHEDULING
 
-By default, each configuration file is processed everyday at 01:00 (1 AM). This can be changed by specifying the 'when' option in a backup action's config file or in the global configuration file. 
+By default, each configuration file is processed everyday at 01:00 (1 AM). This can be changed by specifying the 'when' option in a backup action's config file or in the global configuration file. Special value 'manual' will disable scheduling for the backup action. It is possible to run the backup action manually by invoking \fBninjahelper(1)\fP with --run command line argument.
 
 For example:
   when = sundays at 02:00
@@ -76,6 +76,7 @@
   when = everyday at 01
   when = Tuesday at 05:00
   when = hourly
+  when = manual
 
 These values for when are invalid:
   when = tuesday at 2am
diff -ur backupninja-0.9.5.orig/src/backupninja.in backupninja-0.9.5/src/backupninja.in
--- backupninja-0.9.5.orig/src/backupninja.in	2007-10-12 20:42:46.0 +0300
+++ backupninja-0.9.5/src/backupninja.in	2009-01-09 12:32:36.326445517 +0200
@@ -201,6 +201,9 @@
 function isnow() {
 	local when=$1
 	set -- $when
+
+	[ "$when" == "manual" ] && return 0
+
 	whendayofweek=$1; at=$2; whentime=$3;
 	whenday=`toint $whendayofweek`
 	whendayofweek=`tolower $whendayofweek`




Bug#511300: Support locking so that concurrent instances of the same backup action not possible

2009-01-09 Thread Tuomas Jormola
Package: backupninja
Version: 0.9.6-4
Severity: wishlist

Hi,

Apparently no locking of any kind is performed when backupninja launches
backup actions. This can lead to several concurrent instances of the same
backup action job, which is not desirable; it happens whenever a backup
action does not finish before the next run is scheduled. For example, one
might have an rdiff-backup job scheduled to run daily that normally completes
in a matter of minutes or hours, but due to large changes in the data and a
congested network link, the transfer might take more than 24 hours. I think
backupninja should implement per-action locking so that in this case further
invocations of the job are skipped until the long-running job started the
previous day has finished.
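One possible shape for such locking, sketched with flock(1) from util-linux rather than backupninja's own helpers (file names hypothetical, untested against backupninja itself; the lock directory here uses /tmp only so the sketch runs unprivileged, real code would use /var/lock):

```shell
# Take a per-action lock before running the backup action; if another
# instance already holds it, skip this run instead of piling up.
conffile=/etc/backup.d/example.rdiff            # hypothetical action config
lockdir="${TMPDIR:-/tmp}/backupninja-locks"     # real code: /var/lock/backupninja
mkdir -p "$lockdir"
lockfile="$lockdir/$(basename "$conffile").lock"

exec 9>"$lockfile"                  # fd 9 holds the lock handle
if ! flock -n 9; then
    echo "skipping $conffile: previous run still in progress" >&2
    exit 0
fi
echo "running action $conffile"
# ... run the backup action here; the lock is released when fd 9 closes ...
```

Unlike pid files, a flock lock vanishes automatically when the holding process dies, so stale-lock detection comes for free.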

Regards,

-- 
Tuomas Jormola t...@solitudo.net




Bug#511300: Acknowledgement (Support locking so that concurrent instances of the same backup action not possible)

2009-01-09 Thread Tuomas Jormola
Naive sample implementation of this feature, not tested at all. Maybe better
use as inspiration rather than concrete implementation...

-- 
Tuomas Jormola t...@solitudo.net
diff -ur backupninja-0.9.5.orig/etc/backupninja.conf.in backupninja-0.9.5/etc/backupninja.conf.in
--- backupninja-0.9.5.orig/etc/backupninja.conf.in	2007-03-04 12:29:26.0 +0200
+++ backupninja-0.9.5/etc/backupninja.conf.in	2009-01-09 12:50:24.494527275 +0200
@@ -62,6 +62,9 @@
 # where backupninja libs are found
 libdirectory = @pkglibdir@
 
+# where backupninja stores action lock files
+lockdirectory = @localstatedir@/lock
+
 # whether to use colors in the log file
 usecolors = yes
 
diff -ur backupninja-0.9.5.orig/man/backupninja.conf.5 backupninja-0.9.5/man/backupninja.conf.5
--- backupninja-0.9.5.orig/man/backupninja.conf.5	2005-11-19 19:11:28.0 +0200
+++ backupninja-0.9.5/man/backupninja.conf.5	2009-01-09 12:51:00.844527176 +0200
@@ -66,6 +66,10 @@
 .B scriptdirectory 
 Where backupninja handler scripts are found
 
+.TP 
+.B lockdirectory 
+Where backupninja stores action lock files
+
 .TP
 .B usecolors
 If set to 'yes', use colors in the log file and debug output.
diff -ur backupninja-0.9.5.orig/src/backupninja.in backupninja-0.9.5/src/backupninja.in
--- backupninja-0.9.5.orig/src/backupninja.in	2007-10-12 20:42:46.0 +0300
+++ backupninja-0.9.5/src/backupninja.in	2009-01-09 13:29:38.552333945 +0200
@@ -227,6 +227,27 @@
 	return 1
 }
 
+# Returns the hostname in a portable way
+function findhostname() {
+	local hostname=$HOSTNAME
+	[ -z "$hostname" ] && hostname=`hostname 2>/dev/null`
+	if [ -n "$hostname" ]; then
+		echo $hostname
+		return
+	fi
+	echo localhost
+}
+
+# Return 0 if the given PID is running, 1 if not and 2 in case of error.
+# TODO: Only Linux supported, should support other operating systems
+# by running ps with suitable arguments for the system and parsing the result
+function checkpidalive() {
+	local pid=$1
+	[ -z "$pid" ] && return 2
+	[ -d "/proc/$pid" ] && return 0
+	return 1
+}
+
 function usage() {
 	cat << EOF
 $0 usage:
@@ -273,6 +294,41 @@
 	local run=no
 	setfile $file
 
+	# skip over this config if another instance is already running
+	getconf lockdirectory @localstatedir@/lock/backupninja
+	if ! [ -d $lockdirectory ]; then
+		if ! mkdir -p $lockdirectory >/dev/null 2>&1; then
+			msg "*failed* -- $file"
+			errormsg="$errormsg\n== could not create lock directory $lockdirectory ==\n"
+			error "finished action $file: ERROR"
+			return
+		fi
+	fi
+	if ! [ -w $lockdirectory ]; then
+		msg "*failed* -- $file"
+		errormsg="$errormsg\n== lock directory $lockdirectory not writable ==\n"
+		error "finished action $file: ERROR"
+		return
+	fi
+	local hostname=`findhostname`
+	local basename=`basename $file`
+	local lockfile=${lockdirectory}/${hostname}_${basename}.lock
+	local currentargs=${bash_ar...@]}
+	if [ -e $lockfile ]; then
+		local previouspid=`cat $lockfile | cut -d' ' -f1`
+		local previousargs=`cat $lockfile | cut -d' ' -f2-`
+		checkpidalive $previouspid
+		local previouspidalive=$?
+		if [ $previouspidalive == 0 ] && [ "$previousargs" == "$currentargs" ]; then
+			info "skipping action $file because the action is currently running already"
+			return
+		else
+			echo "$$ $currentargs" > $lockfile
+		fi
+	else
+		echo "$$ $currentargs" > $lockfile
+	fi
+
 	# skip over this config if when option
 	# is not set to the current time.
 	getconf when $defaultwhen




Bug#510204: Update clearlooks dependency

2008-12-30 Thread Tuomas Jormola
Package: ldm
Version: 2:2.0.27-1

ldm depends on gtk2-engines-clearlooks. However, this package no longer
exists, since the theme has been merged into the base GTK+ engines package
gtk2-engines. It would be cleaner if the ldm dependency were updated to
reference gtk2-engines instead of gtk2-engines-clearlooks.




Bug#508604: Acknowledgement (Mechanism to automatically save and restore the state of virtual machines on host shutdown/start-up)

2008-12-13 Thread Tuomas Jormola
Hi,

I tuned my script to be more intelligent. It now detects a save/restore
conflict with Xen. The domain state file name is configurable. Save and
restore are more robust. There is an option to start QEMU/KVM virtual
machines if no restore file for the domain can be located, emulating the
libvirt autostart feature, which must be switched off at the libvirt level.
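The configurable state file name works by substituting the domain name into a
pattern, mirroring the script's expand_domain_state_file function (the pattern
and domain name below are just examples):

```shell
# %n in the pattern is replaced by the domain name, as in the script.
domain_state_file_name_pattern="%n.state"
domain_name="example-vm"
echo "$domain_state_file_name_pattern" | sed "s/%n/$domain_name/g"
# prints: example-vm.state
```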

-- 
Tuomas Jormola t...@solitudo.net
#!/bin/sh
#
# This script automatically saves running VMs
# managed by libvirtd on stop and restores VMs on start
#
# (C) 2008 Tuomas Jormola t...@solitudo.net
#
### BEGIN INIT INFO
# Required-Start:$network $local_fs
# Required-Stop: 
# Default-Start: 2 3 4 5
# Default-Stop:  0 1 6
# Short-Description: save and restore libvirtd managed VMs
### END INIT INFO

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
domain_state_file_name_pattern=%n
enabled=false

. /lib/lsb/init-functions

if test -f /etc/default/libvirt-domain-state; then
. /etc/default/libvirt-domain-state
fi

test "$enabled" = "true" || exit 0

xen_enabled=0
if test -f /etc/default/xendomains; then
	. /etc/default/xendomains
	if test -n "$XENDOMAINS_MIGRATE" || test "$XENDOMAINS_SAVE" = "true"; then
		xen_enabled=1
	fi
fi
if test $xen_enabled -eq 1; then
	log_warning_msg "Automatic domain state save and restore implemented by Xen"
	exit 1
fi
if test -z "$domain_state_file_directory"; then
	log_failure_msg "Domain save directory not defined"
	exit 1
fi
if ! test -d "$domain_state_file_directory"; then
	log_failure_msg "Domain save directory not found: $domain_state_file_directory"
	exit 1
fi
if test -z "$domain_state_file_name_pattern"; then
	log_failure_msg "Domain state file name pattern not defined"
	exit 1
fi
if echo "$domain_state_file_name_pattern" | grep -q '/'; then
	log_failure_msg "Domain state file name pattern must not contain slashes"
	exit 1
fi
if ! echo "$domain_state_file_name_pattern" | grep -q '%n'; then
	log_failure_msg "Domain state file name pattern must contain %n"
	exit 1
fi
if test -z "$libvirt_hypervisor_uri"; then
	log_failure_msg "libvirt hypervisor connection URI not defined"
	exit 1
fi
if test "$libvirt_hypervisor_uri" != "`virsh -q -c $libvirt_hypervisor_uri uri 2>/dev/null`"; then
	log_failure_msg "Failed to connect to libvirt hypervisor $libvirt_hypervisor_uri"
	exit 1
fi

run_virsh() {
	virsh -q -c $libvirt_hypervisor_uri $*
}

# Print full path to domain state file for the domain.
expand_domain_state_file() {
	test -n "$domain_name" || return
	domain_state_file_name=`echo $domain_state_file_name_pattern | sed "s/%n/$domain_name/g" 2>/dev/null`
	test -n "$domain_state_file_name" || return
	echo $domain_state_file_directory/$domain_state_file_name
}

check_domain_autostart() {
	virsh_output=`run_virsh dominfo $1 2>/dev/null`
	ret=$?
	if test $ret != 0; then
		return 2
	fi
	echo $virsh_output | grep -q 'Autostart: disable' 2>/dev/null
}

# Restore a domain from state file if the domain is shut down
# or start QEMU/KVM domain if enabled.
restore_domain() {
	case "$domain_status" in
		"shut off")
			domain_state_file=`expand_domain_state_file`
			if test -f "$domain_state_file"; then
				if run_virsh restore $domain_state_file >/dev/null 2>&1; then
					if test "$delete_restored_domain_state_files" = "true"; then
						rm -f $domain_state_file
					fi
					log_action_msg "$domain_name restored"
				else
					log_failure_msg "Failed to restore $domain_name from $domain_state_file"
				fi
			elif test "$start_qemu_domains_without_state_file" = "true" && test -f /etc/libvirt/qemu/$domain_name.xml; then
				if run_virsh start $domain_name >/dev/null 2>&1; then
					log_action_msg "$domain_name started (QEMU/KVM domain with no restore file)"
				else
					log_failure_msg "Failed to start QEMU/KVM domain $domain_name"
				fi
			else
				log_warning_msg "Restore file not found for $domain_name"
			fi
			;;
		running) log_warning_msg "$domain_name already running"; return ;;
		*) log_warning_msg "Unknown status for domain $domain_name: $domain_status"; return ;;
	esac
}

# Save a domain to state file if domain is running and if libvirt
# is not starting the domain
save_domain() {
	if test "$domain_status" != "running"

Bug#508604: Mechanism to automatically save and restore the state of virtual machines on host shutdown/start-up

2008-12-12 Thread Tuomas Jormola
Package: libvirt-bin
Version: 0.5.1-2
Severity: wishlist
Tags: patch

Hi,

I implemented a feature to automatically save virtual machines managed by
libvirt on host shutdown and restore them on host startup. See the attached
patch. If you wish, you can use it as is or modify it in any way desired.
It has only been tested with KVM-based VMs.

Regards,

-- 
Tuomas Jormola t...@solitudo.net
Index: debian/libvirt-domain-state.default
===
--- debian/libvirt-domain-state.default	(revision 0)
+++ debian/libvirt-domain-state.default	(revision 0)
@@ -0,0 +1,29 @@
+# Provide support for storing virtual machine state on host shutdown
+# and restore on host start-up. In order to support this, please note
+# that you must configure each virtual machine so that the libvirt
+# autostart feature is disabled. Also if you're using Xen, the built-in
+# save/restore on shutdown/start-up mechanism must be disabled.
+
+# Directory where to save virtual machine state files. This directory
+# needs to be created manually. Be sure that there's enough free disk
+# space available! You will need at least the combined amount of physical
+# memory and swap space allocated to all virtual machines plus some room
+# for overhead. For instance, if you are running two virtual machines
+# each configured to use 1GB of memory and 2GB of swap, you should
+# reserve more than 6GB of space in the file system holding this
+# directory. This setting is required.
+#vm_state_file_directory=/var/lib/libvirt/domain-state-files
+
+# libvirt URI of the hypervisor.
+# See http://libvirt.org/remote.html#Remote_URI_reference
+# This setting is required.
+#libvirt_hypervisor_uri=qemu:///system
+
+# Uncommenting this setting will destroy each virtual machine state file
+# after successful restore. Disabled by default, i.e. restore is performed
+# but state files are left intact.
+#delete_restored_vm_state_files=true
+
+# Uncomment this to actually enable automatic saving and restoring
+# of the virtual machines.
+#enabled=true
Index: debian/rules
===
--- debian/rules	(revision 1604)
+++ debian/rules	(working copy)
@@ -43,3 +43,7 @@
 
 binary-install/libvirt-doc::
 	cd $(EXAMPLES)  rm -rf .libs *.o info1 suspend ../html/CVS
+
+binary-post-install/libvirt-bin::
+	install -m 755 -o root -g root debian/libvirt-domain-state.init debian/libvirt-bin/etc/init.d/libvirt-domain-state
+	install -m 644 -o root -g root debian/libvirt-domain-state.default debian/libvirt-bin/etc/default/libvirt-domain-state
Index: debian/libvirt-domain-state.init
===
--- debian/libvirt-domain-state.init	(revision 0)
+++ debian/libvirt-domain-state.init	(revision 0)
@@ -0,0 +1,120 @@
+#!/bin/sh
+#
+# This script automatically saves running VMs
+# managed by libvirtd on stop and restores VMs on start
+#
+# (C) 2008 Tuomas Jormola t...@solitudo.net
+#
+### BEGIN INIT INFO
+# Required-Start:$network $local_fs
+# Required-Stop: 
+# Default-Start: 2 3 4 5
+# Default-Stop:  0 1 6
+# Short-Description: save and restore libvirtd managed VMs
+### END INIT INFO
+
+PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
+enabled=false
+
+. /lib/lsb/init-functions
+
+if test -f /etc/default/libvirt-domain-state; then
+	. /etc/default/libvirt-domain-state
+fi
+
+test "$enabled" = "true" || exit 0
+
+if test -z "$vm_state_file_directory"; then
+	log_failure_msg "Virtual Machine save directory not defined"
+	exit 1
+fi
+if ! test -d "$vm_state_file_directory"; then
+	log_failure_msg "Virtual Machine save directory not found: $vm_state_file_directory"
+	exit 1
+fi
+if test -z "$libvirt_hypervisor_uri"; then
+	log_failure_msg "libvirt hypervisor connection URI not defined"
+	exit 1
+fi
+if test "$libvirt_hypervisor_uri" != "`virsh -q -c $libvirt_hypervisor_uri uri 2>/dev/null`"; then
+	log_failure_msg "Failed to connect to libvirt hypervisor $libvirt_hypervisor_uri"
+	exit 1
+fi
+
+set -e
+
+run_virsh() {
+	virsh -q -c $libvirt_hypervisor_uri $*
+}
+
+print_vm_status() {
+	log_action_msg "$vm_name: $vm_status"
+}
+
+save_vm() {
+	if test "$vm_status" != "running"; then
+		return
+	fi
+	vm_state_file=$vm_state_file_directory/$vm_name
+	if run_virsh save $vm_id $vm_state_file >/dev/null 2>&1; then
+		log_action_msg "$vm_name"
+	else
+		log_failure_msg "Failed to save $vm_name as $vm_state_file"
+	fi
+}
+
+iterate_virtual_machines() {
+	callback=$1
+	run_virsh list 2>/dev/null | grep -v ^Connecting | \
+	while read vm_id vm_name vm_status; do
+		eval $callback
+	done
+}
+
+restore_virtual_machines() {
+	for vm_state_file in $vm_state_file_directory/*; do
+		test -f "$vm_state_file" || continue
+		vm_name=`basename $vm_state_file`
+		if run_virsh restore $vm_state_file >/dev/null 2>&1; then
+			if test "$delete_restored_vm_state_files" = "true"; then
+				rm -f $vm_state_file
+			fi
+			log_action_msg "$vm_name"
+		else
+			log_failure_msg Failed

Bug#508604: Acknowledgement (Mechanism to automatically save and restore the state of virtual machines on host shutdown/start-up)

2008-12-12 Thread Tuomas Jormola
Oh, it looks like there is also another attempt at solving this issue
included in the package as an example. Yes, perhaps it's better not to
enable this kind of functionality by default. Perhaps you could consider
adding also my script as an example, though.

-- 
Tuomas Jormola t...@solitudo.net




Bug#508606: Split virsh to separate package

2008-12-12 Thread Tuomas Jormola
Package: libvirt-bin
Version: 0.5.1-2
Severity: wishlist
Tags: patch

Hi,

I think that it would be better to split virsh out of libvirt-bin into
its own package. The reasoning is that libvirtd works fine without a local
virsh, and that virsh has plenty of uses for remote administration of
hypervisors. On the admin machine you don't necessarily want or need the
libvirtd binary at all. One could use virsh on a machine running an
architecture where virtualization is not even supported. In my
environment I only have one machine running virtual machines but several
machines that are used to administer the hypervisor remotely using virsh,
and I hate having libvirtd cruft installed on these machines.

The attached patch creates the virsh package. Depending on your taste, you
might want to make libvirt-bin depend on virsh instead of just
recommending it. Technically, it doesn't require virsh to operate, and
thus it shouldn't depend on the package, but some users may encounter a
loss of functionality if virsh suddenly disappears from their system
when upgrading libvirt-bin without installing all the recommended
packages by default.

If accepted, also maybe libvirt-bin package should be renamed to
libvirtd, as currently libvirtd is the only binary left in the
package...

-- 
Tuomas Jormola t...@solitudo.net
Index: debian/control
===
--- debian/control	(revision 1605)
+++ debian/control	(working copy)
@@ -15,7 +15,7 @@
 Depends: ${shlibs:Depends}, adduser, libvirt0 (= ${binary:Version}), logrotate
 Enhances: qemu, kvm, xen
 Section: admin
-Recommends: netcat-openbsd, bridge-utils, dnsmasq (>= 2.46-1), iptables, qemu (>= 0.9.1)
+Recommends: virsh, netcat-openbsd, bridge-utils, dnsmasq (>= 2.46-1), iptables, qemu (>= 0.9.1)
 Suggests: policykit
 Description: the programs for the libvirt library
  Libvirt is a C toolkit to interact with the virtualization capabilities
@@ -25,6 +25,20 @@
  .
  This package contains the supporting binaries to use with libvirt
 
+Package: virsh
+Architecture: any
+Depends: ${shlibs:Depends}
+Section: admin
+Conflicts: libvirt-bin (<< 0.5.1-3)
+Description: text-based interface for hypervisors managed with libvirtd
+ Libvirt is a C toolkit to interact with the virtualization capabilities
+ of recent versions of Linux (and other OSes). The library aims at providing
+ a long term stable C API for different virtualization mechanisms. It currently
+ supports QEMU, KVM, and XEN.
+ .
+ This package contains virsh utility which can be used to administer
+ local or remote hypervisors managed with libvirtd.
+
 Package: libvirt0
 Architecture: any
 Depends: ${shlibs:Depends}
Index: debian/virsh.manpages
===
--- debian/virsh.manpages	(revision 0)
+++ debian/virsh.manpages	(revision 0)
@@ -0,0 +1 @@
+virsh.1
Index: debian/libvirt-bin.install
===
--- debian/libvirt-bin.install	(revision 1605)
+++ debian/libvirt-bin.install	(working copy)
@@ -1,4 +1,3 @@
-usr/bin/*
 usr/sbin/*
 etc/libvirt/*
 etc/sasl2/*
Index: debian/virsh.install
===
--- debian/virsh.install	(revision 0)
+++ debian/virsh.install	(revision 0)
@@ -0,0 +1 @@
+usr/bin/virsh
Index: debian/libvirt-bin.manpages
===
--- debian/libvirt-bin.manpages	(revision 1605)
+++ debian/libvirt-bin.manpages	(working copy)
@@ -1 +0,0 @@
-virsh.1


Bug#453113: Please create a package for the LDAP database plugin for the Kerberos server

2007-11-27 Thread Tuomas Jormola
Package: krb5
Version: 1.6.dfsg.3~beta1-2
Severity: wishlist

Hello,

krb5 1.6 can be configured with the --with-ldap argument, which builds the
LDAP database support plugin for the Kerberos server. I think it would be
great if this was supported by the packages. Please see the section
"Configuring Kerberos with OpenLDAP back-end" of the Kerberos V5 System
Administrator's Guide.
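For reference, the LDAP back-end is then selected per realm via a kdc.conf
dbmodules stanza along these lines (the container DNs, bind DN, and file
paths below are placeholders following the pattern in the admin guide, not
Debian defaults):

```
[dbmodules]
    ldap_module = {
        db_library = kldap
        ldap_kerberos_container_dn = "cn=krbContainer,dc=example,dc=com"
        ldap_kdc_dn = "cn=kdc-service,dc=example,dc=com"
        ldap_service_password_file = /etc/krb5kdc/service.keyfile
        ldap_servers = ldapi:///
    }
```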

-- 
Tuomas Jormola [EMAIL PROTECTED]



-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Bug#378686: nfs-kernel-server: rpc.svcgssd missing

2006-07-18 Thread Tuomas Jormola
Package: nfs-kernel-server
Version: 1:1.0.9-2
Severity: important


I'm trying to get NFSv4 with Kerberos support working. The CITI NFSv4
guide [1] says that you should be running rpc.svcgssd(8) on the server.
However, the Debian package doesn't include this daemon. I began to
investigate the situation. The changelog for version 1:1.0.8-1 states
that

   * rpc.svcgssd no longer exists (it's consolidated into rpc.gssd); remove
 all references to it in from the debian/ directory.

According to the upstream documentation mentioned above rpc.svcgssd is
indeed needed. When reading the upstream changelog, it says

2006-03-27 Kevin Coffman [EMAIL PROTECTED]
Consolidate gssd and svcgssd since they share much code

Remove directory svcgssd which was only created because the old
build system could not handle building two daemons in the same
directory.  This eliminates build complications since gssd and
svcgssd also share many source files.

This patch effectively removes the utils/svcgssd directory, moving
all its files to the utils/gssd directory.  File utils/gssd/Makefile.am
is modified with directions to build both gssd and svcgssd.

Effectively that says that the layout of the source code was refactored, not
that the functionality of svcgssd was integrated into gssd. nfs-utils
1.0.9 still builds and installs svcgssd. So please put rpc.svcgssd back
in the package (and support for it in the init script) so that it's
possible to run a Debian NFSv4 server with Kerberos support, thank you.

Regards,
Tuomas Jormola [EMAIL PROTECTED]

[1] http://www.citi.umich.edu/projects/nfsv4/linux/using-nfsv4.html

-- System Information:
Debian Release: 3.1
Architecture: i386 (i686)
Kernel: Linux 2.6.15-1.2041
Locale: LANG=C, [EMAIL PROTECTED] (charmap=ISO-8859-15)

Versions of packages nfs-kernel-server depends on:
ii  libc6 2.3.2.ds1-22sarge3 GNU C Library: Shared libraries an
ii  lsb-base  3.0-15.0.tj.1  Linux Standard Base 3.0 init scrip
ii  nfs-common1:1.0.9-2  NFS support files common to client
ii  sysvinit  2.86.ds1-1 System-V like init
ii  ucf   1.17   Update Configuration File: preserv

-- no debconf information





Bug#356087: yaird: Proposed fix for this bug

2006-03-12 Thread Tuomas Jormola
Package: yaird
Version: 0.0.12-7
Followup-For: Bug #356087


Here's my take on fixing this bug and #354247. Works for me. Comments
regarding the patch in the attachment.

Tuomas Jormola [EMAIL PROTECTED]

-- System Information:
Debian Release: 3.1
Architecture: i386 (i686)
Kernel: Linux 2.6.15
Locale: LANG=C, [EMAIL PROTECTED] (charmap=ISO-8859-15)

Versions of packages yaird depends on:
ii  cpio   2.5-1.3   GNU cpio -- a program to manage ar
ii  dash   0.5.2-5   The Debian Almquist Shell
ii  libc6  2.3.2.ds1-22  GNU C Library: Shared libraries an
ii  libhtml-template-perl  2.6-2 HTML::Template : A module for usin
ii  libparse-recdescent-perl   1.94-4Generates recursive-descent parser
ii  perl   5.8.4-8sarge3 Larry Wall's Practical Extraction 

-- no debconf information
--- yaird-0.0.12/debian/patches/1004_fix_ide_block_device_detection.patch	1970-01-01 02:00:00.0 +0200
+++ yaird-0.0.12/debian/patches/1004_fix_ide_block_device_detection.patch	2006-03-12 19:09:38.0 +0200
@@ -0,0 +1,25 @@
+The name of a symlink to a block device under /sys/devices/.../ideX/X.X/
+varies between various 2.6 kernels. Earlier versions use name block while
+later (circa 2.6.16-rc1) use block:hdX format. This patch adds support for
+both formats by first globbing block* in the directory and thus resolving the
+real name of the symlink instead of using hard-coded name that only works in
+the former case.
+
+Index: yaird/perl/IdeDev.pm
+===================================================================
+--- yaird.orig/perl/IdeDev.pm	2006-01-26 00:15:49.0 +0200
++++ yaird/perl/IdeDev.pm	2006-03-12 18:48:12.0 +0200
+@@ -49,7 +49,12 @@
+ 	$self->SUPER::fill();
+ 	$self->takeArgs ('path');
+ 	my $path = $self->path;
+-	my $link = readlink ("$path/block");
++	my $block = "$path/block";
++	my @link = <$block*>;
++	if (scalar @link != 1) {
++		Base::fatal ("no suitable candidate link to block device found in $path");
++	}
++	my $link = readlink (shift @link);
+ 	if (! defined ($link)) {
+ 		Base::fatal ("no link to block device in $path");
+ 	}
--- yaird-0.0.12/debian/patches/series  2006-03-12 18:04:17.0 +0200
+++ yaird-0.0.12/debian/patches/series  2006-03-12 19:09:38.0 +0200
@@ -2,3 +2,4 @@
 1001_ignore_parisc_sysfs_subdir.patch
 1002_use_xdigit_in_regexes.patch
 1003_drop_ide-generic_workaround.patch
+1004_fix_ide_block_device_detection.patch


Bug#355816: udev: [PATCH] Improvements to the build system

2006-03-07 Thread Tuomas Jormola
Package: udev
Severity: minor


Hello,

I was backporting udev from unstable to sarge. I'm maintaining my
packages in a Subversion repository. In each directory of a working
copy there's a sub-directory named .svn. The build system of udev
is confused by files inside these hidden directories. The attached patch
fixes the problems I had by adding the -maxdepth 1 switch to the find
commands that locate available tarballs and patches during the build.
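A quick demonstration of the effect: with -maxdepth 1, files hidden inside
.svn/ no longer leak into the list (the directory layout here is fabricated
for the demo):

```shell
# Build a throwaway tree containing a real patch and an .svn subdirectory.
tmp=$(mktemp -d)
mkdir -p "$tmp/.svn"
touch "$tmp/patch-1.diff" "$tmp/.svn/entries"
# Only the top-level file is listed; .svn/entries is skipped.
find "$tmp" -maxdepth 1 -type f | sort
rm -rf "$tmp"
```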
Index: debian/scripts/source.unpack
===================================================================
--- debian/scripts/source.unpack	(revision 257)
+++ debian/scripts/source.unpack	(working copy)
@@ -5,7 +5,7 @@
 
 mkdir -p $STAMP_DIR/upstream/tarballs/ $SOURCE_DIR
 if [ ! -z $SRC_TAR_DIR -a -d $SRC_TAR_DIR ];then
-	files=$(find $SRC_TAR_DIR -type f|sort)
+	files=$(find $SRC_TAR_DIR -type f -maxdepth 1|sort)
 else
 	VER=$(dpkg-parsechangelog 2>&1|egrep ^Version|cut -d " " -f 2|cut -d "-" -f 1)
 	SRC=$(dpkg-parsechangelog 2>&1|egrep ^Source|cut -d " " -f 2-)
Index: debian/scripts/lib
===================================================================
--- debian/scripts/lib	(revision 257)
+++ debian/scripts/lib	(working copy)
@@ -148,7 +148,7 @@
 			continue
 		fi
 		eval mkdir -p $dirprep
-		for f in `(cd $d >/dev/null;find -type f ! -name 'chk-*' 2>/dev/null )|sort $reversesort`;do
+		for f in `(cd $d >/dev/null;find -type f ! -name 'chk-*' -maxdepth 1 2>/dev/null )|sort $reversesort`;do
 			eval stampfile=$stampfiletmpl
 			eval log=$logtmpl
 			eval file=$filetmpl


Bug#352495: dhcp3: Please build depend on dpkg 1.13

2006-02-12 Thread Tuomas Jormola
Package: dhcp3
Severity: normal

In debian/rules of the dhcp3 >= 3.0.3-6 source package there's a call to the
command dpkg-architecture -qDEB_HOST_ARCH_OS. This fails when trying to
backport the package with dpkg 1.10.28 of sarge. I don't know the exact
version in which DEB_HOST_ARCH_OS was made available in
dpkg-architecture, but the source package should build-depend at least
on dpkg (>= 1.13).
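For backporters, a hedged workaround sketch until the build-dependency is in
place: fall back to a default when the variable is unsupported (defaulting to
linux is an assumption that fits the common case, not part of the package):

```shell
# Older dpkg-architecture exits non-zero for unknown -q variables,
# so the fallback branch supplies "linux".
DEB_HOST_ARCH_OS=$(dpkg-architecture -qDEB_HOST_ARCH_OS 2>/dev/null || echo linux)
echo "$DEB_HOST_ARCH_OS"
```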

Regards,
Tuomas Jormola





Bug#296912: ipsec-tools: /etc/init.d/setkey restart is broken

2005-02-25 Thread Tuomas Jormola
Package: ipsec-tools
Version: 1:0.5-3
Severity: grave
Justification: user security hole

In the restart target of the setkey init script, setkey is run with the
following command:

$SETKEY -f $SETKEY_CONF:

This of course fails, since it appends ':' to the configuration file
name. A potential security hole is introduced if the init script is used to
apply a new secure configuration over a previous insecure one and this fails
due to the typo in the script.
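The one-character difference, spelled out; this only prints the two command
lines instead of running setkey (paths follow the Debian defaults):

```shell
SETKEY=/usr/sbin/setkey
SETKEY_CONF=/etc/ipsec-tools.conf
# Trailing colon makes setkey look for a file named "...conf:":
echo "broken: $SETKEY -f $SETKEY_CONF:"
echo "fixed:  $SETKEY -f $SETKEY_CONF"
```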

-- System Information:
Debian Release: 3.1
  APT prefers unstable
  APT policy: (500, 'unstable')
Architecture: i386 (i686)
Kernel: Linux 2.6.10
Locale: LANG=C, [EMAIL PROTECTED] (charmap=ISO-8859-15)

Versions of packages ipsec-tools depends on:
ii  libc6   2.3.2.ds1-20 GNU C Library: Shared libraries an
ii  libreadline55.0-10   GNU readline and history libraries

-- no debconf information





Bug#290315: iproute: New upstream for kernel 2.6.10

2005-01-13 Thread Tuomas Jormola
Package: iproute
Version: 20041019-2
Severity: wishlist

Hi,

Could you update the iproute package in unstable to the new upstream
version available at
http://developer.osdl.org/dev/iproute2/download/iproute2-2.6.10-ss050112.tar.gz
please? Note that the new iptables support in tc uses a hard-coded path for
the iptables module directory. The default goes like this (from tc/m_ipt.c):

#define IPT_LIB_DIR "/usr/local/lib/iptables"

You should change it to /lib/iptables.
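One way a packager might apply that change before building; done here on a
temporary copy so the commands can be run anywhere (the real edit would
target tc/m_ipt.c in the unpacked source tree):

```shell
tmp=$(mktemp)
printf '#define IPT_LIB_DIR "/usr/local/lib/iptables"\n' > "$tmp"
# Rewrite the define to point at the Debian module directory (GNU sed -i).
sed -i 's|#define IPT_LIB_DIR .*|#define IPT_LIB_DIR "/lib/iptables"|' "$tmp"
cat "$tmp"   # now defines /lib/iptables
rm -f "$tmp"
```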

-- System Information:
Debian Release: 3.1
  APT prefers unstable
  APT policy: (500, 'unstable')
Architecture: i386 (i686)
Kernel: Linux 2.6.9
Locale: LANG=C, [EMAIL PROTECTED] (charmap=ISO-8859-15)

Versions of packages iproute depends on:
ii  libatm1 2.4.1-16 shared library for ATM (Asynchrono
ii  libc6   2.3.2.ds1-20 GNU C Library: Shared libraries an

-- no debconf information





Bug#290374: Please don't delete existing stunnel4 user on postinst

2005-01-13 Thread Tuomas Jormola
Package: stunnel4
Version: 2:4.070-2
Severity: wishlist

Hello,

This part of the postinst script deletes the stunnel4 user if it exists:
###
# 2. Ensure that no standard account or group will remain before adding the
#    new user
if [ "$IUID" != "NONE" ]; then # remove existing user
  $USERDEL $USER
fi

if $GROUPMOD $USER > /dev/null 2>&1; then
  $GROUPDEL $USER;
fi

What is the rationale behind this? Why not only add the user if it
doesn't exist, and leave it alone if it does? Deleting and re-adding the
stunnel user causes /etc/passwd and related files to change on each upgrade.
This unnecessarily triggers alerts on monitored files under /etc when using
a tool such as tripwire. That's just one argument against this behaviour.
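A sketch of the idempotent alternative suggested above: check whether the
account exists and only create it when missing (echo stands in for the real
useradd/adduser call, so this is a dry run):

```shell
USER=stunnel4
# getent succeeds when the account is already present in any NSS source.
if getent passwd "$USER" >/dev/null 2>&1; then
	echo "$USER already exists, leaving it alone"
else
	echo "$USER missing, would create it now"
fi
```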

-- System Information:
Debian Release: 3.1
  APT prefers unstable
  APT policy: (500, 'unstable')
Architecture: i386 (i686)
Kernel: Linux 2.6.9
Locale: LANG=C, [EMAIL PROTECTED] (charmap=ISO-8859-15)

Versions of packages stunnel4 depends on:
ii  libc6   2.3.2.ds1-20 GNU C Library: Shared libraries an
ii  libssl0.9.7 0.9.7e-3 SSL shared libraries
ii  libwrap07.6.dbs-6Wietse Venema's TCP wrappers libra
ii  netbase 4.19 Basic TCP/IP networking system
ii  openssl 0.9.7e-3 Secure Socket Layer (SSL) binary a

-- no debconf information

