Re: network configuration

2015-06-25 Thread Bob Proulx
notoneofmyseeds wrote:
 Bob Proulx wrote:
 
 I must say you have written a book here on this topic, Bob. I've learned a
 lot. I printed it out. To your questions now.

Sometimes people say I write too much.  But the details are important.  :-)
And this message has turned into another large book.

 I had a very long and nice response that I lost, as a result of an
 application crash.

LOL!  I have been there many times myself! :-)

I am merging two messages from you so I can answer both here at the
same time.

ip addr show

 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
 inet 192.168.1.2/24 brd 192.168.1.255 scope global dynamic eth0
 3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
 inet 10.10.10.6/24 brd 10.10.10.255 scope global dynamic wlan0

Gotcha.  Although 192.168.1.2 is an unusual address to get from dhcp.
It would help to see the dhcp range the server issues.

And the other eth0 network is:

 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
 inet 172.16.1.0/24 brd 172.16.1.255 scope global dynamic eth0
 3: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
 inet 10.10.10.5/24 brd 10.10.10.255 scope global wlan0

Again the 172.16.1.0 is an unusual address to get from dhcp.  That is
the network address and I don't think this is correct.  Have you
modified the dhcpd server configuration on that network to include the
entire range?

I think the next thing to start debugging is the dhcp server
configuration on each of those networks.  Change the range to
something in the middle such as from 192.168.1.100-192.168.1.200 and
from 172.16.1.100-172.16.1.200 or some such range that does not
include the .0 network address and does not include the .255 broadcast
address.  Don't assign those as host addresses.
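
For reference, a minimal sketch of what that range looks like in an
ISC dhcpd.conf (the subnet, router, and range values here are
illustrative; adjust them to your networks):

  subnet 192.168.1.0 netmask 255.255.255.0 {
    option routers 192.168.1.1;
    range 192.168.1.100 192.168.1.200;
  }

Restart the dhcp server after editing so the new range takes effect.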

ip route show | tac

 172.16.1.0/24 dev eth0  proto kernel  scope link  src 192.168.1.2
 169.254.0.0/16 dev wlan0  scope link  metric 1000
 10.10.10.0/24 via 10.0.0.1 dev wlan0  proto static  metric 1
 10.10.10.0/24 dev wlan0  proto kernel  scope link  src 10.0.0.5
 default via 172.16.1.1 dev eth0  proto static  metric 1024

The route table is very telling of the problem.  I think in this state
the wifi was connected first and then the wired network was connected
afterward.  The wired network dhcp replaced the default route with its
own: default via 172.16.1.1 dev eth0.  That is the only default route
listed in the above.  It won't go out your wifi interface.  If your
wifi interface is the only one with a route to the Internet then
obviously the above won't work.

You could probably do a manual repair with:

  ip route replace default via 10.10.10.1 dev wlan0

After bringing up the wired interface, which will break the route,
the above would restore the working route out the wlan0 interface.
This is only a temporary measure.  It is temporary because the
dhclient daemon will renew the dhcp lease and every time it does it
will rewrite the default route and break things again.  But it would
probably assure you that this is exactly the problem and that things
can work with the right configuration.

 Another question.  Are all of the subnets on each of the networks
 different?  A subnet is something like 192.168.1.0/24.  For example
 having different subnets would mean 192.168.1.0/24 on one and
 192.168.20.0/24 on another and 192.168.42.0/24 on the third.  All
 different.

 They are all different.

Good!  That will make things much easier and simpler.

 Now that you have learned this I have another question.  Which
 interface do you wish to keep as the default route?

 I assume the default route is the one that will take me online, in that
 case, the wifi.

Then the wifi will be the main interface and will hold the default
route.  The other interfaces should have the default route disabled.
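
One way to disable the default route on a dhcp-configured interface
is to stop dhclient from requesting a router for it.  A sketch for
/etc/dhcp/dhclient.conf (the eth0 name and the trimmed request list
are illustrative):

  interface "eth0" {
    request subnet-mask, broadcast-address, domain-name,
            domain-name-servers;
  }

With routers absent from the request list dhclient will not install
a default route for that interface.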

 And this happens here, when ever the ethernet is connected, the wifi, which
 is the connection to the internet does not work. I remain connected to it,
 but can't go online. This would mean the lan is being used as the default
 route?

I think so.  But unfortunately you missed providing the route data.  :-(

 I think it likely that you have a default route problem which is why I
 explained it in as much detail as I could above.

 I think you're right. The question now is how do I set it up. I've tried
 several configurations yesterday, none of which worked. For example, I set
 wlan0 to dhcp and set the ethernet static, providing all the details, ip,
 netmask, etc.

That should work.  In fact I was going to suggest such a configuration
as being easier.  Use dhcp on wlan0 but use a static ip configuration
on eth0.  Could you repeat that test and tell us here what you are
doing?

 I also changed the order, moving the wlan0 from top to bottom
 of the ethernet on the network/interfaces file.

The order the interfaces

Re: Boots into emergency mode. How to analyze?

2015-06-25 Thread Bob Proulx
The Wanderer wrote:
 In which case I return to my original comment on that point: although
 there might be situations where this setup could make sense, they would
 _not_ be for the casual user. As a setup for a sole computer intended to
 be administered by its sole user, this is simply a crazy design.

I, like you, feel that being able to log in using a root password is
an essential requirement.  However it is also true that Ubuntu is
designed for the non-technical and Ubuntu has chosen to disable the
root password by default and to provide sudo as the root access
method.  Although I agree with you that it is crazy I have to admit
that there are a lot of Ubuntu machines out there with root login
disabled.

I have also seen the problem where users have a very difficult time
managing their *one* login password.  If they are also the system
administrator for the host then they must also manage the root
password making them need to know *two* passwords.  For some people
needing to manage two passwords is overwhelming.  They can never
figure out which one to provide and at which times.  It is
tragicomic!  Therefore Ubuntu's choice to keep the user with only a
single password, used for everything with sudo for root access, is
understandable.

Even on Ubuntu where that is the default it is perfectly okay to set a
root password (with '# passwd root') and then continue normally.
Those that know this and want it do it.  Going without a root
password is not a requirement.

Debian also allows this single-user-password-plus-sudo strategy at
install time.  If you don't set a
password for root and leave the dialog box empty then it won't set an
empty password for root.  Instead it will disable the root password
and set up sudo.  Normally in Debian sudo isn't installed at
debian-installer time but in that case it is installed.

I wrote more words on this topic in these messages.

  http://lists.debian.org/debian-user/2013/10/msg01670.html

  https://lists.debian.org/debian-user/2013/08/msg00509.html

Bob


signature.asc
Description: Digital signature


Re: Suspending and lid-switch

2015-06-25 Thread Bob Proulx
Celejar wrote:
 Running wheezy. I've been suspending to ram (hibernate-ram --force) my
 ThinkPad T61 for several years with no problems. Recently, I installed
 acpi-support to enable suspending via lid closure (by enabling
 LID_SLEEP=true in /etc/default/acpi-support). I still tend to enter the
 hibernate command manually, by habit, followed by lid closure. When I
 subsequently open the lid, the machine wakes up (as it always has done,
 even before installing acpi-support) - and then goes back to sleep
 several seconds later (which it never did before).

I read the above and understand that it suspends twice when you have
manually entered 'hibernate-ram --force'.  Does it also have that
problem if you suspend upon lid close?  That part was not clear to me.
Or is it only when you manually call hibernate-ram?

I always use Fn-F4 (sleep) on my T60 and then close the lid and do not
have this problem.  I don't sleep automatically on lid close.  I often
want to close my lid and carry from place to place and don't want it
to automatically suspend.  However my T60 is currently Squeeze not
Wheezy so the versions are not the same.  I do have acpi-support
installed.  I do not have LID_SLEEP=true uncommented.

I assume you have scanned the /var/log/syslog looking for the event
trail?  Is there anything interesting there?  If things haven't
changed in a release look for PM: lines.
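
For example something like this should show the most recent events
(assuming the messages still carry the PM: prefix):

  grep 'PM:' /var/log/syslog | tail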

 It seems pretty clear that somehow, the lid closure is triggering a
 second hibernation, which takes place after the machine wakes up from
 the first (even though the closure occurs after the machine has already
 gone to sleep). This is certainly not the desired behavior - is this a
 bug? Should I file against acpi-support?

I have read of people having this problem before.  There is almost
certainly discussion in the debian-laptop mailing list from some years
ago on this problem.  I am sure you are not the only person to have
suffered from the suspend-again-immediately-after-waking problem.
But I am not sure if or where a bug would lie.  Plus I assume that
systemd has absorbed all of this functionality in Jessie, so a bug
report filed against acpi-support would likely be ignored now.

Bob


signature.asc
Description: Digital signature


Re: Recurring disk activity

2015-06-25 Thread Bob Proulx
Mayuresh wrote:
 On Tue, Jun 23, 2015 at 01:27:10PM -0600, Bob Proulx wrote:
# hdparm -B255 /dev/sda
 
 I had disabled this when I wrote the OP, but had not set it in
 /etc/hdparm.conf. Today I did that and the Load_Cycle_Count seems steadied
 at 3781, though it is still under observation.

Good deal!

 When I wrote the OP I got an impression that -B255 isn't helping, because
 I wasn't aware of Load_Cycle_Count and I measured the disk activity by the
 following command:
 
 iotop -ob --delay .1
 
 This still continues to show kworker doing some activity:
 
   103 be/4 root  0.00 B/s  0.00 B/s  0.00 %  0.87 % [kworker/1:2]
...
 What to attribute this disk activity to? I have tried checking using lsof
 whether any files are changing that fast (such as syslog) but that is not
 the case.

That looks like a fairly low level of activity.  It can be hard to
chase down everything that writes to the disk.  This is a topic that
you can read more about in the laptop lists.  For example
laptop-mode tries to hold off disk writes so as to save battery by
avoiding disk spinups.  Researching documentation there would be
useful.  Here is an old Linux Journal article on it.

  http://www.linuxjournal.com/article/7539

There is a kernel interface to debug blocks written to disk.  Here is
a nice quick summary of the feature.

  http://www.unixdaemon.net/linux/proc-block-dump.html

And more detail here:

  http://samwel.tk/laptop_mode/faq
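
For reference, the toggle described in those articles is the
vm.block_dump sysctl; a minimal sketch of using it (verify against
your kernel version):

  # echo 1 > /proc/sys/vm/block_dump    # enable block/inode write logging
  # dmesg | tail                        # the dirtied/WRITE lines appear here
  # echo 0 > /proc/sys/vm/block_dump    # disable again when done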

WARNING!  You want to make sure your debug logging doesn't cause more
disk activity which causes more logging which causes more disk
activity and so on in a vicious feedback loop.  The faq article above
describes this in more detail.  But here is a snippet from the
proc-block-dump article to show the capabilities.

  Mar 14 19:16:44 localhost kernel: sshd(2659): dirtied inode 388836 (sshd) on sda1
  Mar 14 19:16:44 localhost kernel: sshd(2659): dirtied inode 533395 (libwrap.so.0) on sda1
  Mar 14 19:17:23 localhost kernel: cat(2672): dirtied inode 05 (.bash_history) on sda1
  Mar 14 19:17:46 localhost kernel: kjournald(913): WRITE block 14016 on sda1
  Mar 14 19:17:48 localhost kernel: pdflush(104): WRITE block 12487672 on sda1

Note that I haven't used this since those articles were written.  I
will assume that things are still similar now ten years later, but I
expect some details to be different.  By
investigating those features you should be able to tell exactly what
is hitting your disk.

Bob


signature.asc
Description: Digital signature


Re: mpsyt (mps-youtube) can't reach YT (no search hits)

2015-06-24 Thread Bob Proulx
incal wrote:
 incal writes:
  mpsyt of mps-youtube suddenly stopped working.

Google changed the Youtube API recently.  The change in Youtube API
broke downstream applications that used it such as mpsyt.

You didn't say what version of Debian you are using.

In Sid a newer version packaged for Debian became available earlier
this month.  Here is the ticket tracking the issue.

  http://bugs.debian.org/785704

This version has migrated to Testing.  It won't be available for
Stable however.  Perhaps a backport could be prepared for Stable
Backports but nothing is packaged at this moment.

  mpsyt version : 0.01.46
 That version is old. I did
 $ sudo aptitude install python3-pip
 $ sudo pip3 install mps-youtube
 to get version 0.2.5 and then it worked.

That is almost perfect.  But it can be improved.  The issue is that
packages installed through pip (or cpan) as above, using all powerful
root privileges, can do anything to your system if malicious code is
installed.  But root is not necessary.  If you add yourself to the
'staff' group then you can more safely install those packages as a
non-root user.  You also want 'adm' so as to be able to browse log
files in /var/log/*.log without sudo.

  $ sudo adduser $USER staff
  $ sudo adduser $USER adm

Then log out and log back in again.  Additional groups can only be
added to your processes at login time.  Therefore you must log out and
log back in again to have them take effect.

  $ sudo aptitude install python3-pip  # Or use apt-get
  $ pip3 install mps-youtube

Because your user will have the 'staff' group it will be able to
write files into the /usr/local tree without requiring the all
powerful root superuser.  This is much safer.  It limits the attacks
that malicious code downloaded from the net and then run can do to
your system.  It will force the restriction that it can only write to
the /usr/local tree.  It will prevent any files installed from being
suid-root or any processes launched from being launched with root
privilege.  It is a safer way to install things using pip (or cpan).
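
A quick way to see the effect is to check the /usr/local directories,
which on Debian are group staff and group writable (output trimmed):

  $ ls -ld /usr/local/lib
  drwxrwsr-x 4 root staff 4096 ... /usr/local/lib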

Bob

P.S. I discovered mps-youtube myself recently.  Surfing youtube from
the command line has never been more fun! :-)


signature.asc
Description: Digital signature


Re: mpsyt (mps-youtube) can't reach YT (no search hits)

2015-06-24 Thread Bob Proulx
Emanuel Berg wrote:
 Bob Proulx writes:
  You didn't say what version of Debian you are using.
 
 How do I check this? 'lsb_release -a' says:
 
 No LSB modules are available.
 Distributor ID: Debian
 Description:Debian GNU/Linux 8.0 (jessie)
 Release:8.0
 Codename:   jessie

Certainly lsb_release -a is fine.  Without that I look at the contents
of /etc/debian_version where the official Debian version is stored.

  $ cat /etc/debian_version
  8.1

When I am on a foreign system I tend to also look at the release
configured in /etc/apt/sources.list too.  Because sometimes people do
non-standard things there and it gives more details.  But it is harder
to describe and document.

It looks like your system, at 8.0 rather than 8.1, is behind by one
point release, which was released earlier this month.

  https://www.debian.org/News/2015/20150606

It could very well be that you are current from the security
repository which is treated separately.  But AFAIK security upgrades
always assume the latest point release and are layered on top of them.
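
Bringing it up to the current point release is the normal update
procedure (use dist-upgrade only if upgrade holds packages back):

  # apt-get update
  # apt-get upgrade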

Bob


signature.asc
Description: Digital signature


Re: Recurring disk activity

2015-06-23 Thread Bob Proulx
Mayuresh wrote:
 Bob Proulx wrote:
hdparm -B /dev/sda
 
 /dev/sda:
  APM_level  = 128

 man hdparm
   -B Get/set Advanced Power Management feature, if the drive supports
  it.  A  low  value  means aggressive power management and a high
  value means better performance.  Possible  settings  range  from
  values  1  through  127 (which permit spin-down), and values 128
  through 254 (which do not permit spin-down).  The highest degree
  of  power  management  is  attained with a setting of 1, and the
  highest I/O performance with a setting of 254.  A value  of  255
  tells  hdparm to disable Advanced Power Management altogether on
  the drive (not all drives support disabling it, but most do).

I admit I am not sure but I think 128 as above won't force a spindown
and therefore doesn't seem to be the cause of the head parking.  I
looked at a few of my systems and they either have 128 or off
depending upon the drive.  The ones with 128 have a moderate load
cycle count in the low hundreds.  Therefore I would normally say that
is okay.

But you said you saw it increasing on your system and it was in the
thousands.  Therefore I would be inclined to suggest that it should be
disabled on your system.

  # hdparm -B255 /dev/sda

Or if you like set power management to the least aggressive level
short of disabling it entirely.  Since it is a laptop I would tend to
use the 254 conservative figure.

  # hdparm -B254 /dev/sda

 If I can at least ascertain that the issue is with the drive, I can try
 and get it replaced in warranty. I am unsure since the laptop came with
 FreeDOS and when booting on FreeDOS there is no disk noise. So difficult
 to convince the retailer that there is an issue with the drive.

You only reported a load_cycle_count in the thousands.  Before this
problem was well known I had drives in the hundreds of thousands.
Some of those are still running fine.  I wouldn't worry about it.  But
I would prevent it from getting worse.

Note one crazy thing.  I have an SSD that reports a load cycle count
in the 335738 range.  Obviously there isn't any head to park and it is
simply reporting the number of times it would have parked if it were a
spinning hard drive with a head.  Anyone who looks at an SSD and,
depending upon the firmware, sees such a number might get spooked,
but it isn't a worry there.

  What does it indicate. It was 3755 when I started the system today and
  growing over a few minutes.
 
 I got what it indicates. It's scary...

At 3755 I wouldn't worry.  Drives are rated in the hundreds of
thousands.  Your drive is still young by the metrics.

 I noticed utilities to fix this problem in the wd drives. I checked my
 drive, it is:
 
   ata1.00: ATA-9: HGST HTS545050A7E680, GR2OA350, max UDMA/133
 
 So, not sure whether there exists a utility to fix its head parking
 problem. Also not clear, why I don't get that noise when booting to
 FreeDOS.

It depends upon many things.  Configuration and use patterns.  It may
not be a problem under FreeDOS.  In any case for this particular issue
it is easy to fix.

You already have hdparm installed.  There are multiple ways to
configure hdparm to set the -B option automatically.  The config file
/etc/default/hdparm is probably best for you.  This configuration will
set it for you at boot time.

  harddisks=/dev/sda
  hdparm_opts=-B254

As long as I am talking I will mention that I have a couple of drives
that require this.  But I didn't want to set it for all drives.
Therefore for me on my servers where I have automated configuration to
detect and configure these things I set up the following section in
the /etc/hdparm.conf file.

  /dev/disk/by-id/scsi-SATA_ST2000DM001-1CH_S1E14TKT {
apm = 254
  }

With that in place hdparm sets it only for that one specific disk
drive and serial number combination.  It doesn't get set for other
disks that don't need it.  Plus since I have dynamic system
configuration if I move the drive from one system to another my config
scripts move the configuration to follow the drive.  That is a whole
different topic but basically something like puppet, chef, and so forth.

 It is static and mostly off. I can keep it on if needed. It doesn't have
 to move around. But isn't there something I can run one time to know the
 issue.

I would be inclined to write a short script that is run from the
cron.daily run from anacron that checks if on_ac_power is true before
running and if so then 'smartctl -t short /dev/sda' so that a short
selftest is run daily.  Then do the same in cron.weekly so that weekly
a 'smartctl -t long /dev/sda' is run.  That way the internal disk
firmware selftests will be run periodically.  The anacron will ensure
that it gets run with the other housekeeping tasks.  The check through
on_ac_power will ensure that it doesn't happen when the system is off
mobile on battery.  Since I am not doing this myself I will leave that
as an exercise for the reader.  :-)
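
A minimal sketch of the daily half (assumes on_ac_power from the
powermgmt-base package; the weekly version would use -t long):

  #!/bin/sh
  # /etc/cron.daily/smart-selftest -- short SMART selftest, AC power only
  if on_ac_power; then
      smartctl -t short /dev/sda >/dev/null
  fi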

Bob

Re: network configuration

2015-06-23 Thread Bob Proulx
notoneofmyseeds wrote:
 one ethernet interface that you sometimes connect to one wired network and 
 sometimes to a different wired network?
 For now, this is a laptop that is located in one place.
 All networks are DHCP.

Those are good clarifications.  Let me mention a few problems to be
overcome with it.

DHCP is the Dynamic Host Configuration Protocol that configures a
network interface.  Among the things it configures is the default
gateway router.

  $ ip route show | grep ^default
  default via 93.184.216.34 dev eth0

Normally there will be one interface active at a time configured with
DHCP.  Therefore there will be one default route.  However if you have
several interfaces configured with DHCP all active at the same time
then each interface will configure a default route.  Let me write this
contrived example showing the problem.

  $ ip route show | tac | grep ^default
  default via 93.184.216.34 dev eth0
  default via 93.184.216.34 dev eth1
  default via 93.184.216.34 dev wlan0

Or maybe:

  $ ip route show | tac | grep ^default
  default via 93.184.216.34 dev eth1
  default via 93.184.216.34 dev eth0
  default via 93.184.216.34 dev wlan0

Or maybe:

  $ ip route show | tac | grep ^default
  default via 93.184.216.34 dev wlan0
  default via 93.184.216.34 dev eth1
  default via 93.184.216.34 dev eth0

The order of the default routes is important because the first one
matched is the one used.  The order is set by the last one
configured.  The last interface to have dhcp'd an address will set a
default route and it will be the first one matched.

Also note that until recently the kernel listed routes in top-down
order.  The first route matched as displayed from the top-down was the
one that matched.  This was true of *BSD and SystemV and others.
Newer Linux kernels since some version I forget have unfortunately
reversed this order.  Now the Linux kernel lists routes in bottom-up
priority order.  I think that is simply a bug but so it is.  I often
pipe the output of ip route show through 'tac' to reverse the order in
order to get a sane top-bottom ordering.  That is why I have tac in
the above.  To make it display in the Right order.

There is an old saying that goes, ... if you have to ask then ...
which applies here.  If you don't know and have to ask about default
routes then you should only ever have *one* default route on the
system.  There are only some few special cases where it would be
otherwise.  One default route normally makes the most sense.

Another question: When you are connected to both as you have done what
is the output of these commands so that we can see the (as you say
broken) state of things?

  ip addr show

  ip route show | tac



Another question.  Are all of the subnets on each of the networks
different?  A subnet is something like 192.168.1.0/24.  For example
having different subnets would mean 192.168.1.0/24 on one and
192.168.20.0/24 on another and 192.168.42.0/24 on the third.  All
different.

Along with the default route every IP address assigned will create a
route in the route table for that subnet.

  $ ip route show | tac
  192.168.230.0/24 dev eth0  proto kernel  scope link  src 192.168.230.120 
  default via 192.168.230.1 dev eth0 

Routes match top to bottom (in the sane ordering after reversing them
with tac to restore the order to the way other kernels report it) and
therefore addresses on the local subnet are matched before the default
route.  For example in the above an address 192.168.230.27 would match
on the local subnet and would be routed directly using the listed src
address of the host.  Addresses such as 8.8.8.8 would fall through all
of the routes until hitting the default route at the bottom and would
then be routed through the default route to the router and out to the
Internet.

A more complicated routing table could be this example.  Routes are
matched from the top down and the first one matched indicates which
interface the packet is routed through.

  $ ip route show | tac
  216.17.153.56/29 dev eth0  proto kernel  scope link  src 216.17.153.62 
  192.168.240.0/24 dev eth0  proto kernel  scope link  src 192.168.240.1 
  192.168.230.0/24 dev eth1  proto kernel  scope link  src 192.168.230.1 
  192.168.94.0/24 dev eth2  proto kernel  scope link  src 192.168.94.1 
  192.168.93.0/24 dev wlan0  proto kernel  scope link  src 192.168.93.1 
  172.27.61.2 dev tun0  proto kernel  scope link  src 172.27.61.1 
  172.27.61.0/24 via 172.27.61.2 dev tun0 
  default via 216.17.153.57 dev eth0 

Normally it is important to ensure that every interface has a
different subnet so that routing is sane.  A typical problem is that
people don't think about this and then set up the same subnet on
multiple different interfaces.  If that happens the result is order
dependent depending upon the order the interfaces were brought up.
Things usually don't work very well.



Now that you have learned this I have another question.  Which
interface do you wish to keep as the default route?

Re: network configuration

2015-06-22 Thread Bob Proulx
notoneofmy wrote:
 I have three separate networks, ip addresses, etc.
 One wireless and two hard wired, ethernet.
 The wireless must connect to the internet.
 As is necessary, I will need to physically swap the ethernet cables
 between networks.

You say networks.  Is that the same as ethernet interfaces?  You have
two ethernet interfaces?  Or you have one ethernet interface that you
sometimes connect to one wired network and sometimes to a different
wired network?

Is this a laptop that is carried mobile to various places?  Or is it a
laptop that always sits in one place and needs to connect to different
networks in that one place?  (I would suggest different things if it
is three fully specified local networks with static ip addresses
versus if it were carried mobile and must connect to random dhcp
networks too.)

On these networks are they DHCP or static IPs?

For the WiFi network are you needing to select them manually from a
desktop graphically with the mouse?  Or is this from a set of known
networks that you would preconfigure and then not change?  (Such as in
a home network environment but not mobile at airports or elsewhere.)

 And when I do this, hope to be connected to a respective network based
 on my explicit rules.

Have you looked at the guessnet package?  It might help you.

  http://guessnet.alioth.debian.org/

  https://packages.debian.org/jessie/guessnet

There are also whereami and other packages in this topic space too.

 I hope this can work. And I hope I have explained a bit more enough.
 Are there rules I must follow to do this properly. As I've tried before
 without success.
 Thanks for your help.

There are still many possibilities.  But if we go through more
questions and answers then it will eventually be communicated. :-)

So far we have the above and your previously written:

 I'm using a laptop and want the wifi to go online, but the ethernet to
 stay on a different lan networks, actually two different networks. And
 I worry how to proceed with such a configuration. Ideas?

Let me assume you want to keep NetworkManager (or better wicd)
managing your wifi interface but manually configure your eth0 and eth1
wired interfaces.  For that you would simply add a static or dhcp
configuration to /etc/network/interfaces for eth0 and eth1 but not
specify wlan0.  (Reboot after doing this so that NM won't grab eth0
and eth1.)  With that configuration you will manually select the WiFi
connection to the internet as you say.  As you connect the wired
networks they will use the explicit configuration in
/etc/network/interfaces for them.
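
A minimal sketch of such an /etc/network/interfaces, with wlan0
intentionally absent and the addresses as placeholders for your
networks:

  auto lo
  iface lo inet loopback

  allow-hotplug eth0
  iface eth0 inet static
      address 192.168.1.10
      netmask 255.255.255.0
      # no gateway line -- the default route stays on wlan0

  allow-hotplug eth1
  iface eth1 inet static
      address 172.16.1.10
      netmask 255.255.255.0
      # no gateway line here either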

If you have only one wired ethernet and are planning on connecting
back and forth between two wired networks then it is similar but I
would use guessnet to manage the interface so that it can
automatically switch you between the specified network profiles.

And so forth...

Bob


signature.asc
Description: Digital signature


Re: Recurring disk activity

2015-06-22 Thread Bob Proulx
Mayuresh wrote:
 Bob Proulx wrote:
  Is your Load_Cycle_Count continuously increasing?
 
 Doesn't look like. It was 3634 when I started watching and over last few
 minutes it changed only to 3635.

That still seems like a rather high load_cycle_count.  And if it is
increasing every minute then I would investigate the issue further.
What does this say?

  hdparm -B /dev/sda

  Install smartmontools.  I also think think you should set up regular
  drive selftests.  Ask if you want me to suggest something about this.
 
 Yes, please do suggest.

Is your laptop something that is mostly off but only sometimes on?  Or
something that is mostly on and sometimes mobile?  Or something
different?  Mobile devices are a little hard to schedule selftests
upon because we would want to do it sometime when the device is
otherwise idle but on AC mains power.

I don't know a perfect answer to mobile devices so let me start by
explaining the default configuration, then explaining my preferred
configuration for always on systems, then guessing at something good
for a mobile device.

Install the smartmontools package.  The default configuration
dynamically searches for disk drives.  If smart detects a failure it
will notify by sending email.  Here is the default config:

  # The word DEVICESCAN will cause any remaining lines in this
  # configuration file to be ignored: it tells smartd to scan for all
  # ATA and SCSI devices.  DEVICESCAN may be followed by any of the
  # Directives listed below, which will be applied to all devices that
  # are found.  Most users should comment out DEVICESCAN and explicitly
  # list the devices that they wish to monitor.
  DEVICESCAN -d removable -n standby -m root -M exec /usr/share/smartmontools/smartd-runner

The above is the default.  It is documented in the smartd.conf man
page.  The options listed are:

   -d TYPE Set the device type: ata, scsi, marvell, removable, 3ware,N, hpt,L/M/N
   -n MODE No check. MODE is one of: never, sleep, standby, idle
   -m ADD  Send warning email to ADD for -H, -l error, -l selftest, and -f
   -M TYPE Modify email warning behavior (see man page)

The reason it recommends removing DEVICESCAN and replacing it with an
explicit configuration is for systems with multiple disk drives.  A
server with two mirrored RAID1 disks might have one disk fail
completely.  If using DEVICESCAN it will only detect one disk and
won't know there should be a second one.  By explicitly telling it
that there should be two disks it can report the failure on the
missing one.

The default config is a good safe default in that it is installable on
any system and provides something.  Unfortunately they never run any
selftests.  Therefore on an always on server I change the
configuration to be this:

  # Monitor all attributes, enable automatic offline data collection,
  # automatic attribute autosave, and start a short self-test every
  # weekday between 3-4am, and a long self-test Saturdays between 3-4am.
  # Ignore attribute 194 temperature change.
  # Ignore attribute 190 airflow temperature change.
  # On failure run all installed scripts (to send notification email).
  /dev/sda -a -o on -S on -s (S/../../[1-5]/03|L/../../6/03) -I 194 -I 190 -m root -M exec /usr/share/smartmontools/smartd-runner
  /dev/sdb -a -o on -S on -s (S/../../[1-5]/03|L/../../6/03) -I 194 -I 190 -m root -M exec /usr/share/smartmontools/smartd-runner

And for those options, all of these are on:

   -a  Default: equivalent to -H -f -t -l error -l selftest -C 197 -U 198
   -f  Monitor for failure of any 'Usage' Attributes
   -H  Monitor SMART Health Status, report if failed
   -t  Equivalent to -p and -u Directives
   -p  Report changes in 'Prefailure' Normalized Attributes
   -u  Report changes in 'Usage' Normalized Attributes
   -o VAL  Enable/disable automatic offline tests (on/off)
   -S VAL  Enable/disable attribute autosave (on/off)
   -I ID   Ignore Attribute ID for -p, -u or -t Directive

Those should be relatively straightforward.  Basically all of the
above is monitor important things and ignore unimportant things.

   -s REGEXP Start self-test when type/date matches regular expression (see man page)

That one is a mouthful, and it is where my comments come in to
help.  With the man page documentation the
  -s (S/../../[1-5]/03|L/../../6/03)
option becomes this:

  start a short self-test every weekday between 3-4am,
  and a long self-test Saturdays between 3-4am.
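
As a reading aid, each entry has the form T/MM/DD/d/HH: test type (S
short, L long), month, day of month, day of week (1=Monday), and hour
of day.  So:

  S/../../[1-5]/03    short test, any month, any day, Mon-Fri, 3am hour
  L/../../6/03        long test, any month, any day, Saturday, 3am hour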

That is the part that runs the selftests.  Without -s it doesn't.  The
example file has examples that do almost exactly this.  But those
examples are commented out.

For a server I like the above configuration where the selftests are
run periodically.  But for a mobile laptop this is more difficult
because depending upon the user it might not be powered up at all
during that time.  Although if it is then it is likely to be on AC
mains power

Re: Recurring disk activity

2015-06-21 Thread Bob Proulx
Mayuresh wrote:
 I am a new Debian user.

Welcome! :-)

 On a freshly purchased laptop I installed Debian jessie and I am facing a
 constant recurring disk noise.

Is your Load_Cycle_Count continuously increasing?  You mention a
laptop and at one time there was a big problem with disk drives
continuously parking the drive head.  It has been a few years since I
have heard of this problem.

  # apt-get install smartmontools

  # smartctl -a /dev/sda | grep Load_Cycle_Count
  # smartctl -a /dev/sda | grep Hours

Look for parameter 193 Load_Cycle_Count and see if the raw value of it
is continuously increasing.

 # hdparm -I /dev/sda | grep level
 Advanced power management level: disabled

I think that should disable aggressive power save management but I
still think it is necessary to look directly at the load cycle count
to know for certain.

 smart, hal : not installed

Install smartmontools.  I also think you should set up regular
drive selftests.  Ask if you want me to suggest something about this.

Bob


signature.asc
Description: Digital signature


Re: NFS on Raspberry Pi high load

2015-06-21 Thread Bob Proulx
Sven Hartge wrote:
 Reco wrote:
  Sven Hartge wrote:
  Maybe the USB hardware implementation is better in the N900? The one
  in the Pi is quite bad and finicky.

I am coming to this discussion late but I had to confirm that the USB
chip in the Raspberry Pi is very limiting.  It has a maximum bandwidth
of around 50Mbps and everything including ethernet goes through it.
This means that a single data stream will get a maximum of 50Mbps.

If you have two streams, such as if using the Raspberry Pi for a
router and it is routing packets in one interface and out a different
one, then the maximum throughput is 25Mbps with one stream in and one
stream out.  I have a good friend who has worked on the drivers for
the pi and he told me that the usb chip generated a minimum of 2000
interrupts per second even at idle.  I will pass that along as hearsay
because it seems plausible.

For example this USB limitation makes the Raspberry Pi acceptable for
an 802.11b WiFi access point at 11Mbps but not able to keep up with
'g' or 'n' speeds.  It is simply hardware limited.

If you have 8 nfsds running and let's say each of them try to use only
one data stream then each will get only 6.25Mbps maximum.  They will
spend a lot of time in the run queue blocked on I/O waiting for the
network to respond.

basti wrote:
 Per default nfs starts with 8 servers
 root@raspberrypi:~# head -n 2 /etc/default/nfs-kernel-server
 # Number of servers to start up
 RPCNFSDCOUNT=8

As you have found doing any type of real transfer will immediately
consume 8 processes because each daemon will be in the run queue
ready to run but waiting for I/O.  The biggest problem is that each
daemon will consume *memory*.  The daemon won't consume cpu while it
is blocked waiting on I/O.  But it will consume memory.  Memory for
the nfs daemons.  Memory for the kernel to track the multiple network
streams active.  Memory for file system buffer cache.  Everything
takes memory.  The Raspberry Pi has a limited 512M of ram.

I like seeing the bar graph of the memory visualization from htop.  I
suggest installing htop and looking at the memory bar graph displaying
the amount of consumed memory and the amount available for cache.

 So I try to transfer a 3GB file from the raspberry to my laptop via
 WLAN(n).  This operation kills my raspberry.  I get a load of 12 and
 more. 10 Minutes after I interrupt this operation the load was still
 at 10.

In addition to the 8 processes consuming memory from the 8 nfsds there
will need to be additional cpu to deal with the driver for the usb
chip.  It will need to handle the accounting for the multiple network
streams.  A single stream will take less resources than 8 streams.
And anything else that happens along.  That extra accounts for the
load of 10 you are seeing.

But the real problem is probably the lack of memory.  The many
processes stacked up and the I/O buffers will likely have consumed
everything.

 So I deside to reduce the number of servers to 2. Now it's a bit
 better, the load is only around 5.

That was a good configuration modification.  About the best you can do.
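
For anyone following along, that change is the RPCNFSDCOUNT=2 line in
/etc/default/nfs-kernel-server followed by:

  service nfs-kernel-server restart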

 Can somebody reproduce this behavior?

Yes.  Easily!  It is simply a natural consequence of the limited
hardware of the Raspberry Pi.

I have become a fan of the newer Banana Pi.  It is very Raspberry-like
but has a different SoC and doesn't have that very limited 50Mbps usb
chip found on the Raspberry.  On the Banana Pi there is 1G of ram,
twice that of the Raspberry.  It is a dual core arm, again twice the
Raspberry.  It is an armv7l architecture and therefore runs stock
Debian.  And best yet for your purposes it has much higher speed I/O.

On the Banana Pi I can routinely get 750Mbps through a single ethernet
connection.  That is about the same performance as an Intel Atom D525.
The Banana Pi makes a much better practical machine than the Raspberry.
The price of the Banana is currently running around US $42, only $7
more than the Raspberry.  It is a much more capable machine.

I don't know about the new Raspberry quad core.  Does it have the same
limited usb chip as the original?

Bob


signature.asc
Description: Digital signature


Re: network configuration

2015-06-21 Thread Bob Proulx
notoneofmyseeds wrote:
 do I need to remove/purge NetworkManager Applet 0.9.10.0 to manually
 configure my interfaces?

It is not necessary.  However I recommend doing so anyway.

It is not necessary because NetworkManager and wicd ignore any
interface with a configuration in /etc/network/interfaces.  If they
determine that there is no explicit configuration for an interface
then they assume control of it.  If you create an explicit
configuration they will not control it.

However there is a little bit of a chicken-and-egg startup problem.
Initially they will control the interface and there won't be an
explicit configuration for it.  You need to shut them down before
adding an explicit configuration.  Not doing this can cause problems
where NM is *still* controlling it from before, along with the
explicit configuration trying to control it.  Of course rebooting
after setting up an explicit configuration should reset everything.

I still recommend removing NetworkManager because I have too many
times had NM break the network on upgrades.  Especially when
connecting to the system remotely with ssh it is unacceptable to
have NM break the network connection.

For a mobile device and graphical interactive control I use and
recommend wicd.

 I'm using a laptop and want the wifi to go online, but the ethernet to stay
 on a different lan networks, actually two different networks. And I worry
 how to proceed with such a configuration. Ideas?

This description is ambiguous.  Please say more?

Bob


signature.asc
Description: Digital signature


Re: PHP-FPM socket disappearing

2015-06-20 Thread Bob Proulx
Proxy One wrote:
  Why use /dev/shm/user-php.sock as the socket path?  The Jessie-style
  location would be in /var/run/user-php.sock AFAICS.  (I don't see how
  that would be related to your socket disappearing.)
 
 I used that path on Centos machines and it worked. I saw that Debian
 uses different default path, but figured it should work either way. I
 also used /dev/shm so it's created in RAM (tmpfs). I figured it would be
 quicker this way.

On Debian (and I must say Jessie too to bracket the time since this
has been changing in recent history) /run is a tmpfs just like
/dev/shm and /var/run is a symlink that points to /run.  Using
/var/run for this will be just as fast as /dev/shm.  AFAIK /dev/shm is
the domain for use by libc.

It has also become very popular to use /tmp as a tmpfs too.  However
that conflicts with various historical usage.  Therefore whether to
use /tmp as a tmpfs is really a local system configuration choice
rather than a blanket recommendation.

Bob


signature.asc
Description: Digital signature


Re: Problems with 32 bit Jessie and Mate DE

2015-06-17 Thread Bob Proulx
Curt wrote:
 Richard Owlett wrote:
  For example, to set the time and date to 15:00 on 1st February 2014:
  sudo date 020115002014
 
  That worked - it took effect after a reboot.
 
 That's strange; I always thought you had to set the hardware clock
 (hwclock) for the modified date and time to survive a reboot.

After a reboot I am sure the boot time hwclock set the time.  The
system time set by date evaporates when the system shuts down.  System
time is not itself preserved across reboots.  But the script
/etc/init.d/hwclock.sh saves the system time to the hardware clock at
shutdown and sets the system time from the hardware clock at boot.
That is why the new date appeared to survive the reboot.
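
To push the system time into the hardware clock by hand, without
waiting for a shutdown, hwclock can do it directly:

  # hwclock --systohc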

Best is to install ntp and have it set the time from the network at
boot time.

  apt-get install ntp

If you don't have a network then of course ntp can't work.  For
systems such as the Raspberry Pi that don't have a hardware clock the
time is set to a best guess based upon the most recent timestamp on a
statefile in the file system to keep time moving forward.

Bob


signature.asc
Description: Digital signature


Re: Screen goes black after install

2015-06-17 Thread Bob Proulx
Mark Allums wrote:
 Dwijesh Gajadur wrote:
  I recently bought and installed a PCI ATI Rage 128 Video card on my pc.
 ...
  And then the screen goes black..nothing appears.
  I have tested the video card on windows and it works well.
 
  I also want to run debian on non-graphical mode..I did not install any
  desktop environment.I want to run it on command line as a server.
 
 Press Ctrl-Alt-F1 and see if a login prompt appears.  If it does, the
 machine is trying to start X and failing.  I would try to get X running, but
 if you don't want X, you should probably check and see if a DM (such as
 lightdm) is installed, then go from there.

If no X was installed then Ctrl-Alt-F1 won't have anything to exit.

Since no desktop was installed the going black is probably the
kernel's bitmapped framebuffer at boot.  Try booting with nomodeset on
the kernel command line and see if that improves things.  To do that
interact with the grub boot loader and add that to the boot command
line.
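
At the grub menu that means pressing 'e', appending nomodeset to the
line that starts with linux, and booting with Ctrl-X.  To make it
permanent edit /etc/default/grub and run update-grub:

  GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"

  # update-grub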

Newer Linux kernels now set the graphics modes very early in the boot
process.  This is something that used to happen with X starting but
now happens by the kernel at boot time even without X.  It enables the
kernel to present boot splash screens and allows switching between X
screens and kernel screens without changing video modes.  And when the
graphics hardware is not supported it tends to break things.  Using
nomodeset instructs the kernel not to do this and to use the BIOS
modes instead.  Note that nomodeset breaks X because X relies upon the
kernel mode setting.  But it should get your text console working
again.  Hopefully.  Give it a try and see.

Bob


signature.asc
Description: Digital signature


Re: PHP-FPM socket disappearing

2015-06-17 Thread Bob Proulx
Proxy One wrote:
 I installed Jessie on my new server a few days ago and moved a website
 that ran previously on Centos 5.

Welcome! :-)

 I'm using Apache and PHP-FPM.

I have become an Nginx + php5-fpm advocate in recent years.  If you
decide you would like to give it a try post something and I will show
my configurations for it.  (Not that I am a master of it by any means.
Just a daily user of it.)

 Currently, there is only one website on that server, but I still
 configured pool for it. What's happening is that, after some time,
 at least once a day, socket that should be listening for that user
 disappears, ...
...
 Looking at /dev/shm/ directory, user-php.sock is really missing, but
 default php5-fpm.sock is still there. 

Odd!

 I have this block in VirtualHost section for that website:
 <IfModule mod_fastcgi.c>
 Alias /php5-fcgi /dev/shm/pdfconve-php.fcgi
 </IfModule>
...
 And there is also
 FastCGIExternalServer /dev/shm/user-php.fcgi -socket /dev/shm/user-php.sock -pass-header Authorization -flush -appConnTimeout 2 -idle-timeout 60
 in apache conf.

Hmm...  Is that right?  It is working for you so it must be.  And
since the disappearing fpm socket shouldn't be related I am going to
ignore my ignorance of the above here.

 Pool configured:
 
 [user]
 listen = /dev/shm/user-php.sock
 listen.owner = user
 listen.group = www-data
 listen.mode = 0660
 user = user
 group = user
 pm = dynamic
 pm.max_children = 40
 pm.max_requests = 2048
 pm.start_servers = 30
 pm.min_spare_servers = 10
 pm.max_spare_servers = 35
 request_terminate_timeout = 305

Why use /dev/shm/user-php.sock as the socket path?  The Jessie-style
location would be in /var/run/user-php.sock AFAICS.  (I don't see how
that would be related to your socket disappearing.)

 There is nothing interesting in php-fpm error log. There are some errors like 
 this:
 NOTICE: [pool user] child 32719 exited with code 0 after 76324.921427 seconds 
 from start
 
 followed by this line:
  NOTICE: [pool user] child 29909 started
 
 but I don't think that's relevant. 

Is that just hitting your max_requests limit and exiting?  In which
case that would be normal operation.

 Any idea why this is happening and how to prevent it?

I am running php5-fpm on several servers.  I as a matter of course set
up my own custom pool configurations with a different socket name.
But always in /var/run.  I have not had any problems with the
socket disappearing.  One is on Jessie 8, one on Sid, the rest on
Wheezy 7.  I haven't seen any endemic problem associated with php5-fpm.

The first thing I would try is to move the socket location out of
/dev/shm (which is symlinked to /run/shm in Jessie 8) and see if the
behavior stops.  If so then it must be related to that location.  If
not then it rules that out as a suspect.

I would tend to also set up a monitor that would run very often, once
every minute wouldn't be too often, to check for the presence of the
socket file.  If it disappears then have it notify me immediately so
I could look to see what else happened around that same time period.
Because if it is getting removed I would think that something must be
removing it and that something will hopefully leave an audit trail.
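
A minimal sketch of such a monitor as an /etc/cron.d entry (the
socket path and log message are of course yours to adjust):

  * * * * * root test -S /dev/shm/user-php.sock || logger -p user.err "user-php.sock missing"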

Bob


signature.asc
Description: Digital signature


Re: a quick fstab questions

2015-06-17 Thread Bob Proulx
notoneofmyseeds wrote:
 Bob, thanks for your very informed response, from which I'm learning a lot.

Happy to help.

 Bob Proulx wrote:
  You haven't really included enough information in the thread yet.  You
  are mounting an ext4 file system over a USB disk.  What, in general,
  is on the disk?  Is only your own data?  Is it the root of an
  operating system?  This is rather important.  If I suggest a recursive
  chmod it will destroy an operating system.  But if it is only your own
  data files, an external $HOME backup for example, then that is fine.

 Here's a copy from the initial post that says what I'm trying to do. And to

I read the original posting.  But that still didn't say what was on
the disk you were trying to mount.  That is okay but when we on the
mailing list work with partial information the quality of the answers
is also only partial.  :-(

 be more specific: on the disk is just data. It's not the root of an OS. Now

Great!  That is what was needed.  Just a data disk of yours with only
your own files on it.

 And after putting this line in fstab, it still won't work:
 /dev/sdb2: LABEL=300gb UUID=02db0acc-97b3-40b8-b9a1-f6d5fee78e1f ext4 -o
 uid=1000,gid=1000,utf8$

The uid and gid are FAT32 mount options not ext4 mount options.  Read
the man page and see the FILESYSTEM-SPECIFIC MOUNT OPTIONS sections
for each of the different file system types.

  man 8 mount

Since you are using ext4 you can only use options from the "Mount
options for ext4" section plus the FILESYSTEM-INDEPENDENT MOUNT
OPTIONS section.

 My current fstab concerning the line: /dev/sdb2 /media/a3a3/300gb ext4
 rw,user,noauto 0 0

That looks okay to me.  Assuming /media/a3a3 already exists and /media
has reasonable permissions.  (Permissions on the mount point will be
overlaid by the permissions of the root of the mounted device.)

 Thanks for your input. I was about to try this last chown command but
 thought I'd wait and see if I'm on the right track on that.
 
 Currently this command rsync -av /mnt/media/a3a3/300gb/
 /media/a3a3/duopro/New/
 
 Gives this: sent 20 bytes  received 12 bytes  64.00 bytes/sec
 total size is 0  speedup is 0.00
 rsync error: some files/attrs were not transferred (see previous errors)
 (code 23) at main.c(1183) [sender=3.1.1]
 
 None of the fixes online have solved the code 23 error.

code 23 is bad!  There should have been error messages printed
concerning the files that were unable to be copied.

  man rsync

   23 Partial transfer due to error

You are having device errors!  Check your syslog for kernel messages.
I suspect you will see I/O errors logged there.

  less /var/log/syslog

However usb enclosures are notoriously cheap and flakey.  Don't panic
yet.  Instead verify everything.  Make sure you have a good USB port.
I have seen different USB ports on the same computer behave
differently.  One will work okay and another will fail.

Unmount and try using a different USB port.  Or try mounting and
checking on a different computer system.

In more desperate cases I have removed the internal disk from a
failing enclosure and mounted it directly with SATA and IDE cables.
The disk is often fine.  More often it is the USB enclosure that has
failed.  Accessing the drive directly over SATA / IDE eliminates the
USB enclosure electronics and for me has more than once completely solved
the problem.

Check the disk SMART error log.  Here are some hints.

  smartctl -l error /dev/sdb | less

  smartctl -a /dev/sdb | less

  smartctl -t short /dev/sdb
  sleep 120 && smartctl -l selftest /dev/sdb

  smartctl -t long /dev/sdb
  sleep $((60*20)) && smartctl -l selftest /dev/sdb

  smartctl -a /dev/sdb

If it really looks like a disk problem then copy as much as you can
off the disk.  For deeper rescue contemplate using ddrescue to extract
as much raw data from the drive as possible.  More information about
ddrescue is available if needed.  If the drive is failing then I would
copy the raw drive and then fsck the new copy.  Or if you have rsync
copied as many files as practical already then you already have as
much as you will get and stop there.  Working at the file system level
with rsync is much easier than trying to rescue raw bits of a file
system.

Good luck!

Bob


signature.asc
Description: Digital signature


Re: I upgraded from jessie to stretch and messed up I used the dist-upgrade command....

2015-06-17 Thread Bob Proulx
Michael Fothergill wrote:
 I looked at this file - it is quite big; I need to think what files to
 search for within it
 
 I have made a pastebinit link to the dpkg.log file:
 
 http://paste.ubuntu.com/11731677/

Looking only at entries related to the xserver and filtering out
status and configure lines I see this much smaller summary:

  2015-06-05 17:45:33 upgrade xserver-xorg-video-all:amd64 1:7.7+7 1:7.7+9
  2015-06-05 17:45:35 upgrade xserver-xorg-input-evdev:amd64 1:2.9.0-2 1:2.9.2-1
  2015-06-05 17:45:35 upgrade xserver-xorg-input-vmmouse:amd64 1:13.0.0-1+b3 1:13.0.99-1
  2015-06-05 17:45:36 upgrade xserver-xorg-input-synaptics:amd64 1.8.1-1 1.8.2-1
  2015-06-05 17:45:36 upgrade xserver-xorg-input-all:amd64 1:7.7+7 1:7.7+9
  2015-06-05 17:45:38 upgrade xserver-xorg:amd64 1:7.7+7 1:7.7+9
  2015-06-05 17:45:50 upgrade x11-xserver-utils:amd64 7.7+3+b1 7.7+4
  2015-06-05 17:49:29 upgrade xserver-common:all 2:1.16.4-1 2:1.17.1-2
  2015-06-05 17:56:40 upgrade xserver-xorg-video-r128:amd64 6.9.2-1+b2 6.10.0-1
  2015-06-05 17:56:40 upgrade xserver-xorg-video-mach64:amd64 6.9.4-2 6.9.5-1
  2015-06-05 17:56:41 upgrade xserver-xorg-video-radeon:amd64 1:7.5.0-1 1:7.5.0-1+b1
  2015-06-05 17:56:41 upgrade xserver-xorg-video-ati:amd64 1:7.5.0-1 1:7.5.0-1+b1
  2015-06-05 17:56:42 upgrade xserver-xorg-video-tdfx:amd64 1:1.4.5-1+b2 1:1.4.6-1+b1
  2015-06-05 17:56:42 upgrade xserver-xorg-video-cirrus:amd64 1:1.5.2-2+b1 1:1.5.3-1
  2015-06-05 17:56:43 upgrade xserver-xorg-video-vesa:amd64 1:2.3.3-1+b3 1:2.3.3-1+b4
  2015-06-05 17:56:46 remove xserver-xorg-video-siliconmotion:amd64 1:1.7.7-2+b2 none
  2015-06-05 17:56:47 upgrade xserver-xorg-video-trident:amd64 1:1.3.6-2+b2 1:1.3.7-1+b1
  2015-06-05 17:56:47 upgrade xserver-xorg-video-vmware:amd64 1:13.0.2-3.1 1:13.1.0-2
  2015-06-05 17:56:48 upgrade xserver-xorg-video-neomagic:amd64 1:1.2.8-1+b2 1:1.2.9-1
  2015-06-05 17:56:48 upgrade xserver-xorg-video-openchrome:amd64 1:0.3.3-1+b2 1:0.3.3-1+b3
  2015-06-05 17:56:49 upgrade xserver-xorg-video-mga:amd64 1:1.6.3-2+b1 1:1.6.4-1+b1
  2015-06-05 17:56:49 upgrade xserver-xorg-video-savage:amd64 1:2.3.7-2+b2 1:2.3.8-1
  2015-06-05 17:56:50 upgrade xserver-xorg-video-fbdev:amd64 1:0.4.4-1+b2 1:0.4.4-1+b3
  2015-06-05 17:56:51 upgrade xserver-xorg-video-nouveau:amd64 1:1.0.11-1 1:1.0.11-1+b1
  2015-06-05 17:56:51 upgrade xserver-xorg-video-qxl:amd64 0.1.1-2+b1 0.1.4-1
  2015-06-05 17:56:52 upgrade xserver-xorg-video-intel:amd64 2:2.21.15-2+b2 2:2.99.917-1
  2015-06-05 17:56:52 upgrade xserver-xorg-video-sisusb:amd64 1:0.9.6-2+b2 1:0.9.6-2+b3
  2015-06-05 17:56:54 remove xserver-xorg-video-modesetting:amd64 0.9.0-1+b1 none
  2015-06-05 17:56:55 upgrade xserver-xorg-core:amd64 2:1.16.4-1 2:1.17.1-2
  2015-06-08 20:44:32 upgrade xserver-xorg:amd64 1:7.7+9 1:7.7+9
  2015-06-16 18:10:25 upgrade xserver-xorg:amd64 1:7.7+9 1:7.7+9

The only two removals as suggested by your description:

 I changed from jessie to stretch in my sources.list file and then did
 aptitude safe-upgrade.
 
 It worked OK or so it seemed.
 
 But then I decided to try aptitude dist-upgrade afterward.

The only two removals are xserver-xorg-video-modesetting and
xserver-xorg-video-siliconmotion.  I wouldn't expect either of those
to negatively affect your radeon driver.  I think it must be something
else.  I know not what.  I am at the limit of my information here.
Sorry.

I don't recall you saying what graphics hardware you have but I think
it is a radeon?  What is your lspci entry for it?

  lspci | grep VGA

Bob


signature.asc
Description: Digital signature


Re: Debian don't detect blank (empty) DVDs

2015-06-17 Thread Bob Proulx
Markos wrote:
 My Debian Squeeze can read burned DVDs but doesn't detect blank DVDs, so I
 can't burn iso images.
 Any suggestion?

When I ran into that very same problem the problem was the new media I
bought.  There are only a very few manufacturers of blank media.  Be
that as it may, at least with cd-r disks the laser parameters are
mapped from the media manufacturer through known tables.  If I used
older media that I had that worked previously then everything worked
fine.  But if I tried to use the new media then I couldn't write them.

The only solution I know is to upgrade the dvd writer.  But first you
might try any older media that you still have hanging around.  If
older media works but newer media does not then you likely have the
same problem.

If there was a way to upgrade the firmware of your dvd writer that
would probably improve the situation too.

Bob




Re: a quick fstab questions

2015-06-16 Thread Bob Proulx
notoneofmyseeds wrote:
 Reco wrote:
   Any other ideas please, this is driving me nuts!!!
 
  This is something to be expected. You're using ext4 filesystem, which
  carefully preserves files (directories, etc) permissions.
  So you can mount the filesystem as a user, but it's not enough by itself.
  It's the filesystem's permissions that prevent you from viewing its
  contents.
 
  Recommended solution to the problem includes chown(1) and chmod(1).

 Thanks a lot, at the risk of losing data, I ask; how can I safely chown or
 chmod?

You haven't really included enough information in the thread yet.  You
are mounting an ext4 file system over a USB disk.  What, in general,
is on the disk?  Is it only your own data?  Is it the root of an
operating system?  This is rather important.  If I suggest a recursive
chmod it will destroy an operating system.  But if it is only your own
data files, an external $HOME backup for example, then that is fine.

Since ext4 is a full Unix-like file system all files are owned by
users and groups.  The file system never stores names.  Everything is
stored by number.  The first user on a Debian system is usually 1000
by number.  You can tell this by looking at the output of the 'id'
command and by listing files in your home with 'ls -l -n'.
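
For illustration, on a hypothetical system where the first user is
named user1, the numbers show up like this:

  $ id
  uid=1000(user1) gid=1000(user1) groups=1000(user1)
  $ ls -l -n somefile
  -rw-r--r-- 1 1000 1000 42 Jun 16 12:00 somefile

The -n option prints the raw uid and gid numbers instead of mapping
them to names through /etc/passwd.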

If you are the second user then your uid is usually 1001 and the next
1002 and so forth.  When transporting the drive from one system to
another, the other system might have different uid numbers.  Let's
say you were 1001 on one system but 1000 on a
different system.  The numbers would not match.  You as uid=1000 would
not have permission to modify uid=1001 files.

Of course root on the system mounting that disk would have
permissions.  I tend to never modify backup files.  I tend to always
mount backup copies read-only to prevent accidents to the backup.  If
the backups had uid=1002 and I needed to recover them to 1001 then I
would copy the files off backup without modifying them.  Then in the
recovery area I would modify the new copies to be the right user.
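
A sketch of that read-only mount, with a hypothetical device name:

  # mount -o ro /dev/sdb1 /mnt

With the backup mounted read-only nothing on it can be modified by
accident.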

Tools such as cp simply copy files and permissions from the source to
the destination.  Tools such as rsync are fancy and try to map uids.
Both tools have options to control this behavior.

  $ cp --help | less
   --preserve[=ATTR_LIST]
  preserve the specified attributes (default: mode,ownership,time-
  stamps), if  possible  additional  attributes:  context, links,
  xattr, all
  --no-preserve=ATTR_LIST  don't preserve the specified attributes

  $ man rsync
  --numeric-ids   don't map uid/gid values by user/group name

Additionally rsync has many options for controlling this and various
behavior.  See --usermap=STRING, --groupmap=STRING for example.
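
For example, a sketch of remapping a hypothetical backup uid and gid
of 1002 to the local 1001 during the copy:

  # rsync -av --usermap=1002:1001 --groupmap=1002:1001 /mnt/path/to/backup /home/foo/restore/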

Let's say I ignore those options for the moment.  Let's say I copy
those files from backup to recover them.

  # rsync -av /mnt/path/to/backup /home/foo/restore/

That will probably leave the uids:gids owning the files incorrect.
They will be as on the source, mapped through the local /etc/passwd
file.  After restoring them they will need to be corrected.

  # chown -R user1:user1 /home/foo/restore

That will recurse (-R) down the /home/foo/restore directory tree and
change the ownership and group of everything to user1:user1.

Since I don't know what is on your usb disk it is hard to give
specific advice.  I can only say the above in general terms.  Modify
as needed for your situation.

 This is sad and frustrating. I'm sweating.

Don't sweat it!  :-)

Bob




Re: I upgraded from jessie to stretch and messed up I used the dist-upgrade command....

2015-06-16 Thread Bob Proulx
Bob Proulx wrote:
 Check /var/log/dpkg.log for a trace of what was installed previously
 and what was removed and what was recently installed.  I think it
 likely your xserver driver was removed due to a conflict.  Figure out
 what had been running and install it again.

In order to reduce the detail in the dpkg.log file something like the
following command can summarize the actions.

  awk '$3=="upgrade"' /var/log/dpkg.log | less

In awk $3 is the third field.  If the third field is the string
"upgrade" then awk performs the default action, which is to print
the entire line.

These will be listed with the time at the front of the line.  This
makes it relatively easy to see what changes happened at what time.
On my Sid system today I see:

  $ awk '$3=="upgrade"||$3=="install"' /var/log/dpkg.log | less
  ...
  2015-06-16 11:23:41 upgrade libdebconfclient0:amd64 0.193 0.194
  2015-06-16 11:23:44 upgrade libustr-1.0-1:amd64 1.0.4-4 1.0.4-5
  2015-06-16 11:23:45 upgrade chromium:amd64 43.0.2357.65-1 43.0.2357.81-1
  2015-06-16 11:23:57 upgrade clusterssh:all 4.03.06-1 4.03.06-2
  2015-06-16 11:23:57 upgrade file-roller:amd64 3.16.2-1 3.16.2-2
  2015-06-16 11:24:04 upgrade gedit:amd64 3.14.0-3 3.16.2-1
  2015-06-16 11:24:06 upgrade gedit-common:all 3.14.0-3 3.16.2-1
  2015-06-16 11:24:12 upgrade help2man:amd64 1.46.6 1.47.1
  2015-06-16 11:24:13 upgrade libdata-dump-perl:all 1.22-1 1.23-1
  2015-06-16 11:24:13 upgrade libgtop2-common:all 2.28.5-2 2.30.0-3
  2015-06-16 11:29:30 upgrade metacity:amd64 1:3.14.3-1 1:3.17.2-3
  2015-06-16 11:29:32 upgrade metacity-common:all 1:3.14.3-1 1:3.17.2-3

  $ awk '{print$3}' /var/log/dpkg.log | sort -u
  configure
  install
  purge
  remove
  startup
  status
  trigproc
  upgrade

Here I am printing the 3rd field and sorting them uniquely to see what
possible strings might be in that field.  I guess we should look for
remove and purge too.  So it isn't quite so simple.  As opposed to
overthinking this I will just brute force ahead and look for all of
those.

  $ awk '$3=="upgrade"||$3=="install"||$3=="remove"||$3=="purge"' /var/log/dpkg.log | less

Not sure how much help this will be.  But it is the type of thing I do
when I am looking to see what has changed recently.

Bob




Re: I upgraded from jessie to stretch and messed up I used the dist-upgrade command....

2015-06-16 Thread Bob Proulx
Michael Fothergill wrote:
 (WW) warning, (EE) error, (NI) not implemented, (??) unknown.

Note that it will log errors with EE at the front.  This is where the
errors start:

 [15.104] (II) [KMS] drm report modesetting isn't supported.
 [15.104] (EE) open /dev/dri/card0: No such file or directory
 [15.104] (WW) Falling back to old probe method for modesetting
 [15.104] (EE) open /dev/dri/card0: No such file or directory
 [15.104] (II) Loading sub module fbdevhw
 [15.104] (II) LoadModule: fbdevhw
 [15.105] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so
 [15.105] (II) Module fbdevhw: vendor=X.Org Foundation
 [15.105] compiled for 1.17.1, module version = 0.0.2
 [15.105] ABI class: X.Org Video Driver, version 19.0
 [15.105] (EE) open /dev/fb0: No such file or directory
 [15.105] (WW) Falling back to old probe method for fbdev
 [15.105] (II) Loading sub module fbdevhw
 [15.105] (II) LoadModule: fbdevhw
 [15.105] (II) Loading /usr/lib/xorg/modules/libfbdevhw.so
 [15.105] (II) Module fbdevhw: vendor=X.Org Foundation
 [15.105] compiled for 1.17.1, module version = 0.0.2
 [15.105] ABI class: X.Org Video Driver, version 19.0
 [15.105] (EE) open /dev/fb0: No such file or directory
 [15.105] (EE) Screen 0 deleted because of no matching config section.
 [15.105] (II) UnloadModule: radeon
 [15.105] (EE) Screen 0 deleted because of no matching config section.
 [15.105] (II) UnloadModule: modesetting
 [15.105] (EE) Screen 0 deleted because of no matching config section.
 [15.105] (II) UnloadModule: fbdev
 [15.105] (II) UnloadSubModule: fbdevhw
 [15.105] (II) Loading sub module vbe
 [15.105] (II) LoadModule: vbe
 [15.105] (II) Loading /usr/lib/xorg/modules/libvbe.so
 [15.105] (II) Module vbe: vendor=X.Org Foundation
 [15.105] compiled for 1.17.1, module version = 1.1.0
 [15.105] ABI class: X.Org Video Driver, version 19.0
 [15.105] (II) Loading sub module int10
 [15.105] (II) LoadModule: int10
 [15.105] (II) Loading /usr/lib/xorg/modules/libint10.so
 [15.105] (II) Module int10: vendor=X.Org Foundation
 [15.105] compiled for 1.17.1, module version = 1.0.0
 [15.105] ABI class: X.Org Video Driver, version 19.0
 [15.105] (II) VESA(0): initializing int10
 [15.105] (EE) VESA(0): Cannot read int vect
 [15.105] (II) UnloadModule: vesa
 [15.105] (II) UnloadSubModule: int10
 [15.105] (II) Unloading int10
 [15.105] (II) UnloadSubModule: vbe
 [15.105] (II) Unloading vbe
 [15.105] (EE) Screen(s) found, but none have a usable configuration.

Screen(s) found, but none have a usable configuration.  I have many
times seen and dreaded that message.

I can't decode the above into the root cause of the problem.
Hopefully someone else will be able to do so.  Does anyone else on
this list have any hints here?

I fear the problem is the newer Linux KMS and DRM interfaces which
obsoleted a lot of hardware.  I have been hit by that problem myself.
Whereas older kernels worked perfectly supporting the hardware, newer
kernels have dropped support and broken my systems.

I suggest booting one of the previous kernels and seeing if that
helps.  If not then try the newest kernel available, possibly a
backports kernel.  If not then I suggest trying to boot with the
kernel command line option nomodeset and see if it improves things.
Plus any better hints that others might supply.
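
For a one-time test you can edit the kernel command line from the
boot loader menu.  To make nomodeset persistent, assuming the usual
GRUB setup, something like this in /etc/default/grub:

  GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset"

Then run update-grub and reboot.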

Bob




Re: Major bug with wired/wireless connections in Debian 7

2015-06-16 Thread Bob Proulx
Reco wrote:
 linuxthefish wrote:
  auto lo
  iface lo inet loopback
  
  auto eth0
  iface eth0 inet static
  address 172.16.0.4
  gateway 172.16.0.1
  netmask 255.255.0.0
  
  allow-hotplug wlan0
  iface wlan0 inet static
  wpa-conf /etc/wpa_supplicant/wpa_supplicant0.conf
  address 172.16.0.5
  gateway 172.16.0.1
  netmask 255.255.0.0
 
 You're configuring a default gateway for both network interfaces, which
 is a big no-no, unless you know what you're doing. You also set IP
 addresses from the same network, and that is one of the best ways to
 have your host's IP routing going haywire.

Good observations!  I missed seeing both of those when I commented in
my response.  (I hang my head in embarrassment.)  Both are very
critical no-no's.  Both configuration problems should be avoided.

I wonder if those are clues as to why the reporter said that it needed
a cable plugged/unplugged?  Because I think that would rotate the
order of the default routes.  That might account for the odd behavior.
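
For reference, one possible shape of a corrected configuration, with a
single default gateway and the two interfaces on different subnets.
The addresses here are only illustrative:

  auto eth0
  iface eth0 inet static
  address 172.16.0.4
  netmask 255.255.255.0
  gateway 172.16.0.1

  allow-hotplug wlan0
  iface wlan0 inet static
  wpa-conf /etc/wpa_supplicant/wpa_supplicant0.conf
  address 172.16.1.5
  netmask 255.255.255.0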

Bob




Re: ThinkPad R51 creeping segmentation faults

2015-06-15 Thread Bob Proulx
Martin Read wrote:
 Bob Proulx wrote:
 In the old days computers would use ECC ram throughout.
 
 ECC (in the strict sense) has never been ubiquitous.

At one time every computer I interfaced with had ECC.  It was very
popular with me and everyone else I knew. :-)

 Parity was quite common in certain timeframes, but parity won't stop your
 system crashing if you get bitflips - it'll just make it crash
 *immediately*.

Parity would at least provide better error messages and
diagnosability.  Just today I tossed out a bad one-year-old 4G 204-pin
RAM stick that caused really weird errors on the system.  I pulled it
and ran memtest86 on it in another system and it threw errors on an
overnight run, fortunately confirming the problem.

Bob




Re: Major bug with wired/wireless connections in Debian 7

2015-06-15 Thread Bob Proulx
linuxthefish wrote:
 It looks like Debian will only connect to a wireless network when the
 ethernet cable has been plugged in and unplugged.

No.  If you are seeing this it is either a bug or a configuration or
use problem.  Works fine for most of us.  So my guess is a
configuration or use problem.

 Both the wired and wireless connections are set to automatically
 come up - but wireless will not come up and fails to respond to ping
 unless a ethernet cable has been plugged in.

Very odd.

 auto lo
 iface lo inet loopback
 
 auto eth0
 iface eth0 inet static
 address 172.16.0.4
 gateway 172.16.0.1
 netmask 255.255.0.0

Looks okay to me.  It is 'auto' so it will be a boot-time action.

 allow-hotplug wlan0
 iface wlan0 inet static
 wpa-conf /etc/wpa_supplicant/wpa_supplicant0.conf
 address 172.16.0.5
 gateway 172.16.0.1
 netmask 255.255.0.0

What are the contents of /etc/wpa_supplicant/wpa_supplicant0.conf
file?  Feel free to redact any passwords.

You have this interface set up as 'allow-hotplug' which means it is
using the event-driven interface.  That should be fine for the WiFi
interface.  But maybe things are simply out of sync with file edits?
It is pretty easy to get mixed up and edit the file with the
interface up.  Edits should always happen with the interface down.
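
A sketch of a safe edit cycle:

  # ifdown wlan0
  ...edit /etc/network/interfaces...
  # ifup wlan0

That keeps ifupdown's notion of the interface state in sync with what
is in the file.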

Have you tried manually bringing the interface up and down?  What
errors are shown?

  ifup wlan0

Have you verified the device names are the ones you expect?  It is
possible that /etc/udev/rules.d/70-persistent-net.rules has renamed
the interface.  This has tripped me up many times.

  ip addr show

I am not currently using that style in any of my wifi configs.  For
better or worse.  Mine all look like this example:

  allow-hotplug wlan0
  iface wlan0 inet static
  wpa-ssid CherryFarmLink
  wpa-psk passwordredacted
  address 10.10.223.2
  netmask 255.255.255.0
  gateway 10.10.223.1
  dns-search example.com
  dns-nameservers 10.10.223.1

That works okay for me.  This has the disadvantage of having the wifi
password in the /etc/network/interfaces file.  But that isn't critical
information in my environment and there aren't general logins on the
router system anyway.

Bob




Re: Why Debian 8.1 has no torrents beyond first 3 DVD ISO images

2015-06-14 Thread Bob Proulx
arnuld uttre wrote:
 I wanted to download all Debian 8.1 DVD images at my friend's place because
 I don't have net at home.  But all I see are 3 DVD ISO images available on

Have you considered using apt-offline?

  apt-cache show apt-offline
   apt-offline is an Offline APT Package Manager.

   apt-offline can fully update and upgrade an APT based distribution without
   connecting to the network, all of it transparent to APT.

   apt-offline can be used to generate a signature on a machine (with no
   network).  This signature contains all download information required
   for the APT database system.  This signature file can be used on
   another machine connected to the internet (which need not be a Debian
   box and can even be running windows) to download the updates.
   The downloaded data will contain all updates in a format understood
   by APT and this data can be used by apt-offline to update the
   non-networked machine.

   apt-offline can also fetch bug reports and make them available offline.

It would take some sneakernet trips back and forth with a portable
storage medium such as a USB storage device or some such but that
would be a possibility.  You can get security updates and so forth that
way.
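
A sketch of the round trip, with illustrative file names:

  # on the disconnected machine
  apt-offline set /tmp/offline.sig
  # on the machine with Internet access
  apt-offline get /tmp/offline.sig --bundle /tmp/offline-bundle.zip
  # back on the disconnected machine
  apt-offline install /tmp/offline-bundle.zip
  apt-get upgrade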

 torrents, whereas jigdo has 13 DVD images available:
 
 http://cdimage.debian.org/debian-cd/8.1.0/amd64/bt-dvd/
 
 http://cdimage.debian.org/debian-cd/8.1.0/amd64/jigdo-dvd/

Yes.  All of the DVD images are available.  It is only that Debian
is not hosting them as iso images on their download site.  It is a
problem of mirror site disk space and bandwidth.  Mirror sites do not
have infinite disk space nor infinite bandwidth.  There is a cost to
everything.  At some point there isn't enough space nor bandwidth nor
the associated actual cost of hosting them.  Mirrors would stop
hosting Debian if it grew without bounds.

Therefore compromises must be made.  The rarely used images are not
hosted.  It is very unusual for anyone to need or use those images.
However they are available and you can create them for yourself very
easily.

If you desire any or all of the 13 DVD set for amd64 then they are 
available for very easy creation using jigdo-lite.  Here is an
example. 

  # apt-get install jigdo-file 

  $ mkdir jigdo-stuff 
  $ cd jigdo-stuff 
  $ jigdo-lite --noask http://cdimage.debian.org/debian-cd/8.1.0/amd64/jigdo-dvd/debian-8.1.0-amd64-DVD-1.jigdo
  ...lots of output and some minutes of time... 
  OK: Checksums match, image is good!
  real    24m14.239s
  user    1m23.772s
  sys     1m9.412s

  $ ls -ldog
  -rw-rw-r-- 1 3994091520 Jun 14 12:38 debian-8.1.0-amd64-DVD-1.iso
  $ ls -ldogh
  -rw-rw-r-- 1 3.8G Jun 14 12:38 debian-8.1.0-amd64-DVD-1.iso

That took me 24 minutes using a fast local mirror on GigE.  Pulling
3.8 Gig off a slower WAN would take considerably longer.

 My friend keeps the DVD image torrent (in Deluge) on his computer when he
 sleeps and it is downloaded by morning. Kind of hassle-free download on
 torrents because his connection keeps on breaking/disconnecting every few
 hours all the time.  But I can not find more than 3 DVD images on torrent.
 What can I do ?

I don't know of any other way than to run jigdo to build the DVD
image.  Therefore your friend would need to be able to run it on their
computer or you would need to be able to leave a portable machine
such as a laptop with enough disk space.  I don't know if there is a
way to run jigdo on a Windows machine for example.  Temporarily leaving
a laptop there would allow you to have control over the download.

Note that the entirety of the disk space is pretty large.  4G times 13
DVD images is 50 plus Gig of data to download.

If I were heading to an Antarctic station where access to the Internet
is a precious thing I would definitely download the entire set.  If I
were using sneakernet to get data across town I would probably try
apt-offline and see if it could be used to download just what was
needed.

Good luck!

Bob






Re: ThinkPad R51 creeping segmentation faults

2015-06-14 Thread Bob Proulx
Paul Ausbeck wrote:
 I recently replaced the hard disk in my ThinkPad R51 with a solid
 state drive

The ThinkPad R51 is a solid machine.  Don't let anyone tell you
otherwise.

 The symptom is that as time goes on more and more programs will cause a
 segmentation fault while loading. For instance, emacs commonly is the first
 program to go. Then maybe iceweasel. Just today iceweasel wouldn't load at
 all but then following another suspend/resume cycle it now loads to a point
 where it presents a safe mode dialog but then crashes if the mouse pointer
 is moved over the dialog box.

That sounds very much like a hardware fault.  Probably a ram failure.
Which is the best type to have because ram is the cheapest to swap out.

If it isn't a ram failure then unfortunately it would most likely be a
cpu failure.  It would be possible to swap the cpu but much more
inconvenient.  Third likely would be some failure on the motherboard.

The root cause of a segmentation fault that isn't a software bug is
that bits are getting flipped.  Let's say a pointer to some piece of
memory is being accessed but a bit of the pointer value is
flipped.  That will cause it to access memory out of bounds and
cause a segmentation violation.  Those will be random because the
location of the program is different at different times and bits being
flipped could be anywhere.  This is most likely to occur when running
programs that use a lot of memory.  That is why you are seeing it on
Iceweasel, which is a true memory hog, and, ahem, my favorite editor
Emacs too.  Those programs are making the most use of your memory and
are therefore the most likely to suffer from flipped bits.

In the old days computers would use ECC ram throughout.  The ECC would
protect you from these problems.  For years however we have suffered
under MS quality hardware.  It doesn't make financial sense to make
hardware more reliable than the OS sold with it and most machines have
been sold with MS.

 I've looked around a bit on the internet for similar problems and come up
 short. In fact, this class of problem seems inherently difficult to drive to
 ground, at least with the knowledge that I currently possess. So what I hope
 is that the Debian mailing list can give me some good seeds for new
 knowledge to acquire. In particular I'd be interested in how others might
 have approached similar situations.

I would start by running memtest86+ overnight.

  apt-get install memtest86+

Then rebooting to the memtest system and letting it run overnight.
Hopefully it will indicate a problem.  That would be the best result.

 I've tried loading emacs and iceweasel with gdb to get stack
 backtraces.

If random programs are segfaulting then it is very unlikely to be a
problem with any of those programs.

 One last specific question that sort of embarrasses me to ask, is
 where should segmentation fault messages be logged?

/var/log/syslog logs all system messages.  I always look there.  Red
Hat calls it /var/log/messages and Debian also logs there too.  The
/var/log/kern.log is for the subset that are kernel messages.

To understand the difference look at /etc/rsyslog.conf and see what
gets logged different places.  /var/log/syslog contains pretty much
everything and the other logs contain more specific things.  Mostly.

Do you have mcelog installed?  If not then install it.

  apt-get install mcelog

 I've grepped around and there are a few segfault messages from maybe
 a week ago in kern.log.1 and messages.1, but nothing in kern.log or
 messages.log. Perhaps these are still in a memory ring buffer
 somewhere? Is there some sort of tool for viewing user space log
 messages, I mean other than dmesg which doesn't appear to show any
 user space messages?

What I have told you applies to the Wheezy 7 you are running, which
uses sysvinit.  A lot of flamewar has been spent on the new systemd
binary file logging in Jessie 8.  I mention this only to give you a
heads up that everything you have previously learned about the system
up through Wheezy 7 is all changed in Jessie 8.  If you decide to
stick with sysvinit then what you learn about /etc/rsyslog.conf
applies.  If you go with the new systemd journal in Jessie 8 then the
entire universe is a different place and you will need to learn it all
new for systemd.  Just to let you know there was a major change that
rolled out with the Jessie 8 release.

Bob




Re: httpd virtual package

2015-06-12 Thread Bob Proulx
to...@tuxteam.de wrote:
 Bob Proulx wrote:
 [...]
  I debootstrapped a Jessie chroot in order to test your case.
 [...]
 
 Bob, just a lurker here. Thanks for this very instructive and well-
 written walkthrough!

I am glad you found it useful.  As long as I am on the topic I should
add a small additional hint.

  debootstrap jessie jessie-oc-test http://httpredir.debian.org/debian
  echo owncloud-test > ./jessie-oc-test/etc/debian_chroot
  chroot ./jessie-oc-test su -

By adding /etc/debian_chroot into the chroot it can be automatically
included into the shell prompt.  It makes it easier to remember what
environment one is operating in.  For example in the above after the
chroot the prompt will then say:

  (owncloud-test)root@havoc:~# 

When switching back and forth between chroots and the native host
little things like that can save a lot of confusion.
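
This works because the stock Debian bashrc builds the prompt with
something along these lines:

  if [ -z "${debian_chroot:-}" ] && [ -r /etc/debian_chroot ]; then
      debian_chroot=$(cat /etc/debian_chroot)
  fi
  PS1='${debian_chroot:+($debian_chroot)}\u@\h:\w\$ '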

Bob




Re: wget fails in Debian Jessie

2015-06-12 Thread Bob Proulx
Reco wrote:
 Bob Proulx wrote:
  That use of socat was clever.  I didn't like the pkill socat though.
  Wouldn't be good if there were another one running at the same time.
 
 Yes, there's room for improvement. Presumably socat can write its own
 pid to a user-specified pidfile, but I was too lazy to check the manpage.

I don't think socat does.  But one can use start-stop-daemon to manage
things for you.  The /etc/init.d/rsync file contains an example of
doing such using --make-pidfile and so forth.
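
A sketch of that technique; the socat arguments here are only
placeholders:

  start-stop-daemon --start --background --make-pidfile \
    --pidfile /run/socat-example.pid \
    --exec /usr/bin/socat -- TCP-LISTEN:8080,fork TCP:localhost:80
  ...
  start-stop-daemon --stop --pidfile /run/socat-example.pid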

  Some time ago Reco and I were discussing this and Reco noted that curl
  uses openssl while wget uses gnutls.  That was Reco's reason for
  preferring curl over wget at that time.
  
https://lists.debian.org/20150409082351.GA24040@x101h
 
 And as the current discussion shows - those reasons are still valid.

Yes.  I was just keeping neutral in the debate.  I note the problem,
and agree it is a problem, and hope that gnutls improves.

My own problem with gnutls is that it seems to require *all* of the
certificate chains to verify as valid instead of *any* of them.
Meaning that a site which includes a fully valid certificate chain for
one path but has at least one other path that is not fully valid will
fail the wget gnutls check but will work with a web browser and
(apparently) libnss.  That isn't nice either.
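
To see what gnutls itself makes of a site's certificate chain, the
gnutls-bin package ships a test client.  The hostname is illustrative:

  $ gnutls-cli -p 443 www.example.com

It prints the chain and the verification status, which helps decide
whether a wget failure is really a gnutls verification problem.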

  Which might be different behavior from web
  browsers as most web browsers use openssl. 
 
 A minor nitpick here.
 
 Iceweasel/Firefox use libnss, not openssl.
 Chrome/Chromium use libnss.
 Anything based on webkit-gtk actually uses gnutls.
 I'm unsure about webkit-qt, though.

 About the only browser that actually uses openssl I can remember is w3m.

Good update.  I hadn't internalized that the web browsers used libnss
instead of openssl.  Thanks!

Bob




Re: su chmod -755 /usr

2015-06-12 Thread Bob Proulx
Julian Brooks wrote:
 All seems well, valuable lesson(s) learnt.
 Seriously thought it was terminal, appreciate the wisdom people.

Glad to hear you solved your problem.  In the future with a similar
problem you would be able to restore your current system permissions
from your backup.  Not the entire backup files.  But by using the
permissions stored on the backup files you could reset the permissions
on the live files.

You do have a backup plan, right?  :-)

Bob




Re: su chmod -755 /usr

2015-06-12 Thread Bob Proulx
Julian Brooks wrote:
 Cheers Bob :)
 
 Uuummm  - work files yes, system configs/settings not really.
 
 Any top tips, like where are the permission file/s?

I think you are asking what backup software would be recommended?
There are many different ones.  Let me point to a reference.

  https://wiki.debian.org/BackupAndRecovery

Personally I used rsync scripts for years.  These days I am enjoying
BackupPC.  But that isn't to say that amanda or bacula or any of the
others are not good too.  They are all the same and all different.

But you ask about permission files.  I think perhaps I wasn't clear
enough.  For example I could run 'find' down the backup tree and print
the file modes of the files there.

  cd /path/to/backup
  find . -type l -prune -o -printf 'chmod %m %p\n'

There is no whitespace in most filenames in /usr and therefore the
above would print out a series of commands such as:
  chmod 755 .
  chmod 755 ./bin
  chmod 755 ./bin/vnc4server
  chmod 755 ./bin/xkbevd
  chmod 755 ./bin/pavucontrol
  chmod 755 ./bin/sg_dd
  chmod 755 ./bin/glxgears
  chmod 755 ./bin/sensors-conf-convert
  chmod 755 ./bin/etags.emacs24
  chmod 755 ./bin/qemu-armeb
  ...
  chmod 4755 ./bin/sudo
  ...
  chmod 2755 ./games/hack

You could then inspect the output for anything strange such as
whitespace in filenames.  Then run it as a script, perhaps after
editing it.

Bob




Re: httpd virtual package

2015-06-10 Thread Bob Proulx
Anatoly A. Kazantsev wrote:
 P.S: I'm not on the list, please keep me CCed

Will do.

 I'm trying to install owncloud with lighttpd on stable release

Noting that Stable is Jessie 8 at this time.

 Depends: apache2 | httpd, fonts-font-awesome, fonts-liberation,

Either apache2 or httpd will be needed to satisfy the Depends
relationship.  If neither is installed then apt will pull in apache2
since it is listed first.

 It says the package is depends on apache2 or httpd. So as I think it should
 allow me to install it without installing apache2 package if some installed
 package provides httpd (such as lighttpd or nginx).

Yes.  If you already have either lighttpd or nginx installed then
httpd will be provided for already.  However the entire dependency
tree must be analyzed to understand what is happening.

 $ sudo aptitude install owncloud
 Reading package lists...
 Building dependency tree...
 Reading state information...
 Reading extended state information...
 Initializing package states...
 Reading task descriptions...
 Building tag database...
 The following NEW packages will be installed:
   apache2{a} apache2-bin{a} apache2-data{a} apache2-utils{a} 
   fonts-font-awesome{a} fonts-liberation{a} fonts-linuxlibertine{a} 
 ...
 
 it wants to install apache web-server

I debootstrapped a Jessie chroot in order to test your case.

  debootstrap jessie jessie-oc-test http://httpredir.debian.org/debian
  chroot ./jessie-oc-test su -

The first test confirms your problem.  It wants to install apache2.
It can be explicitly avoided with a minus at the end.  This works.  It
is a little unsatisfying needing to manually force it this way but it
does allow installing owncloud on Jessie without installing apache2.

  apt-get install owncloud apache2-

But this is less than optimal since it wants to install php5-cgi in
this case.

Could it be a recommends in one of the packages?

  apt-get install --no-install-recommends owncloud
  ...still wants apache2...

It isn't a recommends.  It must be one of the 112 packages in the
dependency chain.  When dependency chains are that large it is often a
problem for the dependency resolver to do a good job of things.
Something depends upon something else in an alternation which depends
upon another list of alternate things.  The problem space is just too
big and arbitrary for apt to know about it.

In order to look at them all I generated a list of all of the packages
that it wants to install.

  apt-get install --no-install-recommends -s owncloud | awk '/^Inst/{print $2}' > /tmp/pkglist

I really only care about packages that mention apache2 anywhere in the
headers.  Reduce to that set.

  for i in $(cat /tmp/pkglist); do apt-cache show $i | grep -q apache2 && echo $i; done
  apache2-bin
  apache2-utils
  apache2-data
  apache2
  libapache2-mod-php5
  php5
  owncloud

Aha!  The chain must be through libapache2-mod-php5.  owncloud depends
upon php5, which depends upon libapache2-mod-php5 | php5-fpm (along
with other OR things).  Let's test that theory and install php5-fpm
first.  Along with nginx.

  apt-get install --no-install-recommends nginx php5-fpm

Then try owncloud.

  apt-get install owncloud

Aha!  That works.  It does not want to install apache2 in this case.

And knowing this I then try it all over again with php5-fpm specified.

  apt-get install php5-fpm nginx owncloud

Yes!  That does not want to install apache2.  The dependency chain
pulling in apache2 was through php5 which wanted an interpreter.

  $ apt-cache show php5 | grep Depends
  Depends: libapache2-mod-php5 (>= 5.6.7+dfsg-1~) | libapache2-mod-php5filter (>= 5.6.7+dfsg-1~) | php5-cgi (>= 5.6.7+dfsg-1~) | php5-fpm (>= 5.6.7+dfsg-1~), php5-common (= 5.6.7+dfsg-1~)

Supplying the php5 interpreter through php5-fpm satisfied that
dependency and avoided it selecting libapache2-mod-php5 which
must hard depend upon apache2.

 I checked it on current testing with installed lighttpd - it works
 as expected, no apache2 packages

I looked in Testing and the dependency chain looks the same there.
But perhaps on that system you already had installed a php5
interpreter?  If so that would have avoided the libapache2-mod-php5
path and make it appear to work okay.  (Which is why we like to debug
in pristine installation root such as through debootstrapped chroots.)

The summary is this:

  apt-get install php5-fpm nginx owncloud

You will still need to configure either nginx or lighttpd to handle
PHP through the fpm or other.  There isn't any automatic configuration
for it.  But it will be a superior solution to libapache2-mod-php5
which is easy and automated but not as efficient.

Hope that helped!

Bob





Re: unable to modify umask settings in wheezy 7.8

2015-06-10 Thread Bob Proulx
Fekete Tamás wrote:
 I use wheezy 7.8 on x86_64 architecture and I'm simply unable to modify the
  umask settings of the system. I have read many webpages for solution (even
  helps specified for debian) but didn't find the answer what to do.
 
 Now, my umask is the default umask: 0022. I want it to be 0077.

Wow.  That is very restrictive.  Are you sure you need such a setting?

 I took two steps to modify this variable.  The first was that I edited
 /etc/login.defs.  I rewrote the UMASK part of the file to look like this:
 UMASK 077

Much chaos exists surrounding how the system should set the umask for
users.  In Etch 4 there were major changes.  In Squeeze 6 there were
more changes.  Therefore one must be careful looking at documentation
to make sure the docs match the version of Debian currently in use.
/etc/login.defs is the old Debian way from before Etch 4.

With varying age and applicability, here is a collection of references
on umask in Debian that I have gathered over time.

  http://wiki.debian.org/DebianDesktopHowTo
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=646692
  http://lists.debian.org/debian-devel/2005/06/msg01598.html
  http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=336214
  http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=314539

 My second step was to edit /etc/pam.d/common-session.  I added a last
 line after the '#and here are more per-package modules' part of the
 file.  The line looks like this:
 session optional pam_umask.so umask = 0077

Are there really spaces around the equals sign?  I believe that needs
to be passed as one argument.

  session optional pam_umask.so umask=0077

 The umask settings are not modified even if I am in gnome, or in normal
 shell.
 If you have any advice or good knowledge what is still missing to work,
 please share!

I haven't updated this for either Wheezy or Jessie so beware that I
might be stale but my latest notes on setting umask say add this to
the /etc/pam.d/common-session file:

  session optional pam_umask.so umask=002

Most people normally want 02 for general use.  If you want 077 that is
okay but not for general purpose use.

Bob




Re: convert Jessie stable to pure testing

2015-06-08 Thread Bob Proulx
Joe wrote:
 Francis Gerund wrote:
  Or, I can always reinstall.  The XFCE display is so fuzzy, it hurts
  my eyes (even after working with the gui adjustments).  Maybe I will
  go back to being a slave to Gnome - unconfigurable, but looks better.
 
 That's generally a sign that a non-CRT monitor is not working at its
 native resolution. Scaling can never give clean results, it's OK for
 moving real-world pictures, but not so good with computer displays. I
 have a pretty sharp Xfce display at 1920x1080 here.

Worse could be a non-native resolution over an analog vga cable.
Double check that you have a digital connection along with going to
the native resolution.

Bob




Re: convert Jessie stable to pure testing

2015-06-08 Thread Bob Proulx
Erwan David wrote:
 Note also that using testing give you the advantage of not having to
 upgrade all software at the same time, but gradually.

Yes.  This is just like removing a bandage.  You don't have to take it
off all at once.  You can pull it off very slowly and savor the
feeling of each hair being pulled out individually.  That works fine
too! :-)

Bob




Re: wget fails in Debian Jessie

2015-06-08 Thread Bob Proulx
Leslie Rhorer wrote:
 Reco wrote:
  Don't depend on curl. Use good old socat combined with wget:

That use of socat was clever.  I didn't like the pkill socat though.
Wouldn't be good if there were another one running at the same time.

 Why?  The -L option in curl did the trick.  Is there some
 over-riding reason why I should use wget instead of curl?  Curl
 winds up being simpler and faster in this case.

Some time ago Reco and I were discussing this and Reco noted that curl
uses openssl while wget uses gnutls.  That was Reco's reason for
preferring curl over wget at that time.

  https://lists.debian.org/20150409082351.GA24040@x101h

That is a difference that causes problems and is probably related to
the problem you just experienced.  This doesn't play into plain http
connections.  But for https gnutls and openssl have different
behavior.  They shouldn't.  But they do.  Therefore the suggestion
away from wget is really a suggestion away from gnutls and over to the
more time proven openssl used by curl.

However recently in Unstable and therefore Stretch Testing curl
changed from openssl over to gnutls too.  Jessie is already released
and this doesn't change Jessie or earlier releases.

http://bugs.debian.org/342719

So in the future if that is maintained both curl and wget will have
the same gnutls behavior.  Which might be different behavior from web
browsers as most web browsers use openssl.  I am not sure which side
to cheer for as I would like both to work.  Security benefits from
having independent implementations.

Bob




Re: Moving server to new server with tar

2015-06-08 Thread Bob Proulx
Lisi Reisz wrote:
 Bob Proulx wrote:
  Every file.  File by file.  I liked this presentation and found it
  quite interesting.
 
  http://marc.merlins.org/perso/linux/post_2014-01-06_My-Live-Upgrading-Many-Thousands-of-Servers-ProdNG-talk-at-Linux_conf_au-2014.html

That one definitely works for me at this moment.  I just tested it and
got a page okay.  Since that is the origin I wanted to quote it.
However the links from that page all give me 403 forbidden.  Sigh.

There is a video of the presentation somewhere.  I don't have a link.
I imagine that would be on youtube and would still be available.
Maybe someone will find it and post a link to it.

  Unfortunately the original paper is now 403 forbidden.  I think that
  is likely a mistake somewhere.  But the Internet Archive Wayback
  Machine has a copy if you want to browse it.
 
  https://web.archive.org/web/*/http://marc.merlins.org/linux/talks/ProdNG-LCA2014/Paper/ProdNG.pdf
 
 I got 403 forbidden on both. :-(

That second one is 403 for me now too.  The '*' means grab the newest
version and it has updated to the 403 version.  But looking through
the older snapshots I find the specific version is here:

  
  https://web.archive.org/web/20141016050915/http://marc.merlins.org/linux/talks/ProdNG-LCA2014/Paper/ProdNG.pdf

That should get you the paper at least.  I also found this just now:

  
  https://www.usenix.org/conference/lisa13/technical-sessions/presentation/merlin

Bob




Re: Moving server to new server with tar

2015-06-08 Thread Bob Proulx
Linux4Bene wrote:
 Bob Proulx wrote:
 thanks for your reply and the time invested. Much appreciated.
 It does indeed seem tricky unless you go the full monty and replace the 
 whole installation except for the special dirs like dev as you noted.
 In my test, I didn't get any strange results in the end but I have 
 learned not to always trust what I see in IT. In the back of my mind, I 
 always suspect a problem popping up when the server is in production.

Since /dev is dynamic anything done to it will evaporate after a
reboot.  After a reboot it will all be as if nothing had been
overwritten there.  If you get to the point of a reboot then there is
nothing lingering afterward.  You would be in the clear.  I would
still exclude it of course.  The possibility of archiving /dev/mem for
example makes me nervous. :-)

  But if you were overwriting *everything* on the system from the backup
  on one to the new system then after having done so then postfix would
  have been installed.  Right?  The binaries in /usr/sbin/postfix would
 
 The reason why postfix didn't want to start, even after untarring the 
 whole system was because exim was still running. After stopping exim, I 
 could start postfix without a hitch.

Ah, yes, port 25 was still busy.  You would have needed to kill exim
first.  Gotcha!

  Might be better to start with:
  Old server: dpkg --get-selections > packages
  New server: dpkg --set-selections < packages
  
  Yes.  That is what that was designed for along with some other things
  such as the dselect-upgrade and so forth.  Those will be suggested and
  with identical versions (Stable, OldStable) they should be able to
  replicate the same packages installed on each machine.

I wanted to add and emphasize for others that this doesn't work very
well when changing from one machine to a different system with a
different version.  Lots of packages change names and dependencies
across major releases.  It works fine when the version is identical.
But across different releases there are a lot of packages that don't
line up exactly from one version to the next.  You are on the same
version but I wanted to leave this comment in the archive for others
reading it.

  And also because the user ids of system users and groups are dependent
  upon the order in which they are installed.  When doing a system splat
  from one system on top of another all of /etc/passwd, group, shadow,
  gshadow being overwritten with new numbers for all of the system
  installed accounts.  Of course that is matched by completely overwriting
  all of the files elsewhere too.  And because it won't remove files on
  the new server that weren't on the old server and those files will
  become orphaned when /var/lib/dpkg is overwritten. As long as it is a
  complete overwrite of *everything* then it works. But for files either
  extra or missing it can cause problems.
 
 Indeed. And the users is one of the reasons why I did want to try a full 
 copy otherwise these subtleties could make the move harder.
 Same with databases and the database users.
 The fully copy means not having to worry with creating the users on the 
 system and database level.

Then you will be set and good to go.

  Having said this it is something that gets done periodically and it does
  work.  Google does upgrades across its data centers one file at a time
  using rsync.  Pretty amazingly cool.
 
 Wow I didn't know that. They rsync their servers? That's binaries and 
 config I suppose? 

Every file.  File by file.  I liked this presentation and found it
quite interesting.

  
  http://marc.merlins.org/perso/linux/post_2014-01-06_My-Live-Upgrading-Many-Thousands-of-Servers-ProdNG-talk-at-Linux_conf_au-2014.html

Unfortunately the original paper is now 403 forbidden.  I think that
is likely a mistake somewhere.  But the Internet Archive Wayback
Machine has a copy if you want to browse it.

  
  https://web.archive.org/web/*/http://marc.merlins.org/linux/talks/ProdNG-LCA2014/Paper/ProdNG.pdf

  And then there is Guillem Jover's older 'debtakeover' project.
  http://www.hadrons.org/~guillem/debian/debtakeover/README
  The debtakeover project replaces a foreign non-Debian system (such as
  Red Hat or SuSE) with a Debian system in-place while the system is
  running.  It basically replaces the system out from under itself with
 
 That's impressive. It sounds a bit like a Debian borg system :)

As I recall, one area that was not as complete was the kernel, both
the running one and otherwise.  I think debtakeover required that the
new system run on the foreign system's kernel.  Which is not always
possible due to system version differences.  I think that is more of a
problem now than it was then.  So for example replacing a RHEL 6
system with Jessie would fail because jessie binaries require a newer
kernel than the default RHEL 6 kernel.  But probably upgrading the
kernel first with a native backport and then doing the debtakeover
process would get past that problem.

 When

Re: I upgraded from jessie to stretch and messed up I used the dist-upgrade command....

2015-06-08 Thread Bob Proulx
Michael Fothergill wrote:
 I changed from jessie to stretch in my sources.list file and then did

Testing is a development track.  It isn't released.  Welcome to the
process of making the release.  Remember that when running Testing or
Unstable that you are part of the development process.  You will see
and live every bug that slips through.

 But then I decided to try aptitude dist-upgrade afterward.
 
 That wanted to rehash the xorg-server video package and after doing it when
 I boot up the window manager does not start up.
...
 What sort of output that I could get in recovery mode (eg var log files
 dmesg output) would be useful to help fix this?

What errors are seen in the /var/log/Xorg.0.log file?
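
A quick way to pull just the error lines out of that log:

  grep '(EE)' /var/log/Xorg.0.log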

Check /var/log/dpkg.log for a trace of what was installed previously
and what was removed and what was recently installed.  I think it
likely your xserver driver was removed due to a conflict.  Figure out
what had been running and install it again.

If the current Testing no longer contains the right combination of
packages for your xserver then look up the previous version of the
packages on http://snapshot.debian.org/ where all package versions are
archived.  You can undo the recent installation and downgrade revert
to the previous packages from there.
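
A sketch of such a downgrade; the snapshot timestamp and the package
version here are only illustrative, pick real ones from the site:

  # add to /etc/apt/sources.list:
  deb http://snapshot.debian.org/archive/debian/20150601T000000Z/ testing main
  # old snapshots have expired Release files, so:
  apt-get -o Acquire::Check-Valid-Until=false update
  apt-get install xserver-xorg-video-ati=1:7.5.0-1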

 I can boot in recovery mode but if I do that as root if I run aptitude it
 doesn't download packages - I think there is no internet connection being
 made.

That makes me think you are running NetworkManager.  By default it
seems to want the desktop user to start networking.

Try using the network manager command line client to start networking.
I can't suggest a command as I am not running NetworkManager (due to
problems such as these) and therefore don't know the command line to
suggest.  Others may be able to help better.

Bob




Re: convert Jessie stable to pure testing

2015-06-07 Thread Bob Proulx
Francis Gerund wrote:
 FWIW, here is the new /etc/apt/sources.list:
 
 #
 
 # deb cdrom:[Debian GNU/Linux 8.0.0 _Jessie_ - Official Multi-architecture
 amd64/i386 NETINST #1 20150$
 
 # deb cdrom:[Debian GNU/Linux 8.0.0 _Jessie_ - Official Multi-architecture
 amd64/i386 NETINST #1 20150$

FWIW I would delete those above lines.  You have installed already.
You will probably never use the cdrom again.  Might as well clean
those lines up by removing them.

 Note that the Debian website did say to include the lines:
 
 deb http://security.debian.org/ testing/updates main
 deb-src http://security.debian.org/ testing/updates main

Debian recently started providing security upgrades for Testing.

 I'm surprised that the Debian website was not more clear on how to change
 from stable to testing; it's an important procedure - and not exactly
 obscure.

Debian is about getting to a released version.  Testing is not
released.  Stable is released.  Most of the documentation is targeting
the released version.  Therefore not as much documentation about
Testing.  And again when running Testing you are part of the
development process.  You should take notes and report bugs.

 I wanted to go permanently with testing because I do NOT like reinstalling
 with every new release, and upgrading in place sounds good in theory, but
 never quite works right.  And the packages in stable are just too stale.

Uhm...  You do realize that Testing is doing an upgrade in place every
time you upgrade in Testing?  It is the same thing.  Therefore as I
read the above it seemed contradictory.  You basically said you want
to upgrade because upgrades don't work.

Basically if you think an upgrade at every release doesn't work (it
works for me, I have upgraded machines since Potato) then you will
hate Testing which is all about upgrades all of the time.

But don't let me talk you out of running Testing.  Take notes, file
bugs, have fun.  But don't complain when something is broken during a
transition.

Bob




Re: Moving server to new server with tar

2015-06-06 Thread Bob Proulx
Linux4Bene wrote:
 I am in the process of moving my server to another VPS.
 The goal is to keep the old VPS around and convert it to backup MX  DNS 
 amongst other things. I will purchase the new VPS from another company so 
 I can't just copy the vm file/container.
 
 As a start, I would do a full tar archive of the old server and start 
 from there. A test on a local VM worked, with some adjustments. Both use 
 Debian 7.8. The services on the old server that need to me moved:
 - Mail: Postfix, Dovecot, Spamassassin, Clamav, Postgresql, ...
 - Web: nginx, supervisord, python, php5-fpm, Postgresql, ...
 - DNS: PowerDNS

This is one of those hard topics.  It seems easy enough.  But the
reality is that there are many subtle problems.  It comes up for
discussion every so often over the years.  I don't think there has
ever been a completely satisfactory canonical one true answer that
solves the problem in one collection.  There just isn't any perfect
methods.  Instead all that we can suggest is to understand everything
and deal with it ad-hoc.  This is exactly what you are doing.  With
what you have written I can tell that you are very much aware of most
of the traps and pitfalls and are dealing with them as best as you
can.  Good job!

It would be good if there were a better way to deal with this.  There
are many different strategies.  Some people favor one strategy.  Other
people favor other strategies.  You and I have already diverged from
favored strategy.  Personally I prefer to build a pristine system and
then install the new services upon it.  That allows me to be refreshed
in everything.  (On the other hand I always upgrade servers in place.
I carry the upgrade history along with me.  An irony of opposites for
me I know.)  I would use this as an opportunity to clean and clean and
clean.  But it is okay if you tell me you want to have the identical
server, as identical as possible, moved without doing that in this
step.  That is fine too.

 Tar command from the backup script on the old server:
 EXCLUDE="--exclude=proc --exclude=sys --exclude=dev/pts --exclude=backups"
 tar -czpf /backups/full.tar.gz --directory=/ $EXCLUDE / 2>&1

Since /dev is dynamic I exclude /dev from the backup too.  Your new
installation will already have a static copy of the minimum dev under
the udev mount point.
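
Adapting the quoted script, that would be something like:

  EXCLUDE="--exclude=proc --exclude=sys --exclude=dev --exclude=backups"
  tar -czpf /backups/full.tar.gz --directory=/ $EXCLUDE /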

 I know I can migrate by first installing all packages, and then copying 
 the config and data from one server to the other. But then you need to 
 pick all data to be moved. It takes longer and it's more prone to error 
 (forgetting something). I want this server to be exactly the same as the 
 first one.

One sideways strategy that you might consider for risk management is
to untar a copy of your old system into what will become a chroot area
on the new system.  That will give you a reference on the new system.
You can run services from there.  But mainly it would give you a way
to do an A-B comparison between what you had before and what you are
creating new.  I do that often.  If something shows up being different
then can go investigate the way things were and find lost and
forgotten tweaks and revive them.

 What I've found so far in my test:
 - It's a good thing to first install all the same packages on the new 
 machine first. I didn't do that in my first test and Postfix wouldn't 
 come up because of Exim that was installed on the base version of the new 
 OS. Simple to solve but this wouldn't have happened if I had installed 
 Postfix first as Exim would have been purged.

Right.  They intentionally conflict with each other and push each
other out.  It will sound obvious but postfix can't come up if it
isn't installed.  :-)

But if you were overwriting *everything* on the system from the backup
on one to the new system then after having done so then postfix would
have been installed.  Right?  The binaries in /usr/sbin/postfix would
have been copied into place and the package manager would think it had
installed it in /var/lib/dpkg too.  The biggest issue being any daemon
that changed uids and was running would need to have been stopped
before this and restarted after this.  Right?  This is one of the
issues that makes doing it this way tricky.  Not impossible.  Just
tricky.

 Might be better to start with:
 Old server: dpkg --get-selections > packages
 New server: dpkg --set-selections < packages

Yes.  That is what that was designed for along with some other things
such as the dselect-upgrade and so forth.  Those will be suggested and
with identical versions (Stable, OldStable) they should be able to
replicate the same packages installed on each machine.
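
The full round trip is then something like this sketch:

  # on the old server
  dpkg --get-selections > packages
  # on the new server
  dpkg --set-selections < packages
  apt-get dselect-upgrade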

There are some issues with doing it this way.  You should read about
'apt-mark' and the database flag that indicates whether the package
has been installed explicitly or automatically.  Automatically
installed packages without anything depending upon them are candidates
for 'apt-get autoremove' to remove them.  Explicitly installed
packages are not.  You can dump and replay those flags with apt-mark
as well.
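
A sketch of carrying the auto/manual flags across with apt-mark:

  # on the old server
  apt-mark showauto > auto-packages
  # on the new server
  xargs apt-mark auto < auto-packages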

Re: convert Jessie stable to pure testing

2015-06-06 Thread Bob Proulx
Francis Gerund wrote:
 It should be so simple . . .

It is simple.  :-)

 1) I have a new installation of Debian 8 stable (Jessie).
 
 2) I want to convert it to a pure Debian testing setup, to track testing
 indefinitely.

Beware that Testing is entering the most volatile time in its
lifecycle.  The floodgates are open now.  Updates are flowing.  Life
in Testing is getting interesting again.

 here is my current /etc/apt/sources.list:

Sure.

 Is this what it should be changed to:?

Yes.  That would be okay.

I suggest using the named release candidate stretch rather than
testing.  That way when stretch releases and becomes the new Stable
release at that time your system would not automatically be pulled
forward.  That gives you time to read the release notes and to react
to required actions needed before the next upgrade.

 (In the past, i have tried that, and it screwed up my installation.  I
 don't ever want that to happen again.)

I suspect that it wasn't the syntactic change to sources.list that
screwed up your installation.  Instead it is the nature of using the
release candidate Testing or Unstable versions.  When using those beta
release tracks you are the beta tester.  Sometimes there are problems.
It isn't a released track.  There is no guarantee that you won't have
problems with it.  Historically there have been many accidents and
incidents that cause problem for both Unstable and Testing.  Users of
them are participating in the making of the sausage there.

If you want a trouble free release track that is the Stable release.
Jessie is now the new Stable release.

Bob




Re: Old Computers

2015-06-03 Thread Bob Proulx
Jose Martinez wrote:
 Marc Shapiro wrote:
  Jose Martinez wrote:
   Yeah, there's nothing like making an antique useful.  I remember the
   days of the PDP-11 running *nixWhat I wouldn't give to come up with
   one of those old things!!
 
  My first programming class, back in 1976 was on a PDP-11.  Those were the
  days.  Bootstrap with physical toggle switches on the box to enter the
  binary code.
 
 Boy do I remember those toggle switches!!!  A few years back, I built a Z-80
 based toy, and that was one of things I wanted t have...Toggle switches and
 lights on the front panel!  Made it work too.

Put that PDP 11 experience to work!  Here is a recent job posting for
a PDP 11 Software Designer for a nuclear power plant.  Some things
never go out of style!  Especially when working at a nuclear power
plant.  Hope they will have 500 years worth of spare parts.

  https://ca.linkedin.com/jobs2/view/28135735
  January 20, 2015
  Job description
  Design of new PDP-11 assembly level software as well as the
  extension of existing automated control systems to accommodate new
  functionality.
  ...

Bob




Re: wget vs curl (was ... Re: debian 8)

2015-06-03 Thread Bob Proulx
Reco wrote:
 On Wed, Apr 08, 2015 at 01:17:15PM -0600, Bob Proulx wrote:
  David Wright wrote:
   Reco wrote:
So, in the case of doubt - you use curl or rebuild wget against
openssl. It's that simple.
  
  I know that people have strong feelings for and against curl and wget.
  I haven't ever understood it.  You are the first to quantify why you
  think Debian's curl deals better with https sites.  It appears the
  issues all surround https handling.
 
 Indeed. For example, I stumble upon #686837 on regular basis.
 
 And I'd like to add that wget that's linked against gnutls *would* be a
 good thing *if* it allowed one to poke all GnuTLS knobs - #642051. But it
 does not.

You might be interested to see curl 7.42.1-2 entered unstable today.
It immediately started core dumping for me.  But see also Bug#787638
where the effect was widespread and reportedly in libcurl3-gnutls.  I
know that wget and httping were segfaulting for me too.  An upgrade
later in the day pulled in more upgrades and solved the problem for me.

  http://bugs.debian.org/342719

  Date: Sun, 03 May 2015 13:13:15 +0200
  Distribution: unstable

   * Switch curl binary to libcurl3-gnutls (Closes: #342719)
 This is the first step of a possible migration to a GnuTLS-only
 libcurl for Debian. Let's see how it goes.

If you have opinions you might want to make your comments known.

Bob


signature.asc
Description: Digital signature


Re: httpd virtual package

2015-06-03 Thread Bob Proulx
Anatoly A. Kazantsev wrote:
 P.S: I'm not on the list, please keep me CCed
...

 I have installed lighttpd (same for nginx) on stable/testing,
 but it doesn't provide httpd virtual package.

But it did.  It showed as Provides: httpd in your data.  That is how
it provides that virtual name.

 Provides: httpd, httpd-cgi

There it is. :-)

What are you missing that you are expecting to see?  I have not used
Lighttpd but I do use Nginx and like it and could answer questions
concerning the use of it.

 $ aptitude show httpd
 No current or candidate version found for httpd

The aptitude show command shows information about a package name.
The httpd name is not a package name.  Therefore aptitude does not
show it.

 Same result from apt-cache:
 $ apt-cache show httpd
 N: Can't select versions from package 'httpd' as it is purely virtual
 N: No packages found

Same thing for apt-cache.  However apt-cache tells you it is a pure
virtual name and not a package and no packages named httpd are found.
I think that is a little nicer.
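
If you want to see which packages provide the virtual httpd name then
apt-cache can list them.  A quick check (the Reverse Provides section
of the output is the interesting part):

  $ apt-cache showpkg httpd

That should list lighttpd, nginx and the other web servers among the
reverse provides.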

 I'll post some other information if you need it. This is my first time on
 debian-user@lists.debian.org

Welcome to the mailing list!  I have CC'd you as requested.  Good job
on making that request.  The default otherwise would be to reply only
to the mailing list.

Bob


signature.asc
Description: Digital signature


Re: Problems with SSD

2015-05-29 Thread Bob Proulx
Jochen Spieker wrote:
 Petter Adsen:
  I'm starting to suspect that it is. Either that, or the controller on
  the motherboard, which would be even worse.
 
 Or just the cable (if we are not talking about a laptop). I got rid of
 similar errors in the past by replacing the SATA cable.

If it were me I would swap cables and move to a different SATA port on
the motherboard.  I have seen individual SATA ports fail with the rest
of the ports okay.  I also have the advantage of many different sets
of hardware available and so I would mix and match the various parts
into different systems.  Whether the problem stays with the system or
moves with the swapped part is a good diagnostic aid in determining which
piece of hardware or software is causing problems.  In this case it is
1) the kernel software 2) sata cable 3) sata device 4) motherboard
sata controller.  At least one of those is the problem.  It is a
mental game of Mastermind to determine which.

Bob

  https://en.wikipedia.org/wiki/Mastermind_(board_game)


signature.asc
Description: Digital signature


Re: Two network cards

2015-05-29 Thread Bob Proulx
michael-spree-michael wrote:
 It used to be that one can use two network cards at the same time, after
 configuring /etc/network/interfaces and making the networks static.

Yes.  This is a standard and widely used feature.

 This I did.
 
 The results; the wifi card won't connect and the wired connected.
 
 Is this something different in Jessie?

The list of differences between Wheezy 7 and Jessie 8 is very long.
You can look in the release notes for documentation of all of the
details of the changes.  The expectation is that none of those will prevent
multiple network interfaces from working.  And for all of us running
Jessie 8 with multiple network cards it is in fact working.

You have said very little about what problem you are having or what
software you are using.  That makes it rather difficult for anyone to
help you.  All we can say is 'works for me' and that isn't going to be very
satisfying to you.

For best results please say what software you are using, exactly what
you have tried, and what was the output.
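
As a starting point, a minimal /etc/network/interfaces sketch for two
static interfaces might look like this.  The addresses are examples
only, and the wpa-* lines assume the wpasupplicant package is
installed:

  auto eth0
  iface eth0 inet static
      address 192.168.1.10
      netmask 255.255.255.0
      gateway 192.168.1.1

  auto wlan0
  iface wlan0 inet static
      address 10.10.10.10
      netmask 255.255.255.0
      wpa-ssid yournetwork
      wpa-psk yourpassphrase

Note that only one interface should carry the default gateway line.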

Bob


signature.asc
Description: Digital signature


Re: NFS drive uids/gids completely broken- for a little while

2015-05-28 Thread Bob Proulx
bri...@aracnet.com wrote:
 aha. sounds like my problem. interesting that it's enabled by default.
 i'm assuming that for my rinky-dink set-up with 5 users i don't need it ?

The number of users is not the determining factor.  It is the number
of groups for any particular user.  Without --manage-gids there is an
array size limit of only 16 numbers in the underlying NFS structure,
which limits each user to 16 groups or fewer.  So likely you
don't need it.  However that is the normal tested path these days so I
tend not to mess with it.

I would tend to be more concerned that something glitchy is happening
on your physical network connections that you saw something one moment
and then it went away on another moment.  This may be an indicator of
something else happening.  It is actually easier when things fail hard
because then you can get to root cause.  Hard to do that when the
problem goes away.

Bob


signature.asc
Description: Digital signature


Re: (should be) simple bind problem [possibly solved]

2015-05-27 Thread Bob Proulx
Glenn English wrote:
 apparmor.

Ah!  I would not have thought of that one.

 In the recent Debians (Wheezy++, I think), there is a directory
 /etc/apparmor.d.  In there is a file called usr.sbin.named.  That

Yes.  But it isn't enabled by default.  On a recently installed Debian
Jessie 8 system:

  $ dpkg -l | grep apparmor
  $ 

Usually nothing is installed to start it.  Perhaps something you
installed pulled it in as a dependency?  Looking I see one of my
systems has libapparmor1 but it is still not enabled.  So the presence
of that one library would not be enough to start it.

 After reboot, and after waiting a few minutes, there are no new
 permission error entries in the log. I realize this is kind of far
 fetched, seeing how there was no apparmor startup in init.d, but
 this has been making me crazy, and I've tried many things that
 should have fixed it, so I'd do anything.

I really don't know very much about apparmor.

 I found a note in the Debian wiki saying apparmor is installed by
 default on Wheezy and that it's started by GRUB. That might explain
 why I didn't find anything in init.d.

I don't see it installed by default on the recently installed Jessie 8
system here.  Just a data point.  I wouldn't be surprised to find that
something else (GNOME for example?) might pull it in as a dependency.

 I don't know when Bind slaves try to update the mod times on their
 zone files, but I'm pretty sure the master sends out refreshes to
 the slaves when the master restarts, so I restarted the master. Lots
 of entries in ns2's log about receiving notifies, but no permission
 errors.

Everything depends upon the DNS zone serial number.  When the master
restarts it will send a notify.  The slaves will get the notify and
check the serial number against their cached copy.  If the serial
number is the same or older then nothing further happens.  If the
serial number is newer than their cached copy they will request a
scheduled zone transfer.  It won't happen immediately to prevent a
storm of activity all at once synchronized by the notify.  But within
a randomized short time a scheduled zone transfer will then occur.
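
You can compare the serial numbers yourself by querying the SOA record
on each server.  A sketch, with example.com and the server names as
placeholders:

  $ dig @master.example.com example.com SOA +short
  $ dig @ns2.example.com example.com SOA +short

The serial is the third field of the answer.  If the slave shows an
older serial than the master then the zone transfer has not happened.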

Bob


signature.asc
Description: Digital signature


Re: Definitely straying: Was: Re: Danger of stray : in PATH, Re: Problem Running Application with Alias

2015-05-27 Thread Bob Proulx
Petter Adsen wrote:
 OK, this is veering off-topic - apologies in advance. From what I
 understand, LD_LIBRARY_PATH contains additional places to look for
 libraries that aren't in ld.so.conf.

Off the original topic maybe but definitely a technical discussion of
something important to Debian and its users.

 From ld.so(8):
 
   LD_LIBRARY_PATH
  A colon-separated list of directories in which to search for ELF
  libraries at execution-time.  Similar to  the  PATH  environment
  variable.  Ignored in set-user-ID and set-group-ID programs.
 
 But what order is this information parsed in? Does LD_LIBRARY_PATH
 override ld.so.conf? Ie, could I place a modified version of a
 library somewhere, point LD_LIBRARY_PATH at it, and every binary
 that is linked toward the original library will run functions from
 the modified one, or would I also need to remove the original
 library, so that only the modified one is found?  The danger would
 be much clearer if it overrides, since if it doesn't you will
 probably notice that the original library has been disabled.

Yes.  LD_LIBRARY_PATH overrides the /etc/ld.so.conf file.  The
/etc/ld.so.conf file is for general use by every command on the system.
The LD_LIBRARY_PATH environment variable is specifically for the
programs that have it in the environment.
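
A quick way to see the effect is to ask the dynamic linker which
libraries a binary would resolve, with and without the variable set.
A sketch, with /opt/foo/lib as an example path only:

  $ ldd /bin/ls
  $ LD_LIBRARY_PATH=/opt/foo/lib ldd /bin/ls

If /opt/foo/lib contained a copy of one of the listed libraries then
the second command would show it resolving from there instead.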

Also the command line --library-path overrides the LD_LIBRARY_PATH
environment variable.  But calling the ld.so with --library-path is
very specific and wouldn't ever be something done accidentally.
Calling it explicitly is useful for wrappers needing to set up a
specific environment (libraries and such) different from the native
host environment.

 Am I even on the right track, here?

You are on track.  An LD_LIBRARY_PATH variable exported in the
environment would potentially affect every command run in that
environment.  Okay to have it set to a full path to something such as
/opt/foo/lib but bad if it contained a relative path and '.' is the
likely trouble spot.  Having /opt/foo/lib: set would cause every
command to look for shared libraries in the current directory and would
allow the discussed type of attack against them.

  Some of us would say it is unlikely to happen to *us* but we might all
  agree that it could happen to someone else.  A potential if unlikely
  exploit.
 
 Yes, it would never happen to me. Dangerous attitude when it comes to
 security.

But hard to get people to agree to.  Much easier to get people to
agree that it might happen to someone else.  And if it can happen to
someone else then it can happen and it should be educated against.

I specifically say educated against rather than guarded against.
Because there is an old saying that you can make something fool proof
but you can't make it damn fool proof.  And another that it is hard to
make things foolproof because fools are so clever.  Trying to avoid
every problem by adding layers and layers of software just makes a
bigger mess of things.  Keeping it simple is always better.  Therefore
I wouldn't block against PATH and LD_LIBRARY_PATH with '.' in them.
That would undoubtedly break other cases where it is perfectly
reasonable to do things that way.

Bob


signature.asc
Description: Digital signature


Re: IP performance question

2015-05-27 Thread Bob Proulx
Petter Adsen wrote:
 Reco wrote:
  May I suggest using etckeeper for this? The tool is invaluable if one
  needs to answer a question such as what exactly did I changed a
  couple of days ago?. The usual caveat is that using etckeeper
  requires at least casual knowledge of any RCS that's supported by
  etckeeper (I prefer git for this).
 
 I looked at etckeeper a while back, but I'm not familiar with revision
 control. It is something I could use, to keep track of changes to
 translations I do.

+1 for etckeeper.  It is a tool that I came to lately.  But now having
used it I wouldn't be without it.  It is a really useful safety net.

 From what I understand, it seems git is what most people use these
 days, so maybe that is the best one to learn? I just need something
 that is simple to learn and use.

The biggest advantage to git is that it has the critical mass of users
behind it.  There will always be someone to help you with it.  There
is a huge amount of documentation and many tutorials written about it.  If
you learn it then you will be able to use it with the majority of
every other project on the net these days.  There does seem to be a
lot of griping about it but I find it relatively easy to use and so
personally don't understand why some people dislike it so much.

The disadvantage is that people who use hg mercurial and other systems
will complain that their system is easier to use but disadvantaged by
the huge mass of git users.  (shrug)

Bob


signature.asc
Description: Digital signature


Re: DHCPv6 and DDNS

2015-05-27 Thread Bob Proulx
Philippe Clérié wrote:
 So far I have not seen any trace of an attempt by the DHCPv6 server to
 update the DNS.
 
 Thanks in advance for any suggestion...

Does the dhcp *server* update dynamic dns?  I always thought it was
the client that made the dynamic dns update.  I didn't look this up,
and it is simply a suggestion, so take it with that consideration, but
I think you would want the dhcpv6 client to make the dynamic dns update.
That is probably the difference.  I expect your dhcp clients are doing
the dynamic dns update for ipv4 but not for ipv6.  I would check the
client config.

Bob


signature.asc
Description: Digital signature


Re: NFS drive uids/gids completely broken- for a little while

2015-05-27 Thread Bob Proulx
bri...@aracnet.com wrote:
 This is a weird one.

It is a little weird that it was a transient glitch of a failure.

 Tried to use ssh and saw a bad permissions error on my .ssh/config file.
 
 I do ls -l and i see uids/gids of 2^32-1 or a similar very large integer.
 
 WTF ?!

Are you using --manage-gids?

  root@fs:~# grep manage-gids /etc/default/nfs-kernel-server 
  RPCMOUNTDOPTS=--manage-gids

  $ man rpc.mountd

   -g  or  --manage-gids
  Accept requests from the kernel to map user id numbers into
  lists of group id numbers for use in access control.  An NFS
  request will normally (except when using Kerberos or other
  cryptographic authentication) contains a user-id and a list of
  group-ids.  Due to a limitation in the NFS protocol, at most 16
  groups ids can be listed.  If you use the -g flag, then the list
  of group ids received from the client will be replaced by a list
  of group ids determined by an appropriate lookup on the
  server. Note that the 'primary' group id is not affected so a
  newgroup command on the client will still be effective.  This
  function requires a Linux Kernel with version at least 2.6.21.

That is normal for an NIS/yp environment.  But it means that uid
lookups are done over the network.  A transient network failure would
return -1 error codes for all of the numbers.  Stored in an unsigned
32-bit id field, -1 shows up as 4294967295, which is the 2^32-1 you saw.

 So i go back to the server to make sure the ownership hasn't been
 borked some way and everything is fine.
 
 I go back to my account on the client, do 'ls -l' again, and
 everything is as it should be.
 
 WTF ?!

I am thinking it was a transient network failure coupled with the
above --manage-gids setting.

 1 my NFS drives are mounted at boot. i've noticed that i get errors
 about something not being ready, or auto mounting being a problem,
 but up until now i haven't seen any real problems.
 
 2 i can't look at those boot messages i see in 1.  why is it that a
 permanent method, installed by DEFAULT has never been implemented to
 look at boot messages ?

I don't know either.  I always install bootlogd since it was split
into a separate package.

Bob


signature.asc
Description: Digital signature


Re: mysterious cruft output on Jessie amd64

2015-05-27 Thread Bob Proulx
Deb wrote:
 I'm intimidated by the bug reporting system and kind of afraid to use it,
 but I'll read up on it thoroughly and see whether I can file a bug report
 without getting yelled at (or filing a duplicate by mistake).

LOL!  I have been yelled at in so many bug reports that I am
desensitized to it.  I rather expect that every time I file a report
that I am going to get griped at for it.  So I sympathize.  I have
stopped filing reports against some packages due to this.  But that
shouldn't stop people from doing so when it is the right thing to do.
But I completely understand if you want to avoid the conflict.  In
which case discussing the issues here will motivate someone (perhaps
even me) to file a bug about it.  I have cruft installed but I don't
run it.  It is one of those grand experiments that is perpetually in
development that doesn't seem to have been finished off yet.  I would
hope that someone who actually uses and likes cruft would jump in and
help with it.

 I just reinstalled that library as per your instructions and plan to rerun
 cruft when I have time to check whether it still gives me the same missing
 dpkg.  I'm betting it won't.

I should have also mentioned this in my message.  Let me mention it now.

  # apt-mark markauto libept1.4.12

Normally libraries are installed as dependencies of other packages.
But if you install any package manually that package will be marked as
manually installed.  By installing that library manually it was marked
as manually installed.  Meaning that it would never be a candidate for
autoremove cleaning later.  In order to keep the system tidy (and
you are running cruft so I know you care about keeping it tidy) it
should be marked as auto again.

If you wish to review the list you can list all of the manually
installed packages and all of the automatically installed packages.
They are long lists so I suggest browsing them with the 'less' screen
pager.

  $ apt-mark showmanual | less
  $ apt-mark showauto | less

The packages that are marked as automatically installed become
candidates for 'autoremove' once the package that Depends upon them
is upgraded to a newer library and no longer depends upon them.

  # apt-get autoremove

I have been recommending 'etckeeper' as a good package to check the
contents of /etc and all of the conffiles there into version
control.  You might consider installing it.  It will then commit every
system package change and all /etc file changes into version control.
Meaning that with etckeeper installed I recommend purging packages
instead of removing them.

  # apt-get autoremove --purge

With etckeeper (and/or with good backup, backup is always important)
then packages can be purged removing the conffiles from /etc too.
That is important in keeping a system tidy too.

  $ dpkg -l | grep ^rc

Those packages have been removed but have /etc conffiles left behind
for you to possibly reinstall and use them again.  To be completely
clean those packages should be purged and those /etc conffiles removed
with them.  This isn't really critical until doing major release
upgrades such as Squeeze 6 to Wheezy 7 or from Wheezy 7 to Jessie 8.
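
One way to purge all of those at once, as a sketch; review the list
from dpkg -l before running it:

  # dpkg -l | awk '/^rc/ {print $2}' | xargs -r dpkg --purge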

 Thanks for the link to the manpages bug. I read it and am not concerned,
 unless I should post in that bug report that I've duplicated the findings.

I wouldn't add to the bug report unless there is additional
information or you want to poke the maintainer to move things along.
This is one of those, yes it is a bug, but just sitting there on disk
it doesn't badly affect things.  It just isn't tidy.

Bob


signature.asc
Description: Digital signature


Re: mysterious cruft output on Jessie amd64

2015-05-27 Thread Bob Proulx
Deb wrote:
 I ran cruft through sudo in my three-day-old Jessie amd64 install on my
 personal home tower, and redirected output to a text file. This error
 message displayed on standard console output and was absent from the file:
 
  find: `/run/user/1000/gvfs': Permission denied

I don't think cruft should be searching down the /run tmpfs
filesystem.  I suggest filing a bug against it.  I don't see one for
that problem.

 The application did output successfully and I got a 1,219-line file with
 only one missing DPKG:
 
  /usr/lib/libept.so.1.0.5.4.12

  $ dpkg -L libept1.4.12 | grep libept.so.1.0.5.4.12
  /usr/lib/libept.so.1.0.5.4.12

Why is that reported as being missing?  Perhaps you should reinstall
that package??

  # apt-get install --reinstall libept1.4.12

 and I got the following broken symlinks:
 
  /usr/share/man/man3/cerfcl.3.gz
  /usr/share/man/man3/cerfcf.3.gz
  /usr/share/man/man3/cerff.3.gz
  /usr/share/man/man3/cerfl.3.gz
  /usr/lib/python2.6/dist-packages/python-support.pth

See Bug#765592 which is as yet not fixed.

  http://bugs.debian.org/765592

 Could someone explain the standard console output to me and tell me whether
 the dpkg and symlink outputs are anything to worry about?

Bugs.  I think they are simply bugs.  But it is possible that it
detected an installation problem with libept1.4.12 on your system.

Bob


signature.asc
Description: Digital signature


Re: mysterious cruft output on Jessie amd64

2015-05-27 Thread Bob Proulx
Deb wrote:
 Jessie complained that the markauto option is deprecated and I should use
 apt-mark auto instead. So I did and was informed that the library package
 was already marked auto.

Oh, that is just me being slightly behind.  Things change and I am
used to typing in the old way and if the old way still works I keep
typing it out of habit.

 If you wish to review the list you can list all of the manually
 installed packages and all of the automatically installed packages.
 They are long lists so I suggest browsing them with the 'less' screen
 pager.
 
$ apt-mark showmanual | less
$ apt-mark showauto | less
 
 Okay, this is bizarre. The first command shows that apt, bash, whiptail and
 a whole bunch of other packages that were installed by the netinstaller are
 marked manual. In total there are 263 manual packages on my system.

I guess I should clarify manual in this context.  It means by
explicit installation.  Not necessarily that you yourself as an admin
installed it.  The list you mention apt, bash and others you really
want marked as manual.  That way they won't be autoremoved.  But I
know that you yourself didn't install them.  They came in with the system.
Which may feel automatic to you.  But from apt's perspective it was
explicitly told to install it and therefore it is manual because it
was explicitly installed when the system was installed.

Hopefully that makes sense.  I am not sure I explained it well enough.

 There are 1,048 auto packages listed, including nearly all lib* packages.
 I'm thinking that Jessie must be configured to mark lib* packages as auto
 even if the parent package install is manual.

Correct.  Not just Jessie but all of them.  That is actually exactly
how it is supposed to always work.  You have it right.

For example if one installs feh (image viewer) that will depend upon
libexif12 and other things.  This makes feh manual but libexif12 auto.
Later one removes feh.  Or feh is upgraded to Depend upon libexif14 (I
just made that up) instead.  That would leave libexif12 unused and
auto.  If it is unused (not depended upon) and auto it will be a
candidate for autoremove.  But... *If* libexif12 were marked as manual
then it would never get removed even if nothing used it and nothing
Depends: upon it.
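
If you are curious why a particular library is still installed you can
ask what installed packages depend upon it.  Using libexif12 from the
example above:

  $ apt-cache rdepends --installed libexif12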

This concept isn't so much important as it is useful for keeping a
system clean.  I think most people ignore it and clutter builds up.  A
much smaller number of people try to keep the system tidy.  I happen
to be one of those.  But don't stress if this seems too complicated.
But it is useful to know how things work.

 Python 3 and python3.4 are listed under auto, but python 2.7 is listed under
 manual even though the OS still depends more heavily on 2.7 than on 3 or
 3.4.

I am hoping that by this point in the explanation you see that it
doesn't matter how many packages depend upon a package.  Packages are
auto if they were installed solely due to a dependency.  Packages are
manual if they were explicitly named for installation.

 The only package I've installed through curl and make rather than apt-get,
 and the only package that isn't from the Jessie main repository, is
 valgrind 3.10.1.  It is listed under neither manual nor auto, but it runs
 perfectly.

That local installation should be installed in /usr/local/bin/valgrind
though and isn't a package and isn't managed by the package manager.
You compiled it.  You installed it with 'make install' and therefore
the default installation path should be /usr/local/bin and /usr/local
other paths for the support files.  It isn't a package.  It wasn't
installed using apt-get foopackage or even dpkg -i ./foo.deb and
therefore the package system doesn't know about it.

The /usr/local directory tree is yours fully from top to bottom.  It
is yours to maintain as you wish.  It is completely independent of the
system's package manager.

 I'm putting off installing etckeeper until I can find out whether it will
 mess with my version of valgrind, which is later than the Jessie main
 repository version.

It won't mess with your installed copy of valgrind.  Nothing installed
by the package system will mess with anything in /usr/local.  There is
a clear separation of duties.  The package manager handles /usr/bin,
/bin and the rest of the system at those high levels.  The local admin
handles /usr/local/bin and all of the /usr/local tree.

The first installation of etckeeper will check everything in /etc into
git version control.  After that you can:

  # cd /etc
  # git log
  # git whatchanged

After you have made changes (there aren't changes yet when you first
install it) then you can view the history of all of your system
changes.  And all other git commands are available too.  But mostly
one never needs to *do* anything.  It is simply a record keeper.  It
keeps track of all of the system changes.

If at some point in the future you purge a package, deleting the
conffiles along with it, and then later go, Hey! I want that back
again.  

Re: (should be) simple bind problem

2015-05-25 Thread Bob Proulx
Glenn English wrote:
 root@srv:~# ps -ef | grep named
 bind  2098 1  0 May10 ?00:00:36 /usr/sbin/named -u bind
 root 10498 1  0 May10 ?00:00:50 /usr/sbin/named -c 
 /etc/bind/named.conf

There are two of them running?  That doesn't seem right.  The first
one looks okay but the second one does not.

I would be inclined to kill both of them.  Then start it up again and
check all over again.

  service bind9 stop
  ps -ef | grep named
  kill 10498
  ps -ef | grep named

Make sure none are running.  Then start it up again and check.

  service bind9 start

Did you by any chance configure bind9 to run in a chroot?

  If it isn't that then I would suspect selinux has become enabled but
  not fully configured.
 
 I'm game. How do I find out/configure it?

If you haven't heard of it then it isn't enabled.  I wouldn't suggest
enabling it.  If you haven't heard of it then I think it is not likely
to be the problem.

 root@srv:~# ps aux | egrep -i selinux
 root 13013  0.0  0.0   7828   900 pts/0S+   15:48   0:00 egrep -i 
 selinux
 
 If it's running, it doesn't have a pid. I don't really know what
 SELinux is. I've heard it's a collection of patches to the kernel,
 but that's all I know.

selinux stands for security-enhanced-linux and is a policy layer where
everything is controlled by access control lists.  It completely
changes the traditional security system.  It isn't a daemon.

 I grepped the /etc/default files for selinux. Nothing.
 
 I grepped the /etc/init.d startup files. I found 'selinux-enabled' in the 
 checkroot.sh file (if selinux-enabled ...). selinux-enabled is a small 
 function in /lib/lsb/init-functions.sh:
 
 selinux_enabled () {
 which selinuxenabled >/dev/null 2>&1 && selinuxenabled
 }

 'which selinuxenabled' says there's no such file here.  So does
 'root@srv:/boot# find / -iname *selinuxenabled*'

That command is in the selinux-utils package.  You don't have it.
Making it unlikely that you would have selinux blocking you.  (No need
to install it.)

The next thing I would wonder is if the 'immutable' bit were set on
the file system.  Again from my system.

  $ lsattr -d /var/cache/bind
  -------------e-- /var/cache/bind

(You can read about it in the 'man chattr' man page.)

 This is happening on Dell, Supermicro, and RaspberryPi boxes, all
 running Wheezy with default, and updated, kernels, FWIW. The lone
 Lenny server doesn't seem to have troubles.

It happens on multiple systems?  Oh my that is a problem.  I am afraid
I am running out of ideas.  If it isn't normal user permissions, isn't
selinux, isn't ext immutable then I don't know what it would be.  It
isn't normal.  I am running bind9 on similar random architectures and
systems and I have not run into any problems caching files there.

If all else fails I would be inclined to try an experiment.  I would
open up the permissions on /var/cache/bind to be drwxrwxrwx and then
start bind9 and see what files it produces there.  The owner and group
of the files produced should be a clue.  They should be bind but if
they were something else that would explain the permission denied
message and be a clue as to the problem.

  service bind9 stop
  chmod a+rwx /var/cache/bind
  service bind9 start
  ls -la /var/cache/bind

Bob


signature.asc
Description: Digital signature


Re: Danger of stray : in PATH, Re: Problem Running Application with Alias

2015-05-24 Thread Bob Proulx
David Wright wrote:
 Quoting Petter Adsen:
  PS: What _are_ the security implications of having a PATH set to
  /foo/bar:?
...
 $ cd /home/evilperson/malicious-programs/
 $ emaca  (oops, I mistyped emacs. Funny, why are my files disappearing?)
  (oh dear, their file emaca contains rm -f ~/*)
 
 or, if the colon is at the start of PATH:
 
 $ date   (Funny, why...?)
  (oh dear, their file date is a symlink to emaca)
 
 $ ls -1 /home/evilperson/malicious-programs/
 date
 emaca

You aren't thinking maliciously enough!  :-)

Almost everyone would run 'ls' without a second thought.  Replace it
with a variation that lists all of the files in the current directory
except for the trojan ls command itself.  Then when the command is
running set up the user for setuid access to the system.  Using
adduser $USER sudo for example and then it really all looks very
legitimate.  However that does point back to the user upon an audit.

So perhaps simply copy /bin/bash off to a private location and chown
root, chmod u+s to it for later access.  Do that on the sly with no
visible difference from what they expect and the root user who was hit
with a social engineering attack would probably not immediately detect
it.  Replace a normally suid command like umount or another and if the
root user isn't running other intrusion detection software they won't
notice.

Therefore having the current directory in PATH due to an empty :
either first or last or explicitly with '.' isn't a good idea.  It
leaves one open to being tricked.
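
A quick way to check your own PATH for the problem, as a plain sh
sketch:

  $ case ":$PATH:" in *::*|*:.:*) echo "PATH includes the current directory";; esac

Wrapping the value in extra colons makes a leading or trailing empty
element show up as :: just like an embedded one.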

And it isn't just other local users.  Let's go long.  Web sites such
as Wordpress have a long history of being cracked.  In their preferred
installation mode they want to be able to update themselves.  Meaning
they can write to their own files.  Meaning that if they do get
exploited an attacker can write files to the local file system.  This
is often used for cracking the web site but could also be used for
leaving files such as a compromised 'ls' around too.  Just by itself
maybe they could only drop some files onto the file system owned by a
non-privileged www-data user, causing web site defacement.  But then
later if the user or worse root had the current working directory in
PATH and is tricked into running a compromised ls then the remote
compromise attack succeeds.  An LD_LIBRARY_PATH attack is almost as good
and has all of the same dangers.  This could potentially through a
sequence of missteps become a remote root compromise to the system.
Some of us would say it is unlikely to happen to *us* but we might all
agree that it could happen to someone else.  A potential if unlikely
exploit.

Bob


signature.asc
Description: Digital signature


Re: [OT] 2 dhcp server on same lan

2015-05-23 Thread Bob Proulx
Eike Lantzsch wrote:
 DHCP failover on a small network does not seem to be worth the
 effort, except for training purposes. Exactly that is why this
 thread was very informative for me. Just for kicks (training) I was
 trying for a while to set up two dhcp servers on my network. One
 OpenBSD on a router and the other on Debian arm isc-dhcp-server on a
 cubietruck.

Training is almost always worth the effort as long as something is
learned in the process.

 One sends sync messages and the other connects to ip-ports - duh!
 The philosophy of OpenBSD dhcp seems to be really wide area networks
 not being on the same premises and isc-dhcp seems to be better fitted
 for LAN.
 Now I know that my endeavours are futile.

They were not futile.  You learned something from them.  Therefore the
effort had good value.

 The synchronization between two OpenBSD dhcp servers and the
 failover isc-dhcp-server are essentially different and do not work
 together. Either 2 * isc-dhcp-server (which is available for OpenBSD
 as a package) or 2 * OpenBSD dhcp servers (which does not exist for
 the armv7l cubietruck) = need another machine or maquinita.

Even better you shared your experience with the mailing list and
allowed me to learn something too.  I am completely unfamiliar with
the OpenBSD dhcp daemon but now I know a little more about it.  (I use
the ISC dhcp software.)

Bob


signature.asc
Description: Digital signature


Re: Problem Running Application with Alias

2015-05-23 Thread Bob Proulx
Petter Adsen wrote:
 On Sat, 23 May 2015 09:36:31 -0400
  LD_LIBRARY_PATH=/opt/mopac/MOPAC2012.exe:$LD_LIBRARY_PATH
 
 This is an obvious thing that jumps out at me, this line should be:
 
 LD_LIBRARY_PATH=/opt/mopac:$LD_LIBRARY_PATH
 
 as LD_LIBRARY_PATH is meant to contain directories where shared
 libraries can be found, not an executable binary.

I have a small comment concerning the syntax.  I am sure you are
correct about the problem.  But both of those assume that
$LD_LIBRARY_PATH already exists in the environment.  If it does then
fine.  Let me use foo as a stand-in for the explanation.

  $ foo=/bar
  $ foo=/opt/somepath:$foo
  $ echo $foo
  /opt/somepath:/bar

But if it does not exist then it leaves the environment variable with
a hanging colon at the end.

  $ unset foo
  $ foo=/opt/somepath:$foo
  $ echo $foo
  /opt/somepath:

That is bad and has actually been argued to be a security consideration
in other paths such as PATH.

To avoid that the following shell syntax is typically used.

  $ foo=/bar
  $ foo=/opt/somepath${foo+:$foo}
  $ echo $foo
  /opt/somepath:/bar

  $ unset foo
  $ foo=/opt/somepath${foo+:$foo}
  $ echo $foo
  /opt/somepath

Or to append to the end of the path put the colon on the other side.

  $ foo=/bar
  $ foo=${foo+$foo:}/opt/somepath
  $ echo $foo
  /bar:/opt/somepath

  $ unset foo
  $ foo=${foo+$foo:}/opt/somepath
  $ echo $foo
  /opt/somepath

The dash documentation on this standard shell feature is:

 ${parameter:+word}    Use Alternative Value.  If parameter is unset or
                       null, null is substituted; otherwise, the expansion
                       of word is substituted.

 In the parameter expansions shown previously, use of the colon in the
 format results in a test for a parameter that is unset or null; omission
 of the colon results in a test for a parameter that is only unset.

The form without the colon is correct for path handling so that it
maintains the state in the case that the variable was already set to
an empty value indicating the current working directory.

  $ foo=
  $ foo=${foo+$foo:}/opt/somepath
  $ echo $foo
  :/opt/somepath

Note however that an empty path indicating using the current working
directory is almost never desirable.  In the case of LD_LIBRARY_PATH
one would never want to have it set to an empty value.

Bob


signature.asc
Description: Digital signature


Re: Fwd: rc.local and systemd

2015-05-23 Thread Bob Proulx
Rainer Dorsch wrote:
 Nevertheless I am still wondering why it has the failed status,
 rc.local has an hard exit 0 return code (?) :

Look at the first line of /etc/rc.local script.

 rd@nanette:/etc$ cat rc.local
 #!/bin/sh -e
           ^^
The -e there is what sets the -e flag for the shell, the same as a set
-e inside the script.  With -e the script exits immediately when any
command fails, so the hard exit 0 at the end is never reached and the
exit status is the failing command's non-zero status.
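
A quick demonstration with a throwaway script:

  $ printf '#!/bin/sh -e\nfalse\nexit 0\n' > /tmp/t.sh
  $ chmod +x /tmp/t.sh
  $ /tmp/t.sh; echo $?
  1

The false fails, -e aborts the script right there, and the exit 0 is
never reached.  That non-zero status is what gets reported as failed.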

Bob


signature.asc
Description: Digital signature


Re: Query about possible impact of leap second on Debian Linux

2015-05-23 Thread Bob Proulx
Bret Busby wrote:
 And, with Debian 6 LTS, in /etc/apt/sources.list, I have, apart from
 the commented out lines,
 
 deb http://http.debian.net/debian/ squeeze main contrib non-free
 deb-src http://http.debian.net/debian/ squeeze main contrib non-free
 deb http://http.debian.net/debian squeeze-lts main contrib non-free
 deb-src http://http.debian.net/debian squeeze-lts main contrib non-free

I am only commenting for the archive to confirm that those should be
good.  Here are references for those reading later and wish to check
for updated information.

  https://wiki.debian.org/LTS

  https://wiki.debian.org/LTS/Using

Bob


signature.asc
Description: Digital signature


Re: Boot menu entries

2015-05-23 Thread Bob Proulx
Lisi Reisz wrote:
 Bob Proulx wrote:
  The newest latest kernel installed is on the bottom.
 
 Not here.  I have the oldest on the bottom.
 
 lisi@Tux-II:~$ dpkg -S /boot/vmlinuz-*
 linux-image-3.16.0-0.bpo.4-amd64: /boot/vmlinuz-3.16.0-0.bpo.4-amd64
 linux-image-3.16-0.bpo.2-amd64: /boot/vmlinuz-3.16-0.bpo.2-amd64
 linux-image-3.16-0.bpo.3-amd64: /boot/vmlinuz-3.16-0.bpo.3-amd64
 linux-image-3.2.0-4-amd64: /boot/vmlinuz-3.2.0-4-amd64
 lisi@Tux-II:~$

I got caught by lexicographic version sorting!  Good catch!  I had
the same two with 3.2 and 3.16 too but didn't see it because for me
those were buried in the middle.

I guess I could have used ls --sort=version to sort the list first.
Just because I am pedantic, try this:

  $ dpkg -S $(ls -v /boot/vmlinuz-*)
  linux-image-2.6.39-2-amd64: /boot/vmlinuz-2.6.39-2-amd64
  linux-image-3.0.0-2-amd64: /boot/vmlinuz-3.0.0-2-amd64
  linux-image-3.1.0-1-amd64: /boot/vmlinuz-3.1.0-1-amd64
  linux-image-3.2.0-1-amd64: /boot/vmlinuz-3.2.0-1-amd64
  linux-image-3.2.0-2-amd64: /boot/vmlinuz-3.2.0-2-amd64
  linux-image-3.2.0-3-amd64: /boot/vmlinuz-3.2.0-3-amd64
  linux-image-3.2.0-4-amd64: /boot/vmlinuz-3.2.0-4-amd64
  linux-image-3.8-2-amd64: /boot/vmlinuz-3.8-2-amd64
  linux-image-3.9-1-amd64: /boot/vmlinuz-3.9-1-amd64
  linux-image-3.10-1-amd64: /boot/vmlinuz-3.10-1-amd64
  linux-image-3.10-2-amd64: /boot/vmlinuz-3.10-2-amd64
  linux-image-3.10-3-amd64: /boot/vmlinuz-3.10-3-amd64
  linux-image-3.16.0-4-amd64: /boot/vmlinuz-3.16.0-4-amd64
  linux-image-4.0.0-1-amd64: /boot/vmlinuz-4.0.0-1-amd64

The docs on ls -v are:

  ‘-v’
  ‘--sort=version’
 Sort by version name and number, lowest first.  It behaves like a
 default sort, except that each sequence of decimal digits is
 treated numerically as an index/version number.

:-)

Bob


signature.asc
Description: Digital signature


Re: KVM switch: DVI-D, DVI-I or vga?

2015-05-23 Thread Bob Proulx
German wrote:
 I am shopping locally here for a good KVM switch. For now, I am not
 even sure what type should I get. What are advantages to have DVI
 instead of VGA interface? Are there any justifications in price? VGA
 KVM is about $20, where is DVI is $100. If money is no object, DVI KVM
 is better than VGA? Thanks for all info you can share.

There is, in my opinion, a significant difference in visual quality
between the digital and analog display feeds.  For me I might not pay
$100 for a digital KVM but I also would not give up my digital display
feed for an analog KVM one either.

Analog was okay for lower resolution CRT analog monitors.  Higher
resolution LCD monitors really look a lot better with a digital feed
than with an analog feed.  Feeding a large high resolution LCD monitor
with an analog VGA feed in my opinion really looks terrible.  There
are dynamic artifacts.  It's awful.

My advice is to try it yourself.  Try two tests side by side.  One
being an analog feed of your display and the other being a digital
feed of your display.  It's a brain thing and maybe for your brain you
will look at it and go, I don't see anything significant there.  If so
that is great.  Don't spend any money on it.  On the other hand you
may be like me and find the analog feed to the LCD monitor to be
really pretty annoying.  You should try it and see how it works for
you.

Another possibility is that a lot of monitors have both an analog VGA
and a digital DVI input and can switch between them using the
monitor's on screen display.  You could plug the KVM into the analog
VGA for some light use but still use the digital DVI for your main
work and enjoy the full quality resolution there.  Just an idea.

Bob


signature.asc
Description: Digital signature


Re: Boot menu entries

2015-05-22 Thread Bob Proulx
Emil Payne wrote:
 'Debian GNU/Linux, with Linux 3.2.0-4-amd64'
 'Debian GNU/Linux, with Linux 3.2.0-0.bpo.4-rt-amd64'
 'Debian GNU/Linux, with Linux 3.2.0-0.bpo.4-amd64'
 'Debian GNU/Linux, with Xen 4.1-amd64 and Linux 3.2.0-4-amd64'
 'Debian GNU/Linux, with Xen 4.1-amd64 and Linux 3.2.0-0.bpo.4-rt-amd64'
 'Debian GNU/Linux, with Xen 4.1-amd64 and Linux 3.2.0-0.bpo.4-amd64'

 Ok, and which ones should I purge if I have no idea what the differences
 are?
 bpo?
 rt?
 xen?

These are the kernels you have installed.  For example from my long
running Sid machine that I haven't ever done any cleanup on making it
a good example of a lot of kernels.

$ dpkg -S /boot/vmlinuz-*
linux-image-2.6.39-2-amd64: /boot/vmlinuz-2.6.39-2-amd64
linux-image-3.0.0-2-amd64: /boot/vmlinuz-3.0.0-2-amd64
linux-image-3.1.0-1-amd64: /boot/vmlinuz-3.1.0-1-amd64
linux-image-3.10-1-amd64: /boot/vmlinuz-3.10-1-amd64
linux-image-3.10-2-amd64: /boot/vmlinuz-3.10-2-amd64
linux-image-3.10-3-amd64: /boot/vmlinuz-3.10-3-amd64
linux-image-3.16.0-4-amd64: /boot/vmlinuz-3.16.0-4-amd64
linux-image-3.2.0-1-amd64: /boot/vmlinuz-3.2.0-1-amd64
linux-image-3.2.0-2-amd64: /boot/vmlinuz-3.2.0-2-amd64
linux-image-3.2.0-3-amd64: /boot/vmlinuz-3.2.0-3-amd64
linux-image-3.2.0-4-amd64: /boot/vmlinuz-3.2.0-4-amd64
linux-image-3.8-2-amd64: /boot/vmlinuz-3.8-2-amd64
linux-image-3.9-1-amd64: /boot/vmlinuz-3.9-1-amd64
linux-image-4.0.0-1-amd64: /boot/vmlinuz-4.0.0-1-amd64

The newest latest kernel installed is on the bottom.  Don't delete
it.  It is also possible if you haven't rebooted that you will be
running an older kernel since something newer was installed.

$ uname -r
4.0.0-1-amd64

That is rebooted but another system:

$ uname -r
3.9-1-amd64

Don't remove the running kernel nor the latest kernel.  Remove all of
the others.
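
For example, to remove one of the old ones, purge the package using
the names from your own listing above (a sketch):

  # apt-get purge linux-image-3.2.0-0.bpo.4-rt-amd64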

Bob


signature.asc
Description: Digital signature


Re: Query about possible impact of leap second on Debian Linux

2015-05-22 Thread Bob Proulx
Bret Busby wrote:
 ... so, upon checking (using Synaptic) the tzdata package(s), and
 finding they needed updating, apparently without depending on the
 kernel update(s), I have now updated the tzdata packages. There are
 tzdata and tzdata-java, both of which had updates available.

The tzdata package is updated through the stable-updates section, not
to be confused with the security stable/updates section.  They are
similarly named but different update channels.  The stable-updates
path is what was at one time named the volatile section, for those
that remember it.  It is for updates that by their nature must update
more often
than the Debian Stable release and point release cycle.

Packages such as tzdata are updated when governments change the
timezones.  This happens outside of distribution release cycles.

For Debian Stable Jessie 8 the following shows all three sets of
sources that one should have in their sources.list in order to get all
of the updates they should be getting.[*] This is the main archive
where most packages exist, the updates source for packages such as
tzdata, and the security source for security upgrades.

deb http://httpredir.debian.org/debian jessie main
deb-src http://httpredir.debian.org/debian jessie main

deb http://httpredir.debian.org/debian jessie-updates main
deb-src http://httpredir.debian.org/debian jessie-updates main

deb http://security.debian.org/ jessie/updates main
deb-src http://security.debian.org/ jessie/updates main

Bob

[*] Note that httpredir.debian.org is the redirector and the same as
if one had the geographic alias ftp.XX.debian.org where XX is your
country code such as ftp.us.debian.org.  Either is okay.  I am using
the redirector in the documentation above so that it is generic.  If
you have the country code alias version that is fine too.


signature.asc
Description: Digital signature


Re: HELP- very slow download speeds

2015-05-22 Thread Bob Proulx
Darac Marjal wrote:
 Gary Roach wrote:
  When I start a download, it starts at 50M for the first few
  seconds and then drops to 500K to 100K range.

 Finally, don't rule out the possibility that your ISP is throttling
 you. While you may be synced at 50M and may be able to transfer at that
 for short periods (and thus, the ISP can rightly claim that you have a
 50M connection), they could conceivably throttle your connection in the
 longer term.

I think this is quite the most likely possibility.  I have only
anecdotal reports from friends but what I hear is that often ISPs
allow a full speed burst but then throttle for long term steady state
data transfer.  That matches your reported behavior exactly.  This
allows customers to run a speed test and have it report full speed but
prevent them from getting that speed for a long download such as a
full system upgrade or a large install ISO image download.  Are you
sure your ISP isn't throttling you?

Bob


signature.asc
Description: Digital signature


Re: Query about possible impact of leap second on Debian Linux

2015-05-21 Thread Bob Proulx
Iain M Conochie wrote:
 Bret Busby wrote:
 I have today seen the news report below, and wonder whether it needs
 some kind of patch for Debian Linux, and, if so, whether it has
 already been done, or is pending.
 
 https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=679882#87

Good to see that Debian has already implemented the patches through
Debian Squeeze LTS.

A reasonably good summary and description of the leapsecond issues
appears in the up-voted answer posted here:

  http://serverfault.com/questions/403732/anyone-else-experiencing-high-rates-of-linux-server-crashes-during-a-leap-second

Bob


signature.asc
Description: Digital signature


Re: [OT] 2 dhcp server on same lan

2015-05-21 Thread Bob Proulx
Pol Hallen wrote:
 On same network 192.168.1.0/24 I can put 2 AP with each one own dhcp server?
 (obviously with different range but on same network).
 
 router IP 192.168.1.1
 AP1 IP 192.168.1.2 (dhcp 192.168.1.100-149)
 AP2 IP 192.168.1.3 (dhcp 192.168.1.150-199)

What is the purpose for doing such a configuration?  All answers
depend upon the purpose because it all depends.

For example the above does NOT provide for high availability.  And I
can't think of any other reason for it.  And even then it depends upon
the implementation of the dhcp server.  I would think that the little
blue consumer boxes would not work sufficiently but again it depends.

Bob


signature.asc
Description: Digital signature


Re: (should be) simple bind problem

2015-05-21 Thread Bob Proulx
Glenn English wrote:
 I'm getting (and have been for a while) log entries from my slave
 nameservers like:
 
dumping master file: /var/cache/bind/tmp-0EIP3LrP0G: open: permission 
 denied
...
 drwxrwxr-x 2 bind bind 4096 May 21 10:09 /var/cache/bind/

Good.

 Any ideas?

The first reason that comes to mind for permission denied is that it
doesn't have permission.  Because permission is allowed for user
and group bind, it follows that named must be running as a
different user rather than the bind user.  Therefore the -u bind
option must have been removed.

  $ grep OPTIONS /etc/default/bind9
  OPTIONS=-u bind

  $ ps -ef | grep named
  bind  2257 1  0 May20 ?00:00:27 /usr/sbin/named -u bind
  
  $ id bind
  uid=107(bind) gid=115(bind) groups=115(bind)
The numbers above are not significant and depend upon the
system.  Your numbers will be different from this example.
It is only important that bind shows up in all three places and
not some other name.

Has the -u bind option been removed and the daemon is therefore
running as a different user id?

If it isn't that then I would suspect selinux has become enabled but
not fully configured.

Bob


signature.asc
Description: Digital signature


Re: out of the box wifi adapter

2015-05-18 Thread Bob Proulx
Bob Holtzman wrote:
 Ric Moore wrote:
  Please don't top post. Thanx, Ric
 
 That's standard format on Earth, Sol system,
 Alpha quadrant, Milky Way galaxy, Virgo supercluster

Sorry but it's not.  Top posting isn't the standard format on
technical mailing lists.  The standard format is conversational
quoting.

Bob


signature.asc
Description: Digital signature


Re: aptitude update errors for upgrade to Jessie on amd64

2015-05-18 Thread Bob Proulx
David Wright wrote:
 Quoting Bob Proulx (b...@proulx.com):
  When the program looks up the ftp.us.debian.org name it will get all
  three of the above in some order.  If your system is IPv6 capable it
 ...

 Thanks for that clear exposition. I myself have had no problem with
 these differences (assuming they could even be relevant). But can you
 throw any light on why Pierre is apparently being served a .xz
 compressed file by ftp.fr.debian.org which is making apt-get
 (presumably expecting to receive a .gz file) fail to verify the
 digest? (I can't even try reproducing this as I'm i386 and he's amd64.)

I keep thinking I will have time to give this a proper answer.
Basically that would require manually walking through downloading each
of the Packages and Release files in the archive and verifying
signatures and file checksums.  The tool is complaining about a
mismatch.  At some point doing all of that manually should reproduce
the same mismatch.  When I have needed to do that before (behind
corporate caching proxies) I always found that the caching proxy was
returning stale files.
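
The manual check looks something like this.  A sketch for jessie main
on amd64; compare the checksum against the matching entry in the
Release file:

  $ wget http://httpredir.debian.org/debian/dists/jessie/Release
  $ wget http://httpredir.debian.org/debian/dists/jessie/main/binary-amd64/Packages.gz
  $ sha256sum Packages.gz

If the checksum differs from the Release entry then something between
you and the archive is serving stale or mangled files.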

That is about the only hint I can provide at this time.

Bob


signature.asc
Description: Digital signature


Re: out of the box wifi adapter

2015-05-18 Thread Bob Proulx
Tony van der Hoff wrote:
 Bob Proulx wrote:
  If you didn't think of http://www.monoprice.com/ for that 100 foot
  cable let me make a mention of it here for the future.  I am a happy
  customer of them.
  
http://www.monoprice.com/Category?c_id=105&cp_id=10208&cs_id=1020814

 Beware: These cables tend to be prone to UV degradation; best to run
 them in some sort of conduit for use outdoors. Outdoor grade cable is
 available, though, as is armoured cable.

True.  But for $12 for a 100ft cable it is hard not to simply go for
it and deal with a replacement a decade later after it finally fails.

About five years ago I ran a temporary cable up a tower.  It was
a red colored cable so definitely not meant for UV environments.
Black is better for high UV environments.  But red was what I had
available and therefore became a temporary installation.  Now all of
these years later it is somewhat faded but still usable.  Hard to call
it temporary after several years.  I keep thinking I will replace it
one of these days with one with UV rated insulation.  But it hasn't
needed it yet.

Bob


signature.asc
Description: Digital signature


Re: RAID question

2015-05-18 Thread Bob Proulx
Gary Dale wrote:
 To elaborate, declaring them RAID in the BIOS will make them look
 like one drive. This would prevent mdadm from operating
 entirely. You would be relying on the motherboard's firmware to
 handle the RAID, which is generally not a good idea.

One general problem with BIOS raid is that the raid format stored
onto the disks is usually proprietary.  If the motherboard
dies for any reason then you will also lose the data on the disks
unless you have an identical raid controller motherboard upon which to
transfer the drives.  The disks may be raid and okay but a motherboard
loss is a single point of failure.

 Unless you have a high-end RAID controller, mdadm works best.

I would add unless you have *two* or more identical raid controllers
so that you can transfer the disks and rescue the data in the case of
a raid controller card failure then mdadm works best.  With mdadm the
disks are always visible to any other system.  With mdadm no special
hardware is needed.
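
Creating a RAID1 mirror with mdadm is a one-liner.  A sketch; the
partition names are examples and the command destroys whatever is on
them:

  # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1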

Hardware raid controllers work really well in data centers with rack
after rack of identical systems.  But for SOHO (Small Office Home
Office) use needing redundancy of identical raid cards is
problematic.

Bob


signature.asc
Description: Digital signature


Re: why are libraries in jessi more up to date

2015-05-18 Thread Bob Proulx
Anil Duggirala wrote:
 I'm a newbie and would like to know why libraries in Jessie are so much
 more up to date than in wheezy ? If the libraries have been tested and
 are stable then why aren't they available in the wheezy repositories. I
 had a terrible time, trying to get a newer version of glibc to play some
 games in wheezy, and the version in jessie is much more up to date,
 thanks for the info,

If everything in Wheezy 7 were upgraded to the latest best known
versions then it would be called Jessie 8.  That is exactly how Jessie
8 was created.  Jessie 8 is Wheezy 7 with all of the libraries and
other packages updated.

Bob


signature.asc
Description: Digital signature


Re: Grub2 (Wheezy) booting off either disk in a RAID1 mirror

2015-05-18 Thread Bob Proulx
Darac Marjal wrote:
 Andrew Wood wrote:
  Ive got 2 disks in a RAID1 mirror and am trying to setup Grub 2 so that I
  can boot from either disk if one fails.
  Ive tried using dd to copy the MBR over but when booting from  one of the
  disks I just get the word GRUB which I guess is because its looking for the
  stage 2 loader on the other disk?
  
  How can I configure Grub2 correctly such that I can boot with either disk
  removed?
 
 I think you should be able to install grub (that is, run grub-install)
 for both devices (grub-install /dev/sda ; grub-install /dev/sdb # for
 example).

Using grub-install /dev/sda ; grub-install /dev/sdb should definitely
work for both disks.
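
On Debian you can also record that choice persistently so that future
grub package upgrades reinstall to both disks.  Run:

  # dpkg-reconfigure grub-pc

and select both /dev/sda and /dev/sdb at the install devices prompt.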

It is definitely good to test.  Andrew I applaud you for doing the
testing of the installation.  That is awesome.

If all else fails and it were me I would swap the cables so that the
order of sda and sdb are reversed.  Then use grub-install /dev/sda to
write the mbr onto the new sda drive.  Then swap them back, although
that shouldn't matter at that point.  It is good to know that you can
use the debian-installer in rescue mode to boot and chroot to the
system in order to run grub-install there even if the BIOS booting
drives won't boot.

Bob


signature.asc
Description: Digital signature


Re: aptitude update errors for upgrade to Jessie on amd64

2015-05-15 Thread Bob Proulx
Brian wrote:
 Bob Proulx wrote:
  You are currently using:
  
deb http://ftp.fr.debian.org/debian jessie main
  
  That is okay.  Good!  But for the purpose of this task change that to
  a different but still valid mirror.  This would be a good time to try
  out the new http redirector which has just recently become an official
  Debian resource.[1]
  
deb http://httpredir.debian.org/debian jessie main
  
  That httpredir.debian.org was previously known as http.debian.net for
  those on the list that are familiar with it.
  
  This is a good configuration for anyone geographically anywhere as it
  will redirect to the best known nearby mirror.  The effect for you
  with this problem is that it will be a different appearing URL than
  the previously used ftp.fr.debian.org and will therefore ask for
  different files.  The previously locally cached files on your disk in
  /var/lib/apt will be expired as no longer being in your sources.list
  file.  If mismatched cached proxy files are your only remaining
  problem then this should fix it.  Make sure to 'apt-get update' after
  changing the sources.list file.
 
 A small reservation, much as we are advocates of http.debian.net in
 these parts. If the intention is to eliminate the mirror as a source
 of a problem it may not be sufficient. For whatever reason, the
 redirector may still choose the mirror you do not want as the best
 one to use.

I still consider the redirector somewhat experimental.  There are
reports that it sometimes does not select the best mirror.  It might
not detect quickly enough that a mirror has gone out of sync, leaving
a window when it would still be redirecting to a bad mirror.  It can't
be perfect.  However none of the other strategies such as the
ftp.XX.debian.org country code alias set are perfect either.  I have
had to report mirrors in the US set being stuck.  But both schemes
seem to be working pretty well.  That is why I think of it as a try it
out policy.  Try it.  If it works then great.  If it doesn't for some
reason then continue to use the previous country specific mirrors.
That is my thinking.

Bob




Re: out of the box wifi adapter

2015-05-15 Thread Bob Proulx
Raymond Jennings wrote:
 I wound up buying a 100 foot cat5 cable and running it from the house to
 the trailer :P

My perspective is that nothing is as reliable as hardline wire!  It
will be much more immune to radio noise trouble.  Speedy and
reliable.  Wire will almost always be my choice if possible.

If you didn't think of http://www.monoprice.com/ for that 100 foot
cable let me make a mention of it here for the future.  I am a happy
customer of them.

  http://www.monoprice.com/Category?c_id=105&cp_id=10208&cs_id=1020814

Bob





Re: About alternatives in stretch - testing.........

2015-05-15 Thread Bob Proulx
Charlie wrote:
 A question about alternatives in stretch:

 $ update-alternatives --config x-www-browser
 * 1 /usr/bin/chromium 40 manual mode
...
 In claws-mail press a URL and it opens in iceweasel.

At first it sounds like claws does not call x-www-browser but instead
calls iceweasel directly.  However I installed claws in order to test
this and for me it did call x-www-browser okay.  So claws itself must
be calling x-www-browser and something on your system is resolving
that to iceweasel.

   .1. Is the alternatives script not working or

Very likely the alternatives for x-www-browser are correct.  Did you
try it?  Please try it and check that it works.  Check that it exits
with a zero (success) exit code.

  $ x-www-browser
  $ echo $?

What starts?  Chromium?  Then your configuration is fine.  If not then
that is the problem.  But that is unlikely if your Testing is up to
date.  Does it return an exit code of 0 okay?

   .2. Is Claws mail not checking it

For me when I installed claws just now as a test, set x-www-browser to
chromium, and then clicked on a link to open in a web browser window
it opened chromium.  Therefore it isn't a systematic problem.  It must
be something in your environment.

   .3. Can I change this manually and if so where?

Almost certainly.  You did very well starting with the alternatives
for x-www-browser.  That's great.  I would start there too.  But since
it works for other people (me!) the problem must be something specific
to your environment.

Bob

P.S.

 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt9-3 (2015-04-23) x86_64 GNU/Linux

Just for future reference: the 'uname -a' output showing kernel
version information isn't useful for much these days.  At one time it
was how we told if users were using BSD or HP-UX or Solaris or IBM AIX
or whatever.  But if you say you are using Debian Stretch then we know
you are actually using Debian Testing and that is good enough.  I
prefer to say Testing rather than a name because Stretch is only a
candidate and will change every day for the next two years until it is
finally released.  It doesn't stop changing until then. 




Re: aptitude update errors for upgrade to Jessie on amd64

2015-05-15 Thread Bob Proulx
David Wright wrote:
 I noticed that on repeating the former, I got a very different file,
 and this might be because ftp.us.debian.org had resolved to a different
 IPv4 address (but IPv6 was the same).

  $ host ftp.us.debian.org
  ftp.us.debian.org has address 128.61.240.89
  ftp.us.debian.org has address 64.50.233.100
  ftp.us.debian.org has IPv6 address 2610:148:1f10:3::89

It is at this moment a list of two different IPv4 addresses and one
IPv6 address.  Looking those up will show that debian.gtisc.gatech.edu
has both IPv4 and IPv6 available while ftp-nyc.osuosl.org only shows
an IPv4 address.  So basically two mirror sites are available.

Depending upon various things you will somewhat randomly get one or
the other site.  After having gotten one site it will be sticky for
the dns time to live value after which it will expire and the
selection process will repeat.

 The appearance of the webpage had also changed since yesterday, with
 logos from Oregon State University, Open Source Lab, TDS and Friend of
 OSL scattered around that weren't there before.

Basically the difference is:

  http://64.50.233.100/debian/
  http://128.61.240.89/debian/

When the program looks up the ftp.us.debian.org name it will get all
three of the addresses shown earlier in some order.  If your system is
IPv6 capable it will prefer the IPv6 address and always use it.  If not
then it will select one of the two IPv4 addresses and use it.  The
different mirror sites are using different software.  Some sites
advertise their own information and others do not.  The archive data
provided is the same in either case.  And either way the Release file
is cryptographically signed and checksummed so that it can be trusted
regardless of the host transporting it.  We appreciate the mirrors
making their bandwidth and hosting available for Debian.

Bob




Re: a replacement for ssh packages in Jessie?

2015-05-14 Thread Bob Proulx
Paul E Condon wrote:
 I go into this seemingly unnecessary detail because the behavior of ssh
 in this environment is very strange:

I think it makes sense and can be explained.

 Sitting at Big, logged in as user pec, I can:
 
 connect to pec@gq without giving any password 
 connect to root@gq also without a password
 connect to pec@dl2 also without a password

Good.

 But I CANNOT connect to root@dl2. When I try I'm asked to type in a password
   and when I type it in I get a response of denied, please try again.
 I know the root password to dl2 and the password for the ssh key that
 I generated for /root/.ssh/id_rsa, and I carefully contrived to make it
 the same as the system password of the system user 'root'.

I think the ssh rsa keys on the server for dl2 are broken in some
way.  This is preventing you from logging in using ssh rsa keys.

Jessie 8 includes a new default configuration:

  PermitRootLogin without-password

That prevents logging in with a password.  Only ssh rsa keys are
allowed.  But if the ssh rsa key login is broken then it exactly
matches your description.

The usual problem is permissions.  Double check all of the file
permissions on the server side of things.  It is the server that is
the problem.  This entire path cannot be writable by group or other.

  # ls -ld / /root /root/.ssh /root/.ssh/authorized_keys
  drwxr-xr-x 26 root root 4096 Feb 17 13:10 /
  drwxr-xr-x 27 root root 4096 May 10 15:17 /root
  drwx--  2 root root 4096 Dec 18  2013 /root/.ssh
  -rw-r--r--  1 root root 1440 Oct 29  2011 /root/.ssh/authorized_keys

Check the /var/log/auth.log file for errors from the sshd daemon.

  # less /var/log/auth.log

If all of the checking fails then log in on the console and start up a
debugging sshd session manually on a spare port, 2222 here as an
example:

  root@dl2:~# /usr/sbin/sshd -d -p 2222
  debug1: sshd version OpenSSH_6.7, OpenSSL 1.0.2a 19 Mar 2015
  debug1: private host key: #0 type 1 RSA
  debug1: private host key: #1 type 2 DSA
  debug1: rexec_argv[0]='/usr/sbin/sshd'
  debug1: rexec_argv[1]='-d'
  debug1: rexec_argv[2]='-p'
  debug1: rexec_argv[3]='2222'
  Set /proc/self/oom_score_adj from 0 to -1000
  debug1: Bind to port 2222 on 0.0.0.0.
  Server listening on 0.0.0.0 port 2222.
  debug1: Bind to port 2222 on ::.
  Server listening on :: port 2222.

Then try to log in using port 2222.

  $ ssh -p 2222 dl2

That will cause the debugging sshd to print out more information.
Hopefully the clues as to the problem will be there.  You can increase
the debug level from one -d to two -dd to three -ddd if needed for
more debug verbosity.
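
The ssh client can produce its own debugging trace at the same time.
Using the same example port as above:

  $ ssh -vvv -p 2222 root@dl2

Comparing the client -vvv output against the server -ddd output
usually shows which keys were offered and why they were rejected.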

 lsh-client
 lsh-server
 lsh-utils

As far as I know those were created at a time when there was more
friction between the OpenSSL license and the GNU GPL license.  It is a
political philosophical difference.  While they are both free(dom)
licenses they are not directly compatible.  Currently GPL software
that links with OpenSSL contains an exception to allow it.

As far as I know the lsh project was an attempt to produce a fully GPL
licensed ssh stack from top to bottom using the GnuTLS library.  That
would create a fully consistent license for all parts of it without
any exceptions needed.  But the project has floundered and OpenSSH has
prospered.  I stick with OpenSSH.

Bob




Re: aptitude update errors for upgrade to Jessie on amd64

2015-05-13 Thread Bob Proulx
Pierre Frenkiel wrote:
   I have still the same problem with Hash Sum mismatch
   Do you have any idea on how to fix it?

Is your system behind a proxy cache of some sort?  This problem is one
sometimes seen when files of different ages are cached and served
causing the entire set of files to be out of sync with each other.
This is sometimes seen when behind a caching proxy server such as
those on a large company network.

There are various solutions.  I think the simplest is to change the
sources.list file to a different mirror.  That will have the effect of
asking for different URLs and will avoid the previously cached files
and will cause the local copies of the old files to be expired.

You are currently using:

  deb http://ftp.fr.debian.org/debian jessie main

That is okay.  Good!  But for the purpose of this task change that to
a different but still valid mirror.  This would be a good time to try
out the new http redirector which has just recently become an official
Debian resource.[1]

  deb http://httpredir.debian.org/debian jessie main

That httpredir.debian.org was previously known as http.debian.net for
those on the list that are familiar with it.

This is a good configuration for anyone geographically anywhere as it
will redirect to the best known nearby mirror.  The effect for you
with this problem is that it will be a different appearing URL than
the previously used ftp.fr.debian.org and will therefore ask for
different files.  The previously locally cached files on your disk in
/var/lib/apt will be expired as no longer being in your sources.list
file.  If mismatched cached proxy files are your only remaining
problem then this should fix it.  Make sure to 'apt-get update' after
changing the sources.list file.
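
As a concrete sketch, assuming the old mirror name appears only in
/etc/apt/sources.list and running as root:

  sed -i 's|ftp\.fr\.debian\.org|httpredir.debian.org|' /etc/apt/sources.list
  apt-get clean
  apt-get update

The apt-get clean step is optional but also discards any package files
previously downloaded from the old mirror.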

Bob

Useful Resources:

[1] https://lists.debian.org/debian-devel-announce/2015/05/msg3.html

http://httpredir.debian.org/




Re: Resolver not working when network is up

2015-05-13 Thread Bob Proulx
B.R. wrote:
 ​What​ could be the problem? How should I investigate this?

Just guessing...

In your nginx configuration are you specifying a resolver that is
different or unavailable?

  http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver
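
For reference, a resolver line in the nginx configuration looks like
this, where the 127.0.0.1 assumes a local caching nameserver:

  resolver 127.0.0.1 valid=30s;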

Another guess: installing a local caching nameserver would likely work
around the problem, combined with 'nameserver 127.0.0.1' in the
/etc/resolv.conf file.

  apt-get install bind9

What version of debian are you running?

  cat /etc/debian_version

Bob




Re: Resolver not working when network is up

2015-05-13 Thread Bob Proulx
B.R. wrote:
 I managed to solve the problem with some help from debian IRC channel.

Great!  Glad to hear you have it solved.

 The problem lay in /etc/network/interfaces, where my eth0 interface
 was set up with the 'allow-hotplug' directive.
 Still wondering why I ever did that... oO

Both 'auto' and 'allow-hotplug' should work.  The general movement by
the-powers-that-be is to move everything to the event driven hotplug
interfaces.  Therefore it *should* work.  But obviously the
synchronous boot time init is the one that has been traditionally used
and is the most well tested.  The event driven interface is getting
rewritten and the default for Jessie just changed to it.  Basically
everything is different in the new Jessie using the defaults.
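
For comparison, the two stanzas differ only in the first directive.
The eth0 and dhcp here are just an example:

  # synchronous: configured at boot time, boot waits for it
  auto eth0
  iface eth0 inet dhcp

  # event driven: configured when the hotplug event arrives
  allow-hotplug eth0
  iface eth0 inet dhcp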

 In short, that allowed the interface to be declared 'mounted' while still
 unavailable. Services requiring the network to be up were then confused and
 reported heavy errors. nginx was one of those.
 Reverting to the standard 'auto' solved the problem and the dependency is
 now met.

Since this *should* work and you have a failing test case where it
does not I encourage you to file a bug report on it.  Unfortunately I
am not sure which package should get the bug.  Plus it depends upon
some other specifics of your configuration.  But I think it definitely
warrants getting a bug filed against it.

Bob




Re: no root file system after encryption

2015-05-12 Thread Bob Proulx
Henrique de Moraes Holschuh wrote:
 The bathtub curve also applies for software systems, in practice.  When
 you aim for realiability, you need to consider the general maintenance
 state of the underlying kernel code (bitrot that crept in as other parts
 of the kernel changed and evolved, general increase in how brittle the
 code is due to accumulated maintenance, how much testing and attention
 from developers it still gets, etc), not just filesystem features.
 
 In that sense, ext2 is not nearly as good a choice as it once was.  A
 newly created ext3 with default parameters (yes, that means it gets a
 journal -- that's how it gets most use and most testing) is a better bet
 nowadays as far as present and future reliability goes.

Those are valid points.  However I think you have discredited ext2
much too soon.  It is still widely used, widely enough that serious
bugs would be noticed and fixed quickly.  Plus I rather think that
because ext2 isn't glamorous the code isn't getting creeping features
added, and that is a reason it remains stable.

Bob




Re: aptitude update errors for upgrade to Jessie on amd64

2015-05-12 Thread Bob Proulx
Pierre Frenkiel wrote:
 Get: 11 http://linux.dropbox.com wheezy/main i386 Packages [1,150 B]
 
 why these references to wheezy?

Check for additional configuration files in /etc/apt/sources.list.d/*
where I expect you will find a file referencing linux.dropbox.com
wheezy.
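
For example this should show every stray reference, assuming the
standard apt configuration paths:

  grep -r wheezy /etc/apt/sources.list /etc/apt/sources.list.d/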

At a guess I expect that file to be owned by a package.  *If* there is a
file there *then* this will say what package owns it.

  dpkg -S /etc/apt/sources.list.d/*

I assume you installed it for a reason.

Myself I find packages that do this type of thing bad because they
cause problems such as the above and I avoid them.  Debian packages
don't do this.  If they did it would be a bug report for them.

Bob




Re: Default virtualhost on Debian Jessie with Apache 2.4.10

2015-05-11 Thread Bob Proulx
Marco Stoecker wrote:
 If I've read the thread carefully and I thought that I indeed have a similar
 problem as Daniel has, which in summary means If you disable the default
 site via a2dissite, it will nevertheless show the index.html in
 /var/www/html, when you call localhost or the dedicated IP of the machine

Did you disable the port 80 listen statement in the
/etc/apache2/ports.conf file?  That is a required step when trying to
disable port 80.  If you didn't then getting a page is expected.

 I apologize if I've raised confusion about my mailman site, this
 should have been an example to point out what I have meant, it was
 not the problem itself. Anyhow thx for your support, I appreciated
 it.

No worries.  We could sit down over lunch and a beverage and have an
enjoyable conversation over it.  But I still think the two things we
have been talking about in this thread are in fundamental conflict
with each other.

Maybe I am just being very strict and rigid in my thinking.  When
dealing with the computer it helps me to think like the computer
thinks and operate with it exactly as it will operate.  Basically I
follow the same policies and programs as the computer.  I do this
intentionally and with as much vigor as I can muster!  This helps me,
but dare I say that most people don't operate this way.  Most people
tend to be very fuzzy in their thinking.

For instance with Apache's configuration there are many independent
variables, each independent of the others.  It either listens on
port 80 or it does not.  It is completely one or the other with no
grey space.  Daniel wanted to disable http access entirely.  To do
that the ports.conf file must be modified to remove the Listen 80
statement.  It either listens or it does not listen.  This is
completely independent of having a default site.  If Mailman is
installed and has a VirtualHost entry but isn't the default site then
that is independent and separate.  Setting up Mailman as the default
site wouldn't be typical and so can't be shipped as a default.  But it
is easily configured to be the default site.
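
As a sketch, disabling http entirely on a stock Debian Apache means
editing /etc/apache2/ports.conf to comment out the Listen 80 line and
then reloading:

  apache2ctl configtest
  service apache2 reload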

And I think you have that figured out now?  Maybe?  Set up the Mailman
VirtualHost to have the _default_ config.  Or set it up to be the
first and/or only VirtualHost configured which will do the same
thing.  Either way.  That should set you up.

Bob




Re: need help with approx-gc

2015-05-09 Thread Bob Proulx
Paul E Condon wrote:
 The following is just a few examples from kern.log:
 May  8 11:32:49 cmn kernel: [4880283.861051] end_request: I/O error, dev sda, 
 sector 16136192

Ouch!  You have a disk that is crying out for help.  Oh the pain and
suffering of it!

 All of them have the same sector number. This is the sda drive,
 which is formatted as ext4. Is there some way that the automatic
 reallocate could be repaired by a forced manual fsck? And is the
 rescue function on the netinst CD adequate for this?

I have often been in your same situation.  I would ensure that the
backup is current and valid and then replace the disk.  That is me.  I
have seen disks get worse very quickly after they have exhibited
failures.  Modern disk controllers keep internal spares.  By the time
the disk is showing errors externally the internal spares have
probably all been consumed with other failures.

Problems like this will quickly make you a believer in RAID.  I pretty
much raid everything these days just to avoid being in this
situation.  In a RAID the bad disk would have already been kicked out
of the raid array.  It would then be left running in degraded mode on
the remaining drives.  The system would keep running without
problems.  Replacing the failing drive and backfilling the raid array
can all occur while the system is up and online.
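
With Linux software raid the swap is a short sequence.  The md0 and
sda1 names here are assumptions, adjust for the real array and member:

  mdadm --manage /dev/md0 --fail /dev/sda1
  mdadm --manage /dev/md0 --remove /dev/sda1

Physically replace the drive and partition it to match, then:

  mdadm --manage /dev/md0 --add /dev/sda1

The array resynchronizes in the background while the system stays
online.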

 Not running SMART.
 What Debian package provides smartctl ?

  apt-get install smartmontools
  smartctl -l error /dev/sda

I expect that to show errors.

  smartctl -t short /dev/sda
  sleep 120
  smartctl -l selftest /dev/sda

I expect that to show errors.

 I don't think the following tests will make the reallocation problem
 go away.

Nope.  Seems like a disk failure to me.

 I was planning to do something else this weekend, Oh well.

RAID.  I can't say enough good things about it in these situations.
And backup.

BTW...  I have a low priority machine that is crying right now that
SMART selftests are failing.  It hasn't gotten to the actual I/O
failure error stage yet but it is only a matter of time.  It is a low
priority machine so I haven't actually done anything yet.  It is still
up and running.  But I have a disk and as soon as I get a few spare
minutes this weekend I am going to go swap out the failing disk for
another.  But tomorrow looks pretty busy for me.  I probably won't get
to it until Monday.  And I have no stress about it because it is a
raid and the other disk is healthy.  Plus backups are current.

Bob




Re: Temporarily hold a package............

2015-05-09 Thread Bob Proulx
Michael Biebl wrote:
 Bob Proulx:
  Charlie wrote:
  I had never even known about apt-mark and when Matthias pointed it out,
  tried it, read the man page and was amazed.
  
  The functionality of 'apt-mark hold' was new in Wheezy 7.  Squeeze 6
  didn't include that functionality.
 
 Sort of.
 In Squeeze and earlier releases you could already use
 dpkg --get-selections > file
 edit file and s/install/hold/ for the desired package
 dpkg --set-selections < file
 
 This had the same effect as apt-mark hold. Arguably, apt-mark provides
 the nicer interface :-)

Right.  New in apt-mark.  Has existed in dpkg --set-selections and
get-selections for a long time.  It is slightly easier to use in
apt-mark now.  A good improvement.

 If you're interested which packages are currently put on hold, run
 dpkg --get-selections | grep hold

Or now in the new apt-mark run:

  apt-mark showhold
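
The whole hold cycle looks like this, with the package name foo being
hypothetical:

  apt-mark hold foo
  apt-mark showhold
  apt-mark unhold foo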

Bob





Re: ssh tunnels or openvpn/IPsec?

2015-05-09 Thread Bob Proulx
Petter Adsen wrote:
 Now the question becomes; AFAIK, I could do this with ssh tunnels and
 forward the ports on my router/firewall, or I could use something like
 openvpn or IPsec (strongswan).

Yes.  Exactly.

Also 'stunnel4' is useful too.

I would avoid IPsec.  Last I looked there were more than 55 RFCs that
had some impact on IPsec.  It has traditionally been rather a messy
thing.

 The problem is that I haven't really messed with any of these before
 - what would be the best choice in this situation?

 Note that I'm not asking for a complete configuration, all I want is
 some advice as to which of these technologies I should begin to read up
 on. The IPsec article on the Debian wiki is from Sarge, so it is quite
 outdated, but the openvpn article is recent and seems helpful.
 
 Any insights/advice/links, etc?

Using ssh tunnels will get you 80% with 20% of the work.  Using
OpenVPN will get you 100% with 100% of the work.  Using 'autossh' to
manage ssh tunnels is very reliable to run and very quick and easy to
set up.

I use all of autossh/ssh tunnels, stunnel4, openvpn in different
places.  I tend to like and use the autossh/ssh tunnels because they
are quick and easy and work well enough that I can move along to
something else without spending a lifetime managing them.  It doesn't
require any routing table modifications.
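
A minimal sketch of such a tunnel, with the host name and port numbers
purely illustrative, forwarding a local port to a syslog listener on a
remote host:

  autossh -M 0 -f -N -o ServerAliveInterval=30 \
    -L 1514:localhost:1514 loghost.example.com

The -M 0 disables autossh's separate monitor connection and relies on
ssh's own keepalives, -f puts it in the background, and -N means run
no remote command.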

I like stunnel4 for some things because it also is very easy to set up
and very reliable.  Either ssh or stunnel would seem to be good simple
effective choices for remote syslogging.  I might lean toward stunnel
for this.  It all depends.  Using stunnel benefits if you have signed
https ssl certificates already that can be verified by stunnel.

Both ssh and stunnel use TCP which means that in terms of ultimate
performance and ultimate efficiency you are ending up with TCP over
TCP and that isn't perfect.  TCP over TCP will use some resources and
time transporting packets somewhat inefficiently.  I think for your
example of using remote syslog logging I wouldn't worry about it.  It
is a non-interactive task and the machines won't care when talking to
each other.  No one will ever notice the inefficiency.

When operating interactively such as working from my laptop to my
remote servers I am usually interactive.  That is when transport
artifacts of latency become noticeable and annoying.  There I have put
in the extra work to set up openvpn for the 100% solution.  It uses
UDP for the transport avoiding the TCP over TCP issues.  It is more
work to set up initially due to dealing with setting up ssl
certificates and routing.  But having set it up it is a high
performance solution that does 100% of the job.

I would probably start your remote syslog task using autossh/ssh and
then worry about doing something more when the need for more arises
and not before.

Bob




Re: Temporarily hold a package............

2015-05-09 Thread Bob Proulx
Charlie wrote:
 I had never even known about apt-mark and when Matthias pointed it out,
 tried it, read the man page and was amazed.

The functionality of 'apt-mark hold' was new in Wheezy 7.  Squeeze 6
didn't include that functionality.  Squeeze 6 only included controls
to mark and unmark the automatically installed flag used by 'apt-get
autoremove'.  So relatively new for people who have been using Debian
for a while.

Bob




Re: Open ports

2015-05-09 Thread Bob Proulx
Christian Seiler wrote:
 Bill wrote:
  what uses them and why shouldn't I close them?
  (I'm assuming there must be a good reason to have wide open ports.)

It is debatable whether the old Sun RPC services should be installed
by default.  I do use and manage NFS but I wouldn't install it by
default on any machine not using it.  If you are not serving NFS then
you don't need it.  If you are serving NFS then it will get installed
as a matter of course.

 rpcbind is started from /etc/init.d/rpcbind. If you don't use NFS or NIS
 at all, you don't need to have that running. To disable it under Wheezy,
 use:
 
 update-rc.d rpcbind disable
 
 After that, it won't be started anymore at boot.

Instead of disabling it I am of the opinion that it should be removed
if it isn't going to be used.  If in the future someone were going to
set up an NFS server on the system then it can trivially be installed
again.  So easy to install that removing it instead of disabling it
seems like the better way to go in my opinion.  One less package that
might need a security upgrade at some point.  One less package on the
disk to manage.  Just simplify.

  # apt-get purge rpcbind

Bob




Re: Help with ddrescue

2015-05-09 Thread Bob Proulx
German wrote:
 Bob Proulx wrote:
ddrescue if=/dev/sdX of=/dev/sdY /var/tmp/rescuelogfile
 
 Hmm.. The Wanderer suggest that *if= and of=* is the wrong syntax.

Argh!  I have made two typos in the space of the last two messages.  I
can't believe I made a mistake this silly.

 He says that this should be simple as this:
 
 ddrescue /dev/sdX /dev/sdY /path/to/logfile
 
 Who is right?

The Wanderer is correct.  The if= and of= is 'dd' syntax not
'ddrescue'.  I goofed up the suggestion.  With ddrescue there are
three arguments, infile, outfile and logfile.

When reading postings such as these one should always read the
official documentation.  That would clear all of it up
authoritatively.  For ddrescue here is the documentation online.

  http://www.gnu.org/software/ddrescue/manual/ddrescue_manual.html

Bob




Re: Default virtualhost on Debian Jessie with Apache 2.4.10

2015-05-08 Thread Bob Proulx
Marco Stoecker wrote:
 Bob Proulx wrote:
  I am unable to recreate your problem on wheezy.  I just now installed
 ...
  Go back and double check everything.
 
 But what happens to the mailman site, if I disable listening on port 80?
 Will the mailman site still be available?

(Me rattles my head and goes, What?)

Daniel Bareiro submitted the problem that he could not disable the
default web server on port 80.  That has been the topic of this
thread.  Later you posted that you had the same issue.

I see now that you did say you wanted to have the mailman site
enabled.  I didn't see that before since we were focused on Daniel's
problem of trying to disable the port 80 web site.

Obviously that is impossible.  One cannot both disable the port 80 web
site and keep it enabled for Mailman.  It is one or the other.  It
cannot be both.  And this is a completely different topic than the one
we were discussing.  That is why there was confusion.  In the future
if you want to avoid confusion about such things then start a new
discussion thread with a new topic.  Something like "How do I make
Mailman the default and only web site?" or some such.  That would get
much better results.

On the topic of making the Mailman web interface the default and/or
only web site seen:

If you want to keep the Mailman web site as the default then do not
disable port 80.  You will be needing it.  Instead make the Mailman
web site the default web site.  If you are using Apache there are two
main ways to do this.  One is to ensure that the default site is the
first or only VirtualHost listed.  Either way then it will be the
default site.  That is the rationale behind the 000-default naming in that
the zeros will cause that to be loaded first and therefore will be the
default by default.  My preference is to remove the 000-default link
and keep the original 000-default.conf and default-ssl.conf files
pristine.  (Previously those were default and default-ssl in
Wheezy 7.  They have been renamed in Jessie 8.)  By keeping those
files pristine they will not be prompted for merging upon upgrades.
This makes upgrades easier.  Then create a new site local file for the
local web site configurations.  Being a different file it will not be
in the package and will not need to be merged when applying security
upgrades.

The other way, besides the 000 naming that forces the first
configuration to be the default, is to use the _default_ tag on one
site to explicitly say which is the default.  Then the ordering does
not matter.  This will explicitly tell Apache that it should be the
default for all unspecified sites.  This is documented here:

  https://httpd.apache.org/docs/2.4/vhosts/examples.html#default
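
A minimal sketch of that form, with the DocumentRoot being an assumed
example:

  <VirtualHost _default_:80>
      DocumentRoot /var/www/html
  </VirtualHost>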

Bob




Re: no root file system after encryption

2015-05-08 Thread Bob Proulx
David Christensen wrote:
 Juha Heinanen wrote:
  On Partition settings screen, I choose Use as Ext2, Mount point /boot, and
  Bootable flag on.  Then I choose Done setting up the partition.
 
 Why ext2?  I use ext4.

I always use and recommend ext2 for /boot.  It avoids wasting space in
the ext3/ext4 journal that isn't needed on such a small and seldom
modified partition.  Also fsck speed for a small 512M ext2 is very
fast without needing a journal to speed it up.  For /boot ext2 is a
good choice.  It is fully supported everywhere for rescue boot media.

Of course now that we have large disks and can easily allocate 512M to
/boot without significantly reducing the remainder of the disk this
hardly matters.  Basically we are overflowing with disk space and
therefore need not be as efficient with it these days.  Wasting some
space with ext3/ext4 has become a very small matter.

Bob




Re: no root file system after encryption

2015-05-08 Thread Bob Proulx
Juha Heinanen wrote:
 Bob Proulx writes:
  Why no LVM?  Using LVM is the way I always do it because that allows
 
 I didn't have any particular reason to avoid LVM. I just tried whether
 an encrypted installation would succeed without it.  Now that I have
 tried with LVM, installation was simple and worked without issues.

Yes.  I was thinking that might improve the experience.  :-)  Good.

 One thing that I noticed was that LVM option uses by default ext2 for
 /boot, whereas some people on the list suggested ext4.  So I manually
 changed that.

Oh well.  As you saw I had the opposite recommendation.  I always use
and recommend a ext2 /boot.  However also note that I wrote that these
days with a moderately sized 512M /boot that the difference is hardly
worth discussing.  I wouldn't change it after having done it.
(Although since it is a separate partition it is trivial to do so.)

Glad to hear you have things going for you.

Bob




Re: bad ABI version?

2015-05-08 Thread Bob Proulx
Sven Hartge wrote:
 If you know, how to pull yourself out of the mud, are able to file bug
 reports, read bug reports und know how to use apt-listchanges and
 apt-listbugs, _then_ you can use Sid. (Been doing that for over 15 years
 myself.)

(Chuckle.)  I'm good to go.  Specifically the reason I am using Sid
Unstable is so that I get the early preview of what is happening in
Testing.  Then I can avoid those bugs in Testing and in Stable, and
can file bugs and work to get things fixed before they enter Stable.

Unfortunately one person's bug is another person's feature and
therefore we are not completely in agreement over what ends up in
Stable.  But participating in the process is much better than being
only a passenger on it.

 If one wants to use Sid, he/she should know about the following things:
 
 - how to boot your system from a rescue medium (and have one ready)
 - how to unpack a DEB without using the Debian tools and only with ar
 - how to identify a broken library and revert to a working one
 - be able to interpret linker error messages
 - know where to find the relevant system log files and how to read them
 - how to pin packages to a specific version
 - know about snapshot.debian.org
 - tracker.debian.org
 - https://release.debian.org/transitions/
 - how to rebuild a package using pbuilder/cowbuilder/sbuild
 - debug packages and basic GDB usage
 - reportbug
 - generally understand how a Linux boot works
 
 If one needs an always working system, she/he better stick with Stable
 or Testing (as soon as it has cooled down enough).

Agreed to most!  Generally in concept anyway.  Some of the details
vary.  :-)

I am not a fan of pinning.  Generally people try to use it to keep a
mixed system.  My opinion is that way leads to madness.  Building
packages does not *require* pbuilder/cowbuilder/sbuild but being able
to build your own packages by some method is a very useful and needed
skill.  I would probably add other skills to the list such as being
able to use either apt-get or aptitude (my preference is apt-get and
friends) at a deep level to upgrade and downgrade when the system
needs manual attention for it.  Other things too.
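
As an aside on the unpack a DEB with ar item in the list above, the
short version is this, with foo.deb hypothetical and noting that the
data member may be compressed with gzip or xz depending on the
package:

  ar x foo.deb
  tar xf data.tar.xz

The control.tar member holds the package metadata and maintainer
scripts and the data.tar member holds the actual files.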

Bob




Re: Broken package can't be configured, can't be uninstalled

2015-05-08 Thread Bob Proulx
Mark Allums wrote:
 I have some packages that did not install correctly.  One in particular is
 giving me fits.  It can't be upgraded.  It can't be removed.  It can't be
 reinstalled.  It can't be reconfigured.  The error message states that it is
 in an inconsistent state and needs to be reinstalled before it is removed.
 But Debian has removed that version from the repository and I can't find it
 to download, even for a manual install.  
 
 I need to either remove it by hand or install the new version (by hand if
 necessary), or something else, if the collective mind of the list knows a
 better solution.  How do I locate the correct version?  Until this is
 resolved, I can't update my system.  
 
 Help!

Help us to help you.  Please cut and paste verbatim the entire text of
the exchange.  That way we can see both the packages involved and the
error messages produced.

If you are using a graphical tool then please get the verbatim
messages from a text window.  We need the text and not a screenshot
image.  (Never send screenshot images.)  Open a text window, become
root, and run the command there.  One of these probably:

  # apt-get upgrade
  # apt-get dist-upgrade
  # apt-get remove FOO

Where FOO is the name of the trouble package.  It will probably spew
error messages.  Cut and paste the entire command plus the error
messages into the email message.

Bob




Re: Help with ddrescue

2015-05-08 Thread Bob Proulx
The Wanderer wrote:
 Gary Dale wrote:
  I think Wanderer may be overstating the problem a little. If the two
   drives are exactly the same size, you can use ddrescue to duplicate
  the failed drive onto the new drive (ddrescue if=/dev/sdb
  of=/dev/sdc). However this will limit you to recovering in place on
  new drive.

 In my experience, single-pass recovery like this does not work very
 reliably or very well; it also doesn't let you make the backup copy
 you originally suggested, which is a good idea if you have the space
 (though I never have had).

I have never had *enough* disk space.  Because I always need more!

There is an idea that I didn't see proposed as I read through this
thread.  If one had three same sized drives then there is another
possibility.  It isn't that unusual to have three drives of the same
size.  One could then make two copies of the data.  Copy the data from
the failing drive to a good same-sized drive.  It would then be an
identical copy.  Then take the failing drive and put it on the shelf
so as not to damage it further while trying to recover.  Then make
another identical copy onto the third drive.  At that point one of the
copies can be used for recovery experimentation and there would still
be a copy available for backup on the other drive.

Also when making the first backup copy using ddrescue use a third
location for the ddrescue log file.  I am assuming that a system has
been booted up for the rescue and has the failing disk and the target
disk attached.  That means the host system has its own space available
for the copy.  Use it for the log file.

  ddrescue /dev/sdX /dev/sdY /var/tmp/rescuelogfile

Where /dev/sdX is the failing source drive, /dev/sdY is the spare
target drive, and /var/tmp/rescuelogfile is on the system hosting the
recovery effort with its own drives.  If I had a spare drive of the
same size this would be what I would do to copy it.

Then one could do the same for the backup onto the working copy spare
for recovery.  Or just use a normal dd there since presumably both of
the spares are good and error free.
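
For the copy from the failing drive a two pass approach is often
recommended, using the same assumed device names as above:

  ddrescue -n /dev/sdX /dev/sdY /var/tmp/rescuelogfile
  ddrescue -r3 /dev/sdX /dev/sdY /var/tmp/rescuelogfile

The first pass with -n skips the hard areas and grabs the easy data
quickly.  The second pass with -r3 goes back and retries the bad areas
three times.  The log file lets the second run resume exactly where
the first one left off.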

 It's technically possible, yes, but I wouldn't want to trust or rely on
 it in any case where the source device is potentially prone to failure -
 and in any scenario where it isn't, you're unlikely to want to use one
 of the *rescue tools in the first place.

My experience has been that once a drive starts to produce hard
failures it tends to become worse rather quickly.  I would
perform the backup of it.  There is no way to avoid that.  But I would
try to avoid working the failing drive as much as possible until there
is as much recovered as possible.  After having recovered as much as
is possible only then would I try any other work on the failing drive.
Because Murphy's Law usually means that it becomes worse quickly.  But
after I had a good backup then trying other things such as other disk
recovery software and so forth is reasonable.  Why not if the drive
has already failed otherwise?  But again I would want a good backup
first.  And do any recovery work on a different working copy of it.

 (There's also the consideration of finding space for the ddrescue log
 file if you're restoring directly to the identical-size device; that
 file may not be as important in some scenarios, but I wouldn't want
 to try to do such a rescue without one.)

Agreed.  The ddrescue log is critical.  But a hosting system with both
drives mounted could use /var/tmp/rescuelog for space not on either of
the same-sized drives.

 I certainly wouldn't say there are never times when direct
 device-to-device recovery like that is appropriate, but I haven't
 encountered one and I would not recommend it as a base-practices
 procedure.

Storage recovery is definitely a problem that takes skill to produce
good results.  There is no easy solution because the
decisions we make along the way all depend upon the knowledge and the
information available at that moment.  It is really hard to document a
canonical procedure.  Sometimes it just helps to be lucky.

Bob



