Re: Network config

2017-08-03 Thread Joshua Schaeffer

On 08/02/2017 06:56 PM, Zenaan Harkness wrote:
>
> I've preferred a static networking config for years, and resolvconf
> works well in this situation - but once resolvconf is configured,
> I've always put the dns setting in /etc/network/interfaces

I agree with this as well. If you want to use static configuration and want 
resolvconf installed, you need to use the dns-* options in 
/etc/network/interfaces. resolvconf will then update your /etc/resolv.conf file 
with those options. If you change /etc/resolv.conf manually, resolvconf will 
override those settings periodically. The alternative is to simply uninstall 
resolvconf and set your DNS settings manually.
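For example, a minimal static stanza with the resolvconf dns-* hooks looks
like this (addresses and the search domain are placeholders, not from this
thread):

auto eth0
iface eth0 inet static
    address 192.0.2.10
    netmask 255.255.255.0
    gateway 192.0.2.1
    dns-nameservers 192.0.2.53 192.0.2.54
    dns-search example.net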

>  - the
> only time I put an entry in /etc/resolv.conf is when I'm testing
> stuff or doing a quick hack.
>
> Static network configs are quicker and give that sense of control -
> if the gui is down I can still fix things, and my knowledge applies
> in both gui and console scenarios.

Configuration in /etc/network/interfaces only works when NetworkManager isn't 
installed, which it typically is in GUI environments. If you don't have it 
installed in your GUI environment then yes, it works in both. If NetworkManager 
is installed then the nmcli command should be used and you shouldn't do any 
configuration in /etc/network/interfaces (although loopback is typically still 
controlled through this file).
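For example, a rough nmcli equivalent of a static setup (the connection name
and addresses are placeholders):

nmcli con mod "Wired connection 1" ipv4.method manual \
    ipv4.addresses 192.0.2.10/24 ipv4.gateway 192.0.2.1 \
    ipv4.dns "192.0.2.53 192.0.2.54"
nmcli con up "Wired connection 1"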

Thanks,
Joshua Schaeffer


Re: DHCP isn't updating DNS

2017-07-27 Thread Joshua Schaeffer

> You should consider moving towards "standard", but "interim"'s not a
> problem for now.
> https://deepthought.isc.org/article/AA-01091/0/ISC-DHCP-support-for-Standard-DDNS.html

I've actually made a few changes since posting this while trying to figure it 
out, and I did change to standard. It appears to have made no difference: DNS 
is still not getting updated, but I will definitely keep the setting at 
standard.
>
>>   allow client-updates;
>
> I would recommend denying client-updates. This tells clients that they
> can do the DNS update themselves. Given that you're trying TSIGs below,
> that would mean deploying keys to all the clients etc etc. Better to
> "deny client-updates" and centralise the work through the DHCP server.

This was also a change I made. I definitely do not want (and do not allow) 
clients to update DNS, so I changed this to deny.
>
>
> Some other options I have are "update-static-leases on" (Make sure DNS
> is updated even for hosts with a static address) "update-optimization
> on" (Actually, for debugging purposes, I had that off for a while. If
> it's off the DNS will be updated every time. If it's on, then the DNS
> won't be updated if the lease hasn't changed. If you're changing from
> 'interim' to 'standard' you definitely want this off to ensure the
> records get changed).
I saw these as well when I reread the dhcpd.conf man page, but haven't tried 
them yet. I'll give them a go.
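Something like this in the top scope, if I'm reading the man page right:

update-static-leases on;
# leave optimization off while switching from interim to standard,
# so existing records actually get rewritten
update-optimization off;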

>
> I'm assuming you've cut something out of your config here, but given the
> config above, there's nothing that applies the DDNS settings to hosts.
> The ddns-* settings should apply to everything in their current scope
> and below (so, if you've put them in your subnet6 block, for example,
> that should be fine).

Yes, I didn't include my entire conf file as it is a little long. Here is the 
subnet6 declaration that I've been focusing on:

subnet6 2620:5:e000:201e::/64 {
default-lease-time 2419200;
max-lease-time 2419200;

# LDAP Servers.
pool6 {
allow members of "ldap_servers";
range6 2620:5:e000:201e:0:1::/96;
}
# Kerberos Servers.
pool6 {
allow members of "krb5_servers";
range6 2620:5:e000:201e:0:2::/96;
}
# DHCP Servers.
pool6 {
allow members of "dhcp_servers";
range6 2620:5:e000:201e:0:3::/96;
}
# Puppet Servers.
pool6 {
allow members of "puppet_servers";
range6 2620:5:e000:201e:0:4::/96;
}
# DNS Servers.
pool6 {
allow members of "dns_servers";
range6 2620:5:e000:201e:0:5::/96;
}
# Catch-all DHCP group.
pool6 {
range6 2620:5:e000:201e:0:d::/96;
}
}

In particular I've been testing with a client that gets added to the 
"dhcp_servers" class. I know the classification works, as the client actually 
gets an IP address in the range specified; I just can't get DHCP to update the 
DNS servers with the AAAA and PTR records. Since all my subnets use the same 
ddns-* settings I don't specify them at the subnet or pool level, I just leave 
them in the top scope.

Thanks for your response,
Joshua Schaeffer


DHCP isn't updating DNS

2017-07-25 Thread Joshua Schaeffer
I'm having trouble getting my DHCPv6 server to update DNS and I'm not sure what 
I'm missing. From what I can tell I have everything set up, and I have tried 
numerous changes to the config file without success. Here is my 
named.conf.local file. I've tried allowing updates with both the update-policy 
and allow-update statements, as well as through a key and just by IP address, 
but as far as I can tell the DHCP server isn't even attempting to communicate 
with the DNS server:

root@blldns01:~# cat /etc/bind/named.conf.local
//
// Do any local configuration here
//

// Consider adding the 1918 zones here, if they are not used in your
// organization
include "/etc/bind/zones.rfc1918";
include "/etc/bind/Kddns--rrs.+157+1.private";
include "/etc/bind/Kddns-ptr-rrs.+157+1.private";

key DHCP_UPDATER {
algorithm HMAC-MD5.SIG-ALG.REG.INT;
secret "==";
};

zone "appendata.net" in {
type master;
notify yes;
file "/var/lib/bind/db.appendata.net";
allow-update { 2620:5:e000:201e::4:1; };
#allow-update { key DHCP_UPDATER; };
#update-policy {
#grant "ddns--rrs" self *  TXT DHCID;
#};
};

zone "0.0.0.e.5.0.0.0.0.2.6.2.IP6.ARPA" in {
type master;
notify yes;
file "/var/lib/bind/db.2620.5.e000";
allow-update { 2620:5:e000:201e::4:1; };
#allow-update { key DHCP_UPDATER; };
#update-policy {
#grant "ddns-ptr-rrs" self * PTR TXT DHCID;
#};
};

In my dhcpd6.conf file I have my zones specified and have tried including the 
key file, declaring the key directly in the file, and simply not using the keys 
and just using IP based authentication. None of it has worked so far. I've also 
tried using primary and primary6 with the actual IP address in my zone 
declarations, but this hasn't made any difference:

#
# DDNS SETTINGS #
#
# The ddns-updates-style parameter controls whether or not the server will
# attempt to do a DNS update when a lease is confirmed. We default to the
# behavior of the version 2 packages ('none', since DHCP v2 didn't
# have support for DDNS.)
ddns-updates on;
ddns-update-style interim;
allow client-updates;
ddns-domainname "appendata.net.";
ddns-rev-domainname "ip6.arpa.";
do-forward-updates on;

# Include keys used to securely communicate with the DNS server.
include"/etc/keys/Kddns--rrs.+157+1.private";
include"/etc/keys/Kddns-ptr-rrs.+157+1.private";

key DHCP_UPDATER {
algorithm HMAC-MD5.SIG-ALG.REG.INT;
secret "XXX==";
};

# Configuring zones for ddns-updates.
zone appendata.net. {
primary ns1-int.appendata.net;
#primary6 2620:5:e000::a1;
#key DHCP_UPDATER; # AAAA DNS key for RRs.
}
zone 0.0.0.e.5.0.0.0.0.2.6.2.ip6.arpa. {
primary ns1-int.appendata.net;
#primary6 2620:5:e000::a1;
#key DHCP_UPDATER; # PTR DNS key for RRs.
}

I've tried putting various options and declarations in different scopes, but 
none of it has worked. The DHCP server gives out an IP address just fine, but 
it doesn't look like it is even trying to update the AAAA and PTR records.

Jul 25 10:22:56 blldhcp01 dhcpd[1489]: Solicit message from 
fe80::216:3eff:fe32:2d49 port 546, transaction ID 0x9D08B00
Jul 25 10:22:56 blldhcp01 dhcpd[1489]: Picking pool address 
2620:5:e000:201e:0:1:b41e:f2fe
Jul 25 10:22:56 blldhcp01 dhcpd[1489]: Advertise NA: address 
2620:5:e000:201e:0:1:b41e:f2fe to client with duid 
00:01:00:01:21:0a:2b:43:00:16:3e:32:2d:49 iaid = 1043475785 valid for 2419200 
seconds
Jul 25 10:22:56 blldhcp01 dhcpd[1489]: Sending Advertise to 
fe80::216:3eff:fe32:2d49 port 546
Jul 25 10:22:57 blldhcp01 dhcpd[1489]: Request message from 
fe80::216:3eff:fe32:2d49 port 546, transaction ID 0x6C757900
Jul 25 10:22:57 blldhcp01 dhcpd[1489]: Reply NA: address 
2620:5:e000:201e:0:1:b41e:f2fe to client with duid 
00:01:00:01:21:0a:2b:43:00:16:3e:32:2d:49 iaid = 1043475785 valid for 2419200 
seconds
Jul 25 10:22:57 blldhcp01 dhcpd[1489]: Sending Reply to 
fe80::216:3eff:fe32:2d49 port 546

And there is nothing in the DNS server's logs, even when set to DEBUG. Can 
anybody see what I'm missing? If I sniff the wire I can see that there isn't 
any communication between my DHCP and DNS servers, so I don't think it's a 
firewall setting, as it's not even getting that far.

Thanks,
Joshua Schaeffer


dhclient doesn't send DHCP options

2017-07-22 Thread Joshua Schaeffer
Has anybody seen this issue where dhclient does not send the user-class option 
in its Solicit message? I have a minimal dhclient.conf file, and when I start 
dhclient I can see that option 15 isn't sent:

root@blldhcptest01:~# cat /etc/dhcp/dhclient.conf
senduser-class"dhcp server";
requesttime-offset, host-name, dhcp6.name-servers, 
dhcp6.domain-search, dhcp6.fqdn;
do-forward-updatesoff;

root@blldhcptest01:~# dhclient -6 -pf /run/dhclient6.eth0.pid -I eth0 -d -v 
-cf /etc/dhcp/dhclient.conf
Internet Systems Consortium DHCP Client 4.3.3
Copyright 2004-2015 Internet Systems Consortium.
All rights reserved.
For info, please visit https://www.isc.org/software/dhcp/

Listening on Socket/eth0
Sending on   Socket/eth0
PRC: Soliciting for leases (INIT).
XMT: Forming Solicit, 0 ms elapsed.
XMT:  X-- IA_NA 3e:1e:cd:ff
XMT:  | X-- Request renew in  +3600
XMT:  | X-- Request rebind in +5400
XMT: Solicit on eth0, interval 1030ms.
RCV: Advertise message on eth0 from fe80::216:3eff:fe11:10ab.
RCV:  X-- IA_NA 3e:1e:cd:ff
RCV:  | X-- starts 1500753449
RCV:  | X-- t1 - renew  +0
RCV:  | X-- t2 - rebind +0
RCV:  | X-- [Options]
RCV:  X-- Server ID: 00:01:00:01:21:04:51:d0:00:16:3e:11:10:ab
IA_NA status code NoAddrsAvail: "No addresses available for this interface."

And I don't see the option in the packet:

No.  Time      Source                    Destination  Protocol  Length  Info
2    1.640863  fe80::216:3eff:fe1e:cdff  ff02::1:2    DHCPv6    116     Solicit XID: 0x7ea155 CID: 00010001210664ed00163e1ecdff

Frame 2: 116 bytes on wire (928 bits), 116 bytes captured (928 bits)
Ethernet II, Src: Xensourc_1e:cd:ff (00:16:3e:1e:cd:ff), Dst: 
IPv6mcast_01:00:02 (33:33:00:01:00:02)
Internet Protocol Version 6, Src: fe80::216:3eff:fe1e:cdff 
(fe80::216:3eff:fe1e:cdff), Dst: ff02::1:2 (ff02::1:2)
User Datagram Protocol, Src Port: 546 (546), Dst Port: 547 (547)
DHCPv6
Message type: Solicit (1)
Transaction ID: 0x7ea155
Client Identifier
Option: Client Identifier (1)
Length: 14
Value: 00010001210664ed00163e1ecdff
DUID: 00010001210664ed00163e1ecdff
DUID Type: link-layer address plus time (1)
Hardware type: Ethernet (1)
DUID Time: Jul 22, 2017 13:33:01.0 MDT
Link-layer address: 00:16:3e:1e:cd:ff
Option Request
Option: Option Request (6)
Length: 6
Value: 001700180027
Requested Option code: DNS recursive name server (23)
Requested Option code: Domain Search List (24)
Requested Option code: Fully Qualified Domain Name (39)
Elapsed time
Option: Elapsed time (8)
Length: 2
Value: 
Elapsed time: 0 ms
Identity Association for Non-temporary Address
Option: Identity Association for Non-temporary Address (3)
Length: 12
Value: 3e1ecdff0e101518
IAID: 3e1ecdff
T1: 3600
T2: 5400

I have classes defined on my DHCP server that require the user-class option. 
Because it isn't being sent, the server can't match the client to any class 
and returns a NoAddrsAvail status code. Has anybody encountered this before 
and know how to fix it?
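For reference, the server-side class looks roughly like this (a sketch: 
DHCPv6 user-class data carries a two-byte length before each class string, 
hence the substring; "dhcp server" is 11 bytes):

class "dhcp_servers" {
    # skip the 2-byte length prefix and compare the class string itself
    match if substring(option dhcp6.user-class, 2, 11) = "dhcp server";
}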

Thanks,
Joshua Schaeffer



Re: NTP.conf pool vs server

2017-06-07 Thread Joshua Schaeffer
On Wed, Jun 7, 2017 at 9:06 AM, Greg Wooledge <wool...@eeg.ccf.org> wrote:

> I largely agree with Gene.  The man pages are incredibly silly.  They
> don't tell you how to do the Most Basic Common Thing.  Instead they
> talk about "type s and r addresses" and "a preemptable association
> is mobilized" and "mobilizes a persistent  symmetric-active  mode
> association" and "type b and m addresses" and other such gibberish.
>

That is one of the ideas behind the info pages. Man pages have always been
technically oriented and are generally very focused; they don't really
offer context. Now, I'm not saying that info pages accomplish this (some
do, some don't), but one of the original ideas behind info pages was to be
more real-world and comprehensive. There are trade-offs to both approaches.

You typically get a dichotomy of opinions about man pages and documentation
in general. Some people prefer the more technical nature of the man pages,
while others find it frustrating. This can be further exacerbated by people
telling other people to RTFM, when even reading a man page top to bottom
doesn't help when it actually comes to setting up a piece of software (as
you probably experienced yourself).

In general man pages are more helpful when you already understand the
software in question and are looking for specific information.

Thanks,
Joshua Schaeffer


Re: Debian hardware compatibility

2017-04-19 Thread Joshua Schaeffer
I don't think you'll have any issue with that set of hardware. If you've
never done water cooling before then you'll be very happy with your H80i
purchase. I have a Corsair Hydro H100i in both my Linux and Windows boxes
and absolutely love them. I'll never do anything but CPU water cooling
anymore. Also, I would recommend the 850 Evo over the 850 Pro. The Pros are
typically more expensive and only offer slightly better write performance
[1]. I've bought over ten 850 Evos over the last few years and have never
been dissatisfied with any of them. Just my personal opinion, but I thought
I would mention it.

Thanks,
Joshua Schaeffer

[1]
http://ssd.userbenchmark.com/Compare/Samsung-850-Pro-256GB-vs-Samsung-850-Evo-250GB/2385vs2977

On Wed, Apr 19, 2017 at 3:57 AM, Dan Purgert <d...@djph.net> wrote:

> John Elliot V wrote:
> > This is a multi-part message in MIME format.
> > --50ECA958FA859516FB3191E9
> > Content-Type: text/plain; charset=utf-8
> > Content-Transfer-Encoding: 7bit
> >
> > I'm getting a new workstation. Proposed specs are here:
> >
> >  https://au.pcpartpicker.com/list/7MCfjc
> >
> > I tried to find out if my new hardware would run Debian stable, but
> > couldn't confirm.
> >
> > CPU is Intel Core i7-7700K on an Asus STRIX Z270F mobo. I'm planning to
> > drive two monitors from the onboard graphics controller, one via DVI the
> > other via HDMI.
> >
> > Will Debian (w/ KDE Plasma) run on my new kit, does anybody know?
> >
>
> I don't see anything that screams "linux incompatibility" on that list.
> Might have some "fun" with the graphics, but then I've not kept up on
> Intel's offerings in that regard.
>
>
> --
> |_|O|_| Registered Linux user #585947
> |_|_|O| Github: https://github.com/dpurgert
> |O|O|O| PGP: 05CA 9A50 3F2E 1335 4DC5  4AEE 8E11 DDF3 1279 A281
>
>


Re: Encrypted RAID1 for storage with Debian Jessie

2017-04-19 Thread Joshua Schaeffer
As already stated, LUKS and mdadm are a good combination. I too use these on
all my recent systems. I create RAID volumes, then LVM, then cryptsetup:

=============
    mdadm
      |
     LVM
      |
     LUKS
      |
     ext4
=============

I can't speak to your system being on USB, but in general you can just do
something like the following:

$ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
$ mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc /dev/sdd
$ mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde /dev/sdf

If you want to use LVM then you create the PV, VG, and LV:

$ pvcreate /dev/md0
$ pvcreate /dev/md1
$ pvcreate /dev/md2
$ vgcreate vg_data1 /dev/md0
$ vgcreate vg_data2 /dev/md1
$ vgcreate vg_data3 /dev/md2
$ lvcreate vg_data1 -n lv_data1 -L 
$ lvcreate vg_data2 -n lv_data2 -L 
$ lvcreate vg_data3 -n lv_data3 -L 

Then create and open your LUKS containers:

$ cryptsetup -v --verify-passphrase luksFormat /dev/vg_data1/lv_data1
$ cryptsetup luksOpen /dev/vg_data1/lv_data1 vg_data1-lv_data1_crypt
$ cryptsetup -v --verify-passphrase luksFormat /dev/vg_data2/lv_data2
$ cryptsetup luksOpen /dev/vg_data2/lv_data2 vg_data2-lv_data2_crypt
$ cryptsetup -v --verify-passphrase luksFormat /dev/vg_data3/lv_data3
$ cryptsetup luksOpen /dev/vg_data3/lv_data3 vg_data3-lv_data3_crypt

Then create your filesystem and mount them:

$ mkfs.ext4 /dev/mapper/vg_data1-lv_data1_crypt
$ mkfs.ext4 /dev/mapper/vg_data2-lv_data2_crypt
$ mkfs.ext4 /dev/mapper/vg_data3-lv_data3_crypt

$ mount -t ext4 /dev/mapper/vg_data1-lv_data1_crypt /mnt/data1
$ mount -t ext4 /dev/mapper/vg_data2-lv_data2_crypt /mnt/data2
$ mount -t ext4 /dev/mapper/vg_data3-lv_data3_crypt /mnt/data3

One of my systems looks like this. On this particular system I only encrypt
home and swap:

jschaeffer@zipmaster07 ~ $ lsblk
NAME                                  MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINT
sda                                     8:0    0 111.8G  0 disk
\-sda1                                  8:1    0   100M  0 part  /boot/efi
\-sda2                                  8:2    0   250M  0 part
  \-md0                                 9:0    0   250M  0 raid1 /boot
\-sda3                                  8:3    0 111.5G  0 part
  \-md1                                 9:1    0 111.4G  0 raid1
    \-vg_sys1-lv_var1 (dm-1)          252:1    0    55G  0 lvm   /var
    \-vg_sys1-lv_tmp1 (dm-2)          252:2    0     4G  0 lvm   /tmp
    \-vg_sys1-lv_swap1 (dm-3)         252:3    0     6G  0 lvm
      \-vg_sys1-lv_swap1_crypt (dm-5) 252:5    0     6G  0 crypt [SWAP]
    \-vg_sys1-lv_root1 (dm-4)         252:4    0  46.4G  0 lvm   /
sdb                                     8:16   0 111.8G  0 disk
\-sdb1                                  8:17   0   100M  0 part
\-sdb2                                  8:18   0   250M  0 part
  \-md0                                 9:0    0   250M  0 raid1 /boot
\-sdb3                                  8:19   0 111.5G  0 part
  \-md1                                 9:1    0 111.4G  0 raid1
    \-vg_sys1-lv_var1 (dm-1)          252:1    0    55G  0 lvm   /var
    \-vg_sys1-lv_tmp1 (dm-2)          252:2    0     4G  0 lvm   /tmp
    \-vg_sys1-lv_swap1 (dm-3)         252:3    0     6G  0 lvm
      \-vg_sys1-lv_swap1_crypt (dm-5) 252:5    0     6G  0 crypt [SWAP]
    \-vg_sys1-lv_root1 (dm-4)         252:4    0  46.4G  0 lvm   /
sdc                                     8:32   0 931.5G  0 disk
\-sdc1                                  8:33   0   100M  0 part
\-sdc2                                  8:34   0 931.4G  0 part
  \-vg_home1-lv_home1 (dm-0)          252:0    0   850G  0 lvm
    \-vg_home1-lv_home1_crypt (dm-6)  252:6    0   850G  0 crypt /home
sr0                                    11:0    1   3.8G  0 rom

Thanks,
Joshua Schaeffer


On Wed, Apr 19, 2017 at 3:11 AM, tv.deb...@googlemail.com <
tv.deb...@googlemail.com> wrote:

> On 19/04/2017 05:06, commentsab...@riseup.net wrote:
>
>> Hello,
>>
>> Is there an easy way to attach several pair of RAID1 disks (with full
>> disk encryption) to a Debian Jessie system?
>>
>> Here is a picture of what I'm trying to achieve: http://imgur.com/vF7IqX2
>>
>> I am building a home backup system, I have different type of data to
>> backup (work, family, random stuff - hence the three pairs in the
>> picture). The system (Debian Jessie) will be on a USB key.
>>
>> It's a backup system on a budget that I'd like to have up and running
>> within a couple of weeks, I know that ZFS (with FreeNAS for instance)
>> can achieve similar goals but it's out of budget ; I also know that work
>> is being done on BTRFS about encryption but it's not ready for prime
>> time yet.
>>
>> Always state the obvious so :
>>
>> - the idea behind having the SYSTEM on a independent USB drive is to
>> ha

Other's opinions about best/safest method to migrate LUN's while online

2017-04-14 Thread Joshua Schaeffer
Howdy all,

We are planning on migrating several LUNs we have on an Oracle box to a
new NetApp all-flash storage backend. We've gone through a few tests
ourselves to ensure that we don't cause impact to the box, and everything
has been successful so far. The server will remain up during the migration
and we are not planning on bringing down any services. I just wanted to see
if others had any similar experience and wouldn't mind sharing. In
particular, does anyone see any steps that might cause impact, halt the box,
or cause paths to fail such that the storage itself becomes unavailable?
These are our current high-level steps:


   1. Zone the host to include the new HA NetApp pair *[no impact, no
   server changes, only SAN fabric additions (very safe)]*
   2. Create a volume on the destination HA NetApp pair *[no impact, no
   server changes (very safe)]*
   3. Validate the portset on NetApp to include the destination HA pair *[no
   impact, no changes, verification only (very safe)]*
   4. Add reporting nodes to LUN *[no impact, no server changes, NetApp
   additions only (very safe)]*
   5. LUN scan each HBA individually ($echo "- - -" >
   /sys/class/scsi_host/host3/scan && sleep 5 && echo "- - -" >
   /sys/class/scsi_host/host4/scan) *[Should not cause impact (generally
   safe)]*
   6. Validate that 8 new non-optimized paths now appear on the server
   ($multipath -ll) *[no impact, command does not make changes (very safe)]*
   7. Validate the new paths are secondary ($sanlun lun show -p -v) *[no
   impact, command does not make changes (very safe)]*
   8. Perform the NetApp LUN move *[no impact, no server changes (very
   safe)]*
   9. Remove the reporting nodes from LUN *[no impact, no server changes,
   NetApp deletion only (generally safe)]*
   10. Validate the 8 original paths are now failed ($multipath -ll) *[no
   impact, command does not make changes (very safe)]*
   11. Validate that Linux automatically sees 4 optimized paths among the 8
   new paths ($sanlun lun show -p -v) *[no impact, command does not make
   changes (very safe)]*
   12. Delete the failed paths (echo 1 > /sys/block/sdX/device/delete) *[Should
   not cause impact (generally safe)]*

My only concern is related to part of some Red Hat documentation I came
across [1] that states the following:

"interconnect scanning is not recommended when the system is under
memory pressure. To determine the level of memory pressure, run the
command vmstat
1 100; interconnect scanning is not recommended if free memory is less than
5% of the total memory in more than 10 samples per 100. It is also not
recommended if swapping is active (non-zero si and so columns in the
vmstat output).
The command free can also display the total memory."

These Oracle boxes typically have all their memory in use, though 39G of it
is just cache:

[root@oraspace01 ~]# free -g
             total       used       free     shared    buffers     cached
Mem:           188        187          1          0          0         39
-/+ buffers/cache:                    147         41
Swap:           79          0         79

I'm not an Oracle DBA so I don't know a lot of specifics about their inner
workings, but from what I understand some Oracle systems/processes can use
all the memory a machine has, no matter how much you give it. I've seen ZFS
and VMware do this as well: they claim a large amount of memory, but aren't
using it until they actually need it. It's more efficient and allows for
higher throughput and processing. So the fact that free thinks the machine
is low on memory isn't really an issue for me, I'm just concerned with the
documentation shown earlier.
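A rough way to script the Red Hat check before scanning (a sketch; in
procps vmstat the free column is $4 and si/so are $7 and $8):

TOTAL_KB=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
vmstat 1 100 | awk -v t="$TOTAL_KB" 'NR > 2 {
    if ($4 < t * 0.05) low++;        # free memory below 5% of total
    if ($7 != 0 || $8 != 0) swap++;  # active swapping (si/so)
} END { printf "low-mem samples: %d, swapping samples: %d\n", low, swap }'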

Does anyone know if running a scan on the SCSI bus while the system thinks
there isn't much available memory would cause issues? Has anyone done
similar types of migrations (it doesn't have to be with NetApp)? In essence
all we are doing is presenting additional paths temporarily, moving the
storage, then deleting the old paths. Is there a better way to delete
paths? A rescan of the SCSI bus only adds paths (at least from what I found
and read). Anybody have some nifty or clever step to add that makes things
easier/safer/better/faster/etc.?

Thanks,
Joshua Schaeffer

[1]
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/5/html/Online_Storage_Reconfiguration_Guide/scanning-storage-interconnects.html


Re: How to change where PXE booting looks for files with UEFI

2017-04-07 Thread Joshua Schaeffer
On Fri, Apr 7, 2017 at 12:31 PM, David Niklas  wrote:

> As someone who is curious about PXE, did you ever figure this out?
>
>
Unfortunately I was not able to spend too much time on this and have not
yet figured out how to set UEFI PXE booting options like where to look for
files. If I figure this out I will post a reply to the list.

Thanks,
Joshua


Re: GPU advice for Debian stretch?

2017-04-05 Thread Joshua Schaeffer
I've used the following cards over the years without any issues in my Linux
boxes. They are typically higher end (at least at the time the series was
relevant) because I buy them for my Windows gaming machine, then when I
upgrade the graphic card I put the now old card in my main Linux
workstation:

   - Geforce GTX 680
   - Radeon HD 6970
   - Radeon HD 4670

However, almost any discrete, modern, mainline card (GeForce or ATI) will
work. If you want to use proprietary drivers then just check the
manufacturer's website to see if that particular card has a Linux driver for
download. I haven't shopped for video cards in a bit, but last I checked
GeForce typically provides good drivers for their cards on Linux and
FreeBSD. I believe ATI typically works more closely with the open-source
community and sends their changes to the appropriate groups, but also has
drivers available for direct download.

There are also a lot of sites that offer compatibility databases. One of
these might be useful to you:

   - https://wiki.debian.org/Hardware
   - http://www.linux-drivers.org/

If you are looking for an exact answer of "Card *XYZ* works in Debian
Stretch with package *ABC*" then I don't have an answer for you. Could you
provide a little more detail about your requirements, like what you plan to
use the card for? Do you do any video editing, gaming, movie playback,
multi-monitor work, etc.? Also, what is your price range? Are you sold on
using AMD? Any physical size requirements for the card?

Personally, right now, I use GeForce, not because they are necessarily
better or worse than ATI, but mostly for their ShadowPlay technology, and
I feel their Linux support is better. There was a time when ATI was better
at multi-monitor support, but I think GeForce is pretty adequate at the
moment for up to 4 monitors.

Anyway, without more information I would recommend one of the following:

   - Radeon R7 250
   - Radeon R7 350 or 360
   - GTX 950 or 960

In my limited searching I also saw the Radeon 7750 come up but, disclaimer,
I know nothing about this card, so I can't really recommend it. It looked
decent though. Once you find a card, perhaps somebody can tell you if that
exact card has any issues with Debian Stretch.

Thanks,
Joshua Schaeffer

On Wed, Apr 5, 2017 at 6:51 AM, Martin Read <zen75...@zen.co.uk> wrote:

> I'm currently using an AMD "APU" system with integrated Radeon HD6530D
> graphics (yes, that's a component from 2011; the computer still works
> fine). This is kind of awkward when considering moving to stretch:
>
> * The DKMS-support package for the fglrx proprietary driver will not be
> available for stretch.
> * The radeon free driver *still* isn't on par with fglrx on my hardware.
> fglrx gave me OpenGL 4.4; radeon gives me 3.3 and visibly inferior
> performance.
> * The amdgpu free driver (which I'm guessing won't be in stretch except
> via backports anyway) doesn't support my hardware.
>
> I don't have the money to replace my computer right now, so I'm looking at
> the number-and-letter soup of PCI Express discrete graphics cards. Any
> recommendations? Not looking for some all-singing all-dancing thing with
> seventeen fans, just something that - with the drivers available for
> stretch - will get me at least as much performance and capability as my
> current chip achieves with its proprietary driver.
>
>


Re: Systemd services (was Re: If Linux Is About Choice, Why Then ...)

2017-04-03 Thread Joshua Schaeffer
On Mon, Apr 3, 2017 at 10:14 AM, Kevin O'Gorman  wrote:

>
>> To see a list of your available targets (assuming no major local changes),
>> use this command:
>>
>> $ find /lib/systemd/ -name '*.target'
>>
>>
> Are you sure?  On my system, this produces nothing at all.  But the
> directory
> exists and is populated.
>

What version of systemd do you have installed?

#systemd --version


Re: Wan/Lan problem

2017-03-29 Thread Joshua Schaeffer
I'm going to join the fray and take a crack at this. I'll try to help as
best I can to resolve the situation in your current setup, but I agree with
what others have posted: this is a little (but not too much) unorthodox.
Typically desktop, server, application, and personal machines are put
behind the router in the IPv4 paradigm, and NAT is used to allow multiple
machines onto the interwebs. I only say it is somewhat unorthodox because
dedicated network equipment is often given external IP addresses; it just
looks like you are using your Linux box both as network equipment and as a
personal machine.

I'm going to focus on questions concerning when both NIC's are up, seeing
as this is probably the desired end result.


On Wed, Mar 29, 2017 at 9:51 AM, Mike McClain <mike.junk...@copper.net>
wrote:

>
> > > When eth0 is up and eth1 up,
> > > the Linux box can not access the web.
> > > the Win2K box can access the web.
> > > the Linux box can not access the Win2K shares.
> > > 'ping ATTrouter' fails.
> > > 'ping -Ieth0 ATTrouter' works.
>
>
   1. What does a traceroute show on the Windows box?
   2. What does a tcpdump or wireshark output of a ping to 99.188.244.1
   show on both Linux and Windows? (A capture sketch follows this list.)
   3. What happens when you remove the 192.168.1/24 route? Does the Linux
   box then have internet access? Does the Windows box lose internet access?
   4. A quick way to start to narrow down where the problem or problems
   exist is to completely disable your firewall and perform your tests again.
   If any of them succeed then you know it has something to do with netfilter.
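For question 2, a capture sketch for the Linux side (interface and address
taken from this thread):

tcpdump -ni eth0 'icmp and host 99.188.244.1'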

I'll try to look at your config more and see if I can spot anything in
particular.

Thanks,
Joshua Schaeffer


Re: UID mismatch across systems

2017-03-26 Thread Joshua Schaeffer

On 03/26/2017 06:27 PM, Ross Boylan wrote:


The main practical problem is with backups, restores and copies that
use numeric ids instead of names.  This includes taking a disk that
was used on one system and using it in another.

Beyond that, I had a general sense that the mismatched ids could cause
trouble.  You're right, I'm not likely to be doing things like ssh'ing
after having assumed a system account id or sharing files owned by
system ids over NFS.  So it may be just the backup/copying that's an
issue.

Yes, I hadn't thought about backing up a system and then restoring on another 
system. Just a suggestion, but in the case of backups that are restored on 
another system you would probably *not* want to back up the numeric ids. Rsync 
actually behaves this way by default: it maps by name and only falls back to 
numeric ids when a name can't be matched. You have to specify the --numeric-ids 
flag in order for it to preserve all IDs. From the rsync(1) man page:

 --numeric-ids
      With this option rsync will transfer numeric group and user IDs rather 
      than using user and group names and mapping them at both ends.

      By default rsync will use the username and groupname to determine what 
      ownership to give files. The special uid 0 and the special group 0 are 
      never mapped via user/group names even if the --numeric-ids option is 
      not specified.

      If a user or group has no name on the source system or it has no match 
      on the destination system, then the numeric ID from the source system 
      is used instead.

Rsync also has the ability to map users and groups with the --usermap and 
--groupmap options; see the man page for details of these options as well. So 
by default rsync may be doing what you are looking for (or at least mapping the 
ids for you so that you don't have to go through the hassle of synchronizing 
all your systems) by backing up the system and attempting to restore to the 
correct user by name first, then by ID. Of course this assumes you are using 
rsync. Other backup programs (and there are plenty) may do the opposite by 
default.
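As a sketch of the three behaviors (paths, host, and ids are hypothetical):

# default: map ownership by user/group *name* at both ends
rsync -aH /backup/ host:/restore/
# preserve raw numeric ids instead
rsync -aH --numeric-ids /backup/ host:/restore/
# or remap explicitly, e.g. uid/gid 1001 on the source becomes 2001
rsync -aH --usermap=1001:2001 --groupmap=1001:2001 /backup/ host:/restore/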


I was planning  on using kerberos.  Partly that's because I thought
NFSv4 needed it anyway.

It is required if you use sec=krb5, krb5i, or krb5p; sec=sys doesn't do any 
integrity checking, authentication, or encryption, and Kerberos is not required 
for those exports. Right now I use LDAP and Kerberos for authentication and 
authorization (authn/authz). Kerberos handles the authn portion and actually 
uses LDAP as its backend; LDAP handles the authz portion. I'm also in the 
process of setting up my NFSv4 server with krb5p security. Let me know if you 
have any questions on that.


I mean ids < 1000.  1000+ seems to be for users on my systems.  On my
systems it's 100-124 that have the problems.

Sorry, typo, I did in fact mean ids less than 1000.

Are you saying there is a way to change the uid/gid of a process that
is already running from the outside?  Does usermod do that if you
change the uid?

My concern is that if I change file uids existing processes will gag
and, worst case, the system becomes  non-functional even on reboot.*
This seems particularly acute with systemd.  I know I can shutdown
most services, change ids, and restart them.  But I have the
impression that the ones associated with systemd, and maybe some
others like messagebus, are essential and have to be left running.
And I am accessing the systems via ssh, and so changing the ssh u/gid
seems especially dicey.

*It just occurred to me I could temporarily make permissions more
permissive, or add group permissions, to avoid getting locked out.

Ross

No, I was just saying to create a new account in LDAP (this would be the new 
synchronized account) and then change ownership of the files owned by the old 
account with chown. Your concerns are valid for accounts like systemd and 
daemon, and unfortunately I can't help there, as I don't synchronize these 
types of accounts.
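For the chown step, a sketch (the uid/gid and account names are hypothetical):

# hand everything owned by the old local id to the new LDAP account
find / -xdev -uid 1050 -exec chown newuser {} +
find / -xdev -gid 1050 -exec chgrp newgroup {} +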

Thanks,
Joshua Schaeffer


Re: Thin Client

2017-03-26 Thread Joshua Schaeffer

On 03/26/2017 12:29 PM, Brian wrote:


Off of Ebay. Doesn't everyone? :)

I think the statement has been changed, or is changing, to: "Off of Amazon. 
Doesn't everyone?" :> (no wars please, just a joke).


Re: SSH Access Issue

2017-03-26 Thread Joshua Schaeffer


On 03/26/2017 08:30 AM, Cindy-Sue Causey wrote:



In the case I'm thinking, it's about manually adding multiple lines to
a file that I'm not completely remembering just now. Gut is saying
it's /etc/network/interfaces. Mine's almost empty so I don't have an
example to confirm that.

Typically users put a second gateway option in the /etc/network/interfaces file 
(which you talk about in the next paragraph). This usually stems from not 
understanding what the gateway does.


What I encountered wasn't about declaring different values for
gateway, either. For whatever reasons due to innate [functionality],
it becomes a fail even if you declare the same gateway value for that
line within each new, separate block of declarations. Success is found
by declaring it once then omitting that line within any other new
blocks added over time.

While I've never put duplicate gateway information in /etc/network/interfaces, 
I had, at one point when learning about networking and setting it up in Debian, 
put a gateway for each subnet in the interfaces file (which is incorrect and 
resulted in an error). A gateway, often called a "gateway of last resort," 
tells the system how to reach subnets that it is not attached to. That is the 
point of the gateway; it is the one place the system can send packets to when 
it doesn't know where else to send them. If you could define two gateways 
(assuming it were allowed), you would be back to square one: the system 
wouldn't know which gateway to send the packet to. Defining two gateways would 
really be an incorrect way of saying you are defining two routes (most likely 
static routes).


Between my setup and cognition, I've never had anything stable enough
to test if it matters which block that gateway is declared. I've
wondered if it matters that it be in the first block, or if it just
needs to show up somewhere in that file. I was consciously putting it
in the first block because that seemed to be the *logical* thing to do
k/t having touched on programming 20 years ago at a local tech school.

I hadn't really thought about this myself. I've always defined the gateway 
under the interface that is attached to the subnet where the gateway resides. 
For example, if I have two networks:

auto eth0
iface eth0 inet static
  address 192.168.0.2/24

auto eth1
iface eth1 inet static
  address 10.1.10.2/24

If my gateway of last resort were on the 192.168.0 subnet, then I would define 
the gateway under that interface:

auto eth0
iface eth0 inet static
  address 192.168.0.2/24
  gateway 192.168.0.1

It never occurred to me to see if it could be put anywhere in the file. My 
hunch is it can and I guess I could take the 60 seconds to test it, but I'll 
leave that to more adventurous people.
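If anyone does test it, verification is quick (a sketch):

# after ifup, exactly one default route should exist
ip route show default
# expected: default via 192.168.0.1 dev eth0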

Thanks,
Joshua Schaeffer


Re: UID mismatch across systems

2017-03-26 Thread Joshua Schaeffer


On 03/25/2017 03:03 PM, Boylan, Ross wrote:

The problem is that I can't convert to using a shared directory when different 
systems assign different uids to the same named user.  In other words, to get 
to the shared accounts solution I must already have solved the problem of 
mismatching ids.

Not entirely true: NFSv4 has the ability to map uid/gid between systems with 
the rpc.idmapd program, which uses the idmapd.conf configuration file.
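A minimal idmapd.conf sketch (the domain is a placeholder; client and server 
must agree on it):

[General]
Domain = example.com

[Mapping]
Nobody-User = nobody
Nobody-Group = nogroup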

The problems are mostly with system users, and I've seen some advice indicating 
such users don't normally go in LDAP.  So excluding would reduce the problem, 
for LDAP, but also leave lots of unsynchronized ids.

What is the issue with unsynchronized system ids? Are you allowing login for 
these system ids? Are they also sharing a filesystem (NFS, CIFS, etc.) for 
these system ids? My assumption is that when you say shared directory you are 
talking about $HOME for a normal user. If that's not the case then it sounds 
like you are using an old technique where many systems mount shared filesystems 
on /usr, /usr/share, /opt, etc. I haven't seen that in years, as disks are 
easily large enough to handle the space needed for these directories.

I'm not sure how much of a problem the mismatches are for NFSv4; I believe it 
allows user/kerberos based authentication, but I'm not sure what that means for 
the uids of the files.

Mismatched ids in NFSv4 will result in a uid/gid of -1, which shows up as 
4294967295 (2^32 - 1, the unsigned representation of -1) when you run ls -l on 
the NFS mount point.

LDAP is my go-to solution for synchronized authorization and central account 
management (I do not use it for authentication, but that is my own personal 
preference). I advocate it, but I know some people prefer simpler solutions 
depending on the situation. A company of 10 systems can easily avoid all of the 
management, hardware, and upkeep of LDAP and use something like NIS, Puppet, 
etc., or no central management at all.

I guess I'm not understanding the core problem. I never put system ids 
(including root) into LDAP, only users' ids. Typing this out, it occurred to me 
that I am assuming you mean a system id is an id of >1000 (in Debian). If you 
are talking about some generic account that is not an actual system ID, but is 
not used by a specific user, then yes, you have to find a way to synchronize 
and/or transfer the account. I would simply create the account in LDAP and then 
transfer all ownership of processes and files to that new account (as you 
already stated).

Thanks,
Joshua Schaeffer


Re: Samba domain provisioning tools issues provisioning error on clean Jessie install

2017-03-25 Thread Joshua Schaeffer



The observation: It does seem like maybe that file shouldn't exist at
the beginning if it's causing that kind of thing where the immediate,
successful fix is to delete it. Like I said, though.. that's an
"uneducated" observation.

Perhaps there's a necessary evil of it pre-existing. Perhaps maybe
(maybe not) there's a conscious intention that it's easier to delete
that file per that error message *if* that error occurs versus the
headaches that might result if that file was not in place for most
other users universally. *?* :)

This is certainly reasonable. The file exists because installing the Samba 
package installs a default smb.conf file. My initial assumption is that it 
would indeed cause more issues for it not to be there than to be there and 
require deleting before provisioning a new domain.

Perhaps a more complex/time-consuming solution would be to enhance the guess 
function that is performed when a domain is provisioned. I've never been 
involved with Samba development, so I have no clue what effort that would take.

ERROR(): Provision failed - 
*ProvisioningError: guess_names*: 'server role=standalone server' in 
/etc/samba/smb.conf must match chosen server role 'active directory domain 
controller'!  Please remove the smb.conf file and let provision generate it

That is also an upstream issue versus a Debian issue.


The question: If I get a wild hair later and get a chance to attempt
this, do you mind if I "borrow" your domain name there? I don't have
anything to test with otherwise. I've attempted samba in the past, but
I don't think it's installed right now so this would be attempted from
a clean install.


Sure, feel free to use the domain for testing. I do own the rights to the name 
and the domain itself, so you wouldn't be able to do anything public with it :).


Samba domain provisioning tools issues provisioning error on clean Jessie install

2017-03-25 Thread Joshua Schaeffer

Ahoy all, I'm experiencing an issue when installing Samba on a fresh Debian 
Jessie install, and I'm looking to see if others have encountered it, whether 
there is an obvious fix or dependency I'm missing, or whether a bug should be 
reported.

*Problem description*
After a clean install of Debian Jessie and making sure all packages are 
updated, I installed samba and then ran the samba-tool to provision a new 
domain. I get the following error:

root@firebat-vm:~# samba-tool domain provision --use-rfc2307 --interactive
Realm [HARMONYWAVE.COM]: HARMONYWAVE.COM
 Domain [HARMONYWAVE]: harmonywave
 Server Role (dc, member, standalone) [dc]: dc
 DNS backend (SAMBA_INTERNAL, BIND9_FLATFILE, BIND9_DLZ, NONE) 
[SAMBA_INTERNAL]: SAMBA_INTERNAL
 DNS forwarder IP address (write 'none' to disable forwarding) [10.1.30.2]: 
10.1.30.2
Administrator password:
Retype password:
*ERROR(): Provision failed - 
ProvisioningError: guess_names: 'realm =' was not specified in supplied 
/etc/samba/smb.conf. Please remove the smb.conf file and let provision generate it*
  File "/usr/lib/python2.7/dist-packages/samba/netcmd/domain.py", line 434, in 
run
nosync=ldap_backend_nosync, ldap_dryrun_mode=ldap_dryrun_mode)
  File "/usr/lib/python2.7/dist-packages/samba/provision/__init__.py", line 
2022, in provision
sitename=sitename, rootdn=rootdn, domain_names_forced=(samdb_fill == 
FILL_DRS))
  File "/usr/lib/python2.7/dist-packages/samba/provision/__init__.py", line 
603, in guess_names
raise ProvisioningError("guess_names: 'realm =' was not specified in supplied 
%s.  Please remove the smb.conf file and let provision generate it" % lp.configfile)

I remove the /etc/samba/smb.conf file as the error suggests and then run the 
exact same command again, and it provisions successfully. I just think that a 
clean install of Jessie should not throw this error the first time the tool is 
run. I've Googled and found a few pages, and both Ubuntu and Debian bugs, 
related to this exact error message, but it seems that most of them are related 
to upgrading the Samba package or to reboots, not to running the samba-tool 
utility.

After some further testing I found that I can just remove the 
/etc/samba/smb.conf file immediately after installing Samba and before running 
the samba-tool utility. It appears that the mere fact that an smb.conf file 
exists is the issue. Obviously this issue isn't that critical, as the error 
tells you the problem and even how to fix it; it just seems odd that after a 
fresh install a provision does not work correctly, especially since it 
specifically asks for the realm.
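In other words, the workaround is simply:

root@firebat-vm:~# rm /etc/samba/smb.conf
root@firebat-vm:~# samba-tool domain provision --use-rfc2307 --interactive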

*Steps to reproduce*

1. Install a clean OS of Debian Jessie
2. Upgrade all packages (apt-get update && apt-get -y dist-upgrade)
3. Install samba (apt-get install samba)
4. Run the samba-tool utility (samba-tool domain provision --use-rfc2307 
--interactive) and answer the questions asked by the script.


*TL;DR*
I guess the main question is: "Is a user expected to remove the smb.conf file 
before running the samba-tool utility to provision a domain?" If so, I have not 
seen this in any documentation (Debian, Samba, Ubuntu, or otherwise). Should 
this be considered a bug? The domain provision process expects that if the 
smb.conf file exists, it is already set up for the domain being provisioned.

Thanks,
Joshua Schaeffer


How to change where PXE booting looks for files with UEFI

2017-03-15 Thread Joshua Schaeffer
Ahoy,

I've been learning how to setup PXE booting to install an OS image on a
100% UEFI system (CSM completely disabled). I use Debian 8 as my DHCP
server (ISC DHCP) and as my TFTP server (tftpd-hpa). I also use the Debian
netboot installer to provide the PXE environment including the
bootnetx64.efi file (I use it to serve pxelinux.0 as well).
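(For context, handing each firmware its own boot file is done on the DHCP 
side; a sketch of the usual ISC dhcpd approach, not necessarily my exact 
config:)

# RFC 4578 client system architecture; 7 and 9 are x64 UEFI, 0 is BIOS
option arch code 93 = unsigned integer 16;
if option arch = 00:07 or option arch = 00:09 {
    filename "pxeboot/bootnetx64.efi";
} else {
    filename "pxeboot/pxelinux.0";
}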

I'm able to get to menu screens with PXE on UEFI and BIOS, but I have to
have specific files in certain directories for UEFI and I don't know how to
change the defaults. With BIOS I'm able to control the directories and
files that are served by changing the pxelinux.cfg/default file. For BIOS I
put my debian-installer directory (which contains the boot screen, menus,
etc) under gtk. So I just have lines like this in my pxelinux.cfg/default
file:

path gtk/debian-installer/amd64/boot-screens/
default gtk/debian-installer/amd64/boot-screens/vesamenu.c32

However, I can't figure out how to tell UEFI to look in specific
directories. Instead I've been left with creating a symlink from the
default directory that bootnetx64.efi looks in to the actual directory
where my GRUB files exist.

How do you configure UEFI PXE booting to use different directories than the
defaults?

Let me try to explain what I'm seeing a little better:

This is my TFTP server. The base directory is /srv/tftp:

root@broodwar:/srv/tftp/pxeboot# ls -l
total 48208
-rw-r--r-- 1 tftp tftp   435712 Mar 15 10:38 bootnetx64.efi
drwxr-xr-x 1 tftp tftp4 Sep 28 11:10 centos
drwxr-xr-x 1 tftp tftp4 Sep 28 11:10 debian
drwxr-xr-x 1 tftp tftp8 Sep 28 11:16 fedora
drwxr-xr-x 1 tftp tftp  120 Sep 17 10:04 gtk
-rw-r--r-- 1 tftp tftp   116624 Sep 28 09:30 ldlinux.c32
-rw-r--r-- 1 tftp tftp25372 Sep 30 10:40 memdisk
-rw-r--r-- 1 tftp tftp 29360128 Sep 13  2016 mini.iso
-rw-r--r-- 1 tftp tftp 19370928 Sep 13  2016 netboot.tar.gz
drwxr-xr-x 1 tftp tftp   16 Sep 28 11:17 opensuse
-rw-r--r-- 1 tftp tftp42988 Sep 13  2016 pxelinux.0
drwxr-xr-x 1 tftp tftp  248 Sep 30 10:31 pxelinux.cfg
drwxr-xr-x 1 tftp tftp   20 Sep 28 11:17 ubuntu
drwxr-xr-x 1 tftp tftp   32 Sep 30 09:40 windows
drwxr-xr-x 1 tftp tftp   52 Sep 17 10:04 xen

You can see that under the base directory I have a pxeboot directory which
can serve both pxelinux.0 and bootnetx64.efi. I also have a gtk folder
which is where the debian-installer folder resides under (this has items
like my boot screens and menus as well as GRUB related files for UEFI). So,
when booting via BIOS I tell my pxelinux.cfg file to look under the gtk
folder as shown above. bootnetx64.efi however is looking for the following
files:


   - debian-installer/amd64/grub/x86_64-efi/command.lst
   - debian-installer/amd64/grub/x86_64-efi/fs.lst
   - debian-installer/amd64/grub/x86_64-efi/crypto.lst
   - debian-installer/amd64/grub/x86_64-efi/terminal.lst
   - debian-installer/amd64/grub/grub.cfg

I know this because this is what is reported in the tftp log files. I have
to have a symlink called debian-installer in my root directory
(/srv/tftp/debian-installer) pointing to my gtk folder
(/srv/tftp/pxeboot/gtk/debian-installer).

root@broodwar:/srv/tftp/pxeboot# ls -l ..
total 4
drwxr-xrwx 1 tftp tftp 128 Jan 23 15:24 cisco_config
lrwxrwxrwx 1 root root  29 Mar 15 12:34 debian-installer ->
pxeboot/gtk/debian-installer/
drwxr-xr-x 1 tftp tftp 242 Mar 15 12:33 pxeboot

How do I tell bootnetx64.efi to just look directly in that folder instead
of looking at the default location?

Thanks,
Joshua


Re: good LDAP resources

2017-02-25 Thread Joshua Schaeffer

LDAP can be very difficult to learn if you are just starting out with it, but 
it is also very powerful. There may be faster solutions than a manual setup, 
but I found that I learned the most by doing all of it manually. On Red Hat 
based systems, I believe their IPA solution is quite good. It uses LDAP and 
Kerberos and does most of the legwork for you. I have no idea if any of that is 
compatible with Debian based systems (I don't think it is).

Anyway here are a lot of the resources I used when learning, configuring, and 
setting up my authentication system:

 * http://debian-handbook.info/browse/wheezy/sect.ldap-directory.html
 * http://ubuntuforums.org/showthread.php?t=1421998
 * http://www.openldap.org/lists/openldap-technical/201401/msg00140.html
 * https://help.ubuntu.com/community/GnuTLS
 * https://bugs.launchpad.net/ubuntu/+source/sudo/+bug/115967
 * https://help.ubuntu.com/community/OpenLDAPServer
 * http://www.openldap.org/doc/admin24/guide.html
 * https://help.ubuntu.com/community/Kerberos
 * http://www.openldap.org/lists/openldap-technical/201201/msg00140.html
 * slapd-config(5)
 * http://www.zytrax.com/books/ldap/ (in particular 
   http://www.zytrax.com/books/ldap/ch6/#security)
 * http://www.zytrax.com/books/ldap/ch7/#overview
 * http://www.zytrax.com/books/ldap/ape/config.html#olcsyncprovconfig
 * http://www.cyberciti.biz/faq/how-do-i-rotate-log-files/
 * https://www.ietf.org/rfc/rfc2307.txt
 * https://tools.ietf.org/id/draft-howard-rfc2307bis-02.txt
There's plenty more out there as well. If you want, I can send you my own setup 
guide, which I built over the years from all these resources (and probably many 
more I never recorded); just keep in mind that doc is specific to myself and my 
business, and it involves setting up OpenLDAP not just for authentication but 
for almost anything. I also don't use OpenLDAP for authentication, only 
authorization. I use MIT Kerberos for auth (which uses OpenLDAP as its backend).

To be more specific to your question of "good resources," I would say that, as 
a subset of all the links above, these are the best ones to start with:

 * http://debian-handbook.info/browse/wheezy/sect.ldap-directory.html
 * https://help.ubuntu.com/community/OpenLDAPServer
 * http://www.zytrax.com/books/ldap/

As one last suggestion/comment/remark, I would suggest setting up OpenLDAP as 
your implementation of LDAP, using the PPolicy overlay and doing authn/authz 
over TLS. If you don't want to send passwords over the wire then use Kerberos 
for the authentication component.

Thanks,
Joshua Schaeffer

On 02/25/2017 03:16 PM, bri...@aracnet.com wrote:

I need to set-up some sort of password server for a small network so that i 
don't have to set-up accounts on every machine.

It looks like LDAP is the best way to do that.

Is it ?

I've been looking at the LDAP how-to's and even tried to turn things on using 
one of them, but I can't quite get things working.

Can someone point me to a good resource as to how to make it work ?

Thanks!





Re: hotpluggable member of a bridge

2017-01-05 Thread Joshua Schaeffer
Interesting, thanks for the explanation.

On Thu, Jan 5, 2017 at 9:32 AM, Reco <recovery...@gmail.com> wrote:

> Hi.
>
> On Thu, 5 Jan 2017 09:19:35 -0700
> Joshua Schaeffer <jschaeffer0...@gmail.com> wrote:
>
> > >
> > >
> > > A sample configuration would be:
> > >
> > > allow-ovs br0
> > > iface br0 inet4 static
> > > address …
> > > netmask …
> > > ovs_type OVSBridge
> > >
> > > allow-br0 eth0
> > > iface eth0 inet6 auto
> > > ovs_type OVSPort
> > > ovs_bridge br0
> > >
> > > allow-hotplug usb0
> > > iface usb0 inet6 auto
> > > ovs_type OVSPort
> > > ovs_bridge br0
> > >
> > > Reco
> > >
> > >
> > Pardon my ignorance, can you explain why you set an IPv4 address on your
> > bridge and an IPv6 address on your bridge interfaces? I've never seen
> this
> > before and would like to know what this accomplishes. Perhaps its a typo
> as
> > I thought IPv4 was just set with "inet" and IPv6 was set with "inet6".
>
> It's simple, although isn't obvious from this abridged example.
> I need a single IPv4 for both interfaces, so I set it on a bridge.
> I don't need distinct IPv4 on ports, so I don't set it there.
>
> Bridged interfaces retain their MACs, so they would get different IPv6
> ULAs, which are provided by radvd from the different host.
> And I don't need these IPv6 either.
>
> So, I can do:
>
> allow-br0 eth0
> iface eth0 inet manual
>  ovs_type OVSPort
>  ovs_bridge br0
>
> And get myself all kinds of unneeded trouble, or I can do:
>
> allow-br0 eth0
> iface eth0 inet6 auto
> autoconf 0
> accept_ra 0
> ovs_type OVSPort
> ovs_bridge br0
>
> Barring this IPv4/IPv6 difference, there should be no noticeable
> outcome between 'inet manual' and 'inet6 auto'.
>
> Reco
>
>


Re: hotpluggable member of a bridge

2017-01-05 Thread Joshua Schaeffer
>
>
> A sample configuration would be:
>
> allow-ovs br0
> iface br0 inet4 static
> address …
> netmask …
> ovs_type OVSBridge
>
> allow-br0 eth0
> iface eth0 inet6 auto
> ovs_type OVSPort
> ovs_bridge br0
>
> allow-hotplug usb0
> iface usb0 inet6 auto
> ovs_type OVSPort
> ovs_bridge br0
>
> Reco
>
>
Pardon my ignorance, but can you explain why you set an IPv4 address on your
bridge and an IPv6 address on your bridge interfaces? I've never seen this
before and would like to know what it accomplishes. Perhaps it's a typo, as
I thought IPv4 was just set with "inet" and IPv6 with "inet6".

Thanks,
Joshua Schaeffer


Re: Rebuilding Debian package from source

2016-09-19 Thread Joshua Schaeffer
>
>
> I doubt it's possible to make build paths containing spaces work correctly
> in the general case, even if the original poster does manage to hack
> it up this one time.  My advice would be to stop attempting to use a
> directory with a space in its name as part of a build setup.  It's doomed.
>
>
Good to know. I will not use spaces in the future. I was able to get the
package rebuilt by following this guide:
https://wiki.debian.org/BuildingAPackage

Which, incidentally, does not use spaces in its base directory.
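For anyone following along, the space-free sequence that worked (commands
from this thread):

mkdir -p ~/build && cd ~/build
apt-get source nginx-extras
apt-get build-dep nginx-extras
cd nginx-1.6.2-5+deb8u2
debuild -us -uc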

Thanks,
Joshua Schaeffer


Re: Rebuilding Debian package from source

2016-09-19 Thread Joshua Schaeffer
Test...

"Build Test" is the directory I downloaded everything into (a.k.a. it was
the directory I was in when I ran apt-get source nginx-extras)

jschaeffer@mutalisk:~/Build Test$ pwd
/home/jschaeffer/Build Test

jschaeffer@mutalisk:~/Build Test$ ls -l
total 2004
drwxr-xr-x 1 jschaeffer jschaeffer154 Sep 18 19:02 nginx-1.6.2-5+deb8u2
-rw-r--r-- 1 jschaeffer jschaeffer 609344 Jun  1 12:36
nginx_1.6.2-5+deb8u2.debian.tar.xz
-rw-r--r-- 1 jschaeffer jschaeffer   2873 Jun  1 12:36
nginx_1.6.2-5+deb8u2.dsc
-rw-r--r-- 1 jschaeffer jschaeffer   3692 Sep 18 19:14
nginx_1.6.2-5+deb8u2-harm_amd64.build
-rw-r--r-- 1 jschaeffer jschaeffer 621532 Sep 18 19:14
nginx_1.6.2-5+deb8u2-harm.debian.tar.xz
-rw-r--r-- 1 jschaeffer jschaeffer   2051 Sep 18 19:14
nginx_1.6.2-5+deb8u2-harm.dsc
-rw-r--r-- 1 jschaeffer jschaeffer 804164 Sep 17  2014
nginx_1.6.2-5+deb8u2.orig.tar.gz

jschaeffer@mutalisk:~/Build Test$ ls -l nginx-1.6.2-5+deb8u2/
total 596
drwxr-xr-x 1 jschaeffer jschaeffer246 Sep 18 19:11 auto
-rw-r--r-- 1 jschaeffer jschaeffer 236013 Sep 16  2014 CHANGES
-rw-r--r-- 1 jschaeffer jschaeffer 359556 Sep 16  2014 CHANGES.ru
drwxr-xr-x 1 jschaeffer jschaeffer180 Sep 18 18:50 conf
-rwxr-xr-x 1 jschaeffer jschaeffer   2369 Sep 16  2014 configure
drwxr-xr-x 1 jschaeffer jschaeffer 68 Sep 18 18:50 contrib
drwxr-xr-x 1 jschaeffer jschaeffer   1476 Sep 18 18:59 debian
drwxr-xr-x 1 jschaeffer jschaeffer 36 Sep 18 18:50 html
-rw-r--r-- 1 jschaeffer jschaeffer   1397 Sep 16  2014 LICENSE
drwxr-xr-x 1 jschaeffer jschaeffer 14 Sep 18 18:50 man
-rw-r--r-- 1 jschaeffer jschaeffer 49 Sep 16  2014 README
drwxr-xr-x 1 jschaeffer jschaeffer 46 Sep 18 18:50 src
drwxr-xr-x 1 jschaeffer jschaeffer 40 Sep 18 19:02 Test

jschaeffer@mutalisk:~/Build Test$ ls -l nginx-1.6.2-5+deb8u2/Test/
total 0
drwxr-xr-x 1 jschaeffer jschaeffer 12 Sep 18 19:13 nginx-1.6.2-5+deb8u2

jschaeffer@mutalisk:~/Build Test$ ls -l
nginx-1.6.2-5+deb8u2/Test/nginx-1.6.2-5+deb8u2/
total 0
drwxr-xr-x 1 jschaeffer jschaeffer 20 Sep 18 19:14 debian


On Mon, Sep 19, 2016 at 4:59 AM, Zoltán Hermann <zoltan...@gmail.com> wrote:

> Hello,
>
> "cp: cannot stat ‘Test/nginx-1.6.2-5+deb8u2/auto’: No such file or
> directory"
>
> Test or Build Test ?
>
> Greetings
> Zoltán
>
>
>
>
> Joshua Schaeffer <jschaeffer0...@gmail.com> ezt írta (2016. szeptember
> 19., hétfő):
> > I'm trying to rebuild the nginx-extras package from Jessie and I'm
> running into an error when I run debuild. I want to add a module to Nginx.
> I've just been following this to rebuild the package:
> https://raphaelhertzog.com/2010/12/15/howto-to-rebuild-debian-packages/
> >
> > Here is what I've done:
> >
> > Downloaded the source using apt.
> >
> > apt-get source nginx-extras
> >
> > Built the dependencies for the package
> >
> > apt-get build-dep nginx-extras
> >
> > Updated the debian/rules file to include my module
> > Added my module to the debian/modules directory
> > Ran debuild to compile the package.
> >
> > debuild -us -uc
> >
> > When I run the debuild command I get an error about no
> Test/nginx-1.6.2-5_deb8u2/auto directory existing. As the error suggests it
> doesn't exist, but I'm not sure how to fix it. Creating the directory just
> leads to another error. Would anybody be able to tell me why I'm getting
> this error and how to fix it. I do have devscripts installed on the machine.
> >
> > jschaeffer@mutalisk:~/Build Test/nginx-1.6.2-5+deb8u2$ debuild -us -uc
> > ...
> > make[1]: Entering directory '/home/jschaeffer/Build
> Test/nginx-1.6.2-5+deb8u2'
> > dh_testdir
> > mkdir -p /home/jschaeffer/Build Test/nginx-1.6.2-5+deb8u2/
> debian/build-full
> > cp -Pa /home/jschaeffer/Build Test/nginx-1.6.2-5+deb8u2/auto
> /home/jschaeffer/Build Test/nginx-1.6.2-5+deb8u2/debian/build-full/
> > cp: cannot stat ‘Test/nginx-1.6.2-5+deb8u2/auto’: No such file or
> directory
> > cp: will not create hard link 
> > ‘Test/nginx-1.6.2-5+deb8u2/debian/build-full/Build’
> to directory ‘Test/nginx-1.6.2-5+deb8u2/debian/build-full/Build’
> > debian/rules:141: recipe for target 'config.arch.full' failed
> > make[1]: *** [config.arch.full] Error 1
> > make[1]: Leaving directory '/home/jschaeffer/Build
> Test/nginx-1.6.2-5+deb8u2'
> > debian/rules:117: recipe for target 'build' failed
> > make: *** [build] Error 2
> > dpkg-buildpackage: error: debian/rules build gave error exit status 2
> > debuild: fatal error at line 1376:
> > dpkg-buildpackage -rfakeroot -D -us -uc failed
>


Rebuilding Debian package from source

2016-09-18 Thread Joshua Schaeffer

I'm trying to rebuild the nginx-extras package from Jessie and I'm running into 
an error when I run debuild. I want to add a module to Nginx. I've just been 
following this to rebuild the package: 
https://raphaelhertzog.com/2010/12/15/howto-to-rebuild-debian-packages/

Here is what I've done:

1. Downloaded the source using apt.
 * apt-get source nginx-extras
2. Built the dependencies for the package
 * apt-get build-dep nginx-extras
3. Updated the debian/rules file to include my module
4. Added my module to the debian/modules directory
5. Ran debuild to compile the package.
 * debuild -us -uc

When I run the debuild command I get an error about the
Test/nginx-1.6.2-5+deb8u2/auto directory not existing. As the error suggests,
it doesn't exist, but I'm not sure how to fix it; creating the directory just
leads to another error. Would anybody be able to tell me why I'm getting this
error and how to fix it? I do have devscripts installed on the machine.

jschaeffer@mutalisk:~/Build Test/nginx-1.6.2-5+deb8u2$ debuild -us -uc
...
make[1]: Entering directory '/home/jschaeffer/Build Test/nginx-1.6.2-5+deb8u2'
dh_testdir
mkdir -p /home/jschaeffer/Build Test/nginx-1.6.2-5+deb8u2/debian/build-full
cp -Pa /home/jschaeffer/Build Test/nginx-1.6.2-5+deb8u2/auto 
/home/jschaeffer/Build Test/nginx-1.6.2-5+deb8u2/debian/build-full/
cp: cannot stat ‘Test/nginx-1.6.2-5+deb8u2/auto’: No such file or directory
cp: will not create hard link 
‘Test/nginx-1.6.2-5+deb8u2/debian/build-full/Build’ to directory 
‘Test/nginx-1.6.2-5+deb8u2/debian/build-full/Build’
debian/rules:141: recipe for target 'config.arch.full' failed
make[1]: *** [config.arch.full] Error 1
make[1]: Leaving directory '/home/jschaeffer/Build Test/nginx-1.6.2-5+deb8u2'
debian/rules:117: recipe for target 'build' failed
make: *** [build] Error 2
dpkg-buildpackage: error: debian/rules build gave error exit status 2
debuild: fatal error at line 1376:
dpkg-buildpackage -rfakeroot -D -us -uc failed


Re: sssd can't find sudo users

2016-02-20 Thread Joshua Schaeffer

I figured this out. Found this in slapd's log:

Feb 20 23:27:19 baneling slapd[22588]: conn=1058 op=0 BIND dn="" method=128
Feb 20 23:27:19 baneling slapd[22588]: conn=1058 op=0 RESULT tag=97 err=0 text=
Feb 20 23:27:19 baneling slapd[22588]: conn=1058 op=1 SRCH 
base="ou=SUDOers,dc=harmonywave,dc=com" scope=2 deref=0 
filter="(&(objectClass=sudoRole)(cn=defaults))"
Feb 20 23:27:19 baneling slapd[22588]: conn=1058 op=1 SEARCH RESULT tag=101 
err=13 nentries=0 text=TLS confidentiality required
Feb 20 23:27:19 baneling slapd[22588]: conn=1058 op=2 SRCH 
base="ou=SUDOers,dc=harmonywave,dc=com" scope=2 deref=0 
filter="(&(objectClass=sudoRole)(|(sudoUser=jschaeffer)(sudoUser=%jschaeffer)(sudoUser=%#5000)(sudoUser=%administrator)(sudoUser=%sftp-users)(sudoUser=%wheel)(sudoUser=%#4000)(sudoUser=%#4001)(sudoUser=%#4002)(sudoUser=ALL)))"
Feb 20 23:27:19 baneling slapd[22588]: conn=1058 op=2 SEARCH RESULT tag=101 
err=13 nentries=0 text=TLS confidentiality required
Feb 20 23:27:19 baneling slapd[22588]: conn=1058 op=3 SRCH 
base="ou=SUDOers,dc=harmonywave,dc=com" scope=2 deref=0 
filter="(&(objectClass=sudoRole)(sudoUser=*)(sudoUser=+*))"
Feb 20 23:27:19 baneling slapd[22588]: conn=1058 op=3 SEARCH RESULT tag=101 
err=13 nentries=0 text=TLS confidentiality required

My /etc/ldap/ldap.conf wasn't configured for StartTLS, so the client wasn't 
connecting with TLS. I updated the URI parameter in the file to:

URI ldap://baneling.harmonywave.com/starttls

And it works now.
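For anyone else chasing the same err=13, ldapsearch's -ZZ flag is a quick
client-side check: it issues StartTLS and fails immediately if the
negotiation doesn't succeed (the base and filter below just mirror the
earlier search):

ldapsearch -ZZ -LLL -Y GSSAPI -H ldap://baneling.harmonywave.com \
    -b ou=SUDOers,dc=harmonywave,dc=com '(objectClass=sudoRole)'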

Thanks,
Joshua



Re: sssd can't find sudo users

2016-02-20 Thread Joshua Schaeffer

Oops, typo in the subject.

On 02/20/2016 09:47 PM, Joshua Schaeffer wrote:

I set up an SSO environment using Debian 8 systems. I have a Kerberos server 
which uses LDAP as its backend. I have users and groups created in OpenLDAP. 
The SSO environment seems to be working correctly. I installed SASL, GSSAPI, 
and SSSD on a test client. I can see my users and groups using getent from my 
test client and I can log into the server (locally and through SSH).

I also have sudo-ldap installed and I'm trying to get SSSD to look up my sudo 
users in LDAP, but I can't seem to get this to work. I keep getting a "user is 
not in the sudoers file.  This incident will be reported." error. My 
configuration for the test client is below:

root@korhal: cat /etc/sssd/sssd.conf
[sssd]
config_file_version = 2
services = nss,pam
domains = HARMONYWAVE

[nss]
debug_level = 5
filter_users = root
filter_groups = root
#fallback_homedir = /home/%u

[pam]

[domain/HARMONYWAVE]
debug_level = 5
auth_provider = krb5
chpass_provider = krb5
krb5_server = immortal.harmonywave.com
krb5_realm = HARMONYWAVE.COM
cache_credentials = false

access_provider = simple
id_provider = ldap
ldap_uri = ldap://baneling.harmonywave.com
ldap_tls_reqcert = demand
ldap_tls_cacert = /etc/ssl/certs/ca.harmonywave.com.pem
ldap_search_base = dc=harmonywave,dc=com
ldap_id_use_start_tls = true
ldap_sasl_mech = GSSAPI
ldap_user_search_base = ou=People,dc=harmonywave,dc=com
ldap_group_search_base = ou=Group,dc=harmonywave,dc=com
ldap_user_object_class = posixAccount
ldap_user_name = uid
ldap_fullname = cn
ldap_user_home_directory = homeDirectory
ldap_group_object_class = posixGroup
ldap_group_name = cn
ldap_sudo_search_base = ou=SUDOers,dc=harmonywave,dc=com

sudo_provider = ldap
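For what it's worth, sssd-sudo(5) also wants SSSD's own sudo responder
enabled and nsswitch pointed at sss before SSSD will serve sudo rules; a
sketch, assuming a sudo build with SSSD support:

# in the [sssd] section of /etc/sssd/sssd.conf
services = nss,pam,sudo

# and in /etc/nsswitch.conf
sudoers: files sss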

Getent shows that it can find me, my group, and that I am part of the wheel 
group:

root@korhal:/home/jschaeffer# getent passwd jschaeffer
jschaeffer:*:5000:5000:Joshua Schaeffer:/home/jschaeffer:/bin/bash
root@korhal:/home/jschaeffer# getent group jschaeffer
jschaeffer:*:5000:jschaeffer
root@korhal:/home/jschaeffer# getent group wheel
wheel:*:4002:jschaeffer

I have the wheel group in OpenLDAP:

root@korhal:/home/jschaeffer# ldapsearch -LLL -Y GSSAPI -H 
ldap://baneling.harmonywave.com -b ou=SUDOers,dc=harmonywave,dc=com
SASL/GSSAPI authentication started
SASL username: jschaef...@harmonywave.com
SASL SSF: 56
SASL data security layer installed.
dn: ou=SUDOers,dc=harmonywave,dc=com
objectClass: top
objectClass: organizationalUnit
ou: SUDOers

dn: cn=%wheel,ou=SUDOers,dc=harmonywave,dc=com
objectClass: top
objectClass: sudoRole
cn: %wheel
sudoUser: %wheel
sudoHost: ALL
sudoCommand: ALL

dn: cn=defaults,ou=SUDOers,dc=harmonywave,dc=com
objectClass: top
objectClass: sudoRole
cn: defaults
description: Add default sudoOptions's here

When I try to run any command with sudo it fails:

jschaeffer@korhal:~$ sudo ls
[sudo] password for jschaeffer:
jschaeffer is not in the sudoers file.  This incident will be reported.

Any help would be appreciated. Thanks,
Joshua




sssd can't fine sudo users

2016-02-20 Thread Joshua Schaeffer

I set up an SSO environment using Debian 8 systems. I have a Kerberos server 
which uses LDAP as its backend. I have users and groups created in OpenLDAP. 
The SSO environment seems to be working correctly. I installed SASL, GSSAPI, 
and SSSD on a test client. I can see my users and groups using getent from my 
test client and I can log into the server (locally and through SSH).

I also have sudo-ldap installed and I'm trying to get SSSD to look up my sudo 
users in LDAP, but I can't seem to get this to work. I keep getting a "user is 
not in the sudoers file.  This incident will be reported." error. My 
configuration for the test client is below:

root@korhal: cat /etc/sssd/sssd.conf
[sssd]
config_file_version = 2
services = nss,pam
domains = HARMONYWAVE

[nss]
debug_level = 5
filter_users = root
filter_groups = root
#fallback_homedir = /home/%u

[pam]

[domain/HARMONYWAVE]
debug_level = 5
auth_provider = krb5
chpass_provider = krb5
krb5_server = immortal.harmonywave.com
krb5_realm = HARMONYWAVE.COM
cache_credentials = false

access_provider = simple
id_provider = ldap
ldap_uri = ldap://baneling.harmonywave.com
ldap_tls_reqcert = demand
ldap_tls_cacert = /etc/ssl/certs/ca.harmonywave.com.pem
ldap_search_base = dc=harmonywave,dc=com
ldap_id_use_start_tls = true
ldap_sasl_mech = GSSAPI
ldap_user_search_base = ou=People,dc=harmonywave,dc=com
ldap_group_search_base = ou=Group,dc=harmonywave,dc=com
ldap_user_object_class = posixAccount
ldap_user_name = uid
ldap_fullname = cn
ldap_user_home_directory = homeDirectory
ldap_group_object_class = posixGroup
ldap_group_name = cn
ldap_sudo_search_base = ou=SUDOers,dc=harmonywave,dc=com

sudo_provider = ldap

Getent shows that it can find me, my group, and that I am part of the wheel 
group:

root@korhal:/home/jschaeffer# getent passwd jschaeffer
jschaeffer:*:5000:5000:Joshua Schaeffer:/home/jschaeffer:/bin/bash
root@korhal:/home/jschaeffer# getent group jschaeffer
jschaeffer:*:5000:jschaeffer
root@korhal:/home/jschaeffer# getent group wheel
wheel:*:4002:jschaeffer

I have the wheel group in OpenLDAP:

root@korhal:/home/jschaeffer# ldapsearch -LLL -Y GSSAPI -H 
ldap://baneling.harmonywave.com -b ou=SUDOers,dc=harmonywave,dc=com
SASL/GSSAPI authentication started
SASL username: jschaef...@harmonywave.com
SASL SSF: 56
SASL data security layer installed.
dn: ou=SUDOers,dc=harmonywave,dc=com
objectClass: top
objectClass: organizationalUnit
ou: SUDOers

dn: cn=%wheel,ou=SUDOers,dc=harmonywave,dc=com
objectClass: top
objectClass: sudoRole
cn: %wheel
sudoUser: %wheel
sudoHost: ALL
sudoCommand: ALL

dn: cn=defaults,ou=SUDOers,dc=harmonywave,dc=com
objectClass: top
objectClass: sudoRole
cn: defaults
description: Add default sudoOptions's here

When I try to run any command with sudo it fails:

jschaeffer@korhal:~$ sudo ls
[sudo] password for jschaeffer:
jschaeffer is not in the sudoers file.  This incident will be reported.

Any help would be appreciated. Thanks,
Joshua