[CentOS] SELINUX blocks procmail from executing perl script without logging
Hi, I'm upgrading our Request Tracker from CentOS 7 to CentOS 8 and ran into some unexpected SELinux issues with procmail. Even after I create a policy that allows all of the denied operations, procmail is still not allowed to run a Perl script (in my case rt-mailgate). I get the following error in the procmail log: "Can't open perl script "/opt/rt5/bin/rt-mailgate": Permission denied", but there is no corresponding denial entry in /var/log/audit/audit.log. If I set SELinux to permissive, everything works fine. Any idea how to debug this? Best regards, Radu ___ CentOS mailing list CentOS@centos.org https://lists.centos.org/mailman/listinfo/centos
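Denials that never reach audit.log are usually hidden by dontaudit rules in the policy. A common way to surface them (a generic sketch, not specific to this procmail setup) is to rebuild the policy with dontaudit disabled, reproduce the failure, and then search the audit log:

```shell
# Rebuild the loaded SELinux policy with all dontaudit rules disabled,
# so normally-silenced denials start showing up in the audit log.
semodule -DB

# Reproduce the procmail failure here, then search for fresh denials.
ausearch -m AVC,USER_AVC -ts recent

# Re-enable the dontaudit rules once you have the denial you need.
semodule -B
```

The denials found this way can then be fed to audit2allow to extend the local policy module.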
Re: [CentOS] NBDE, clevis and tang for non-root disk
On Tue, Nov 27, 2018 at 8:06 PM mark wrote: > Sorry, I think you misunderstood. The key for root is *not* in > /etc/crypttab - that's only for the secondary ones. > > mark > > I understood correctly; it's just that your mentioning that one can put the key in /etc/crypttab gave me the idea to check whether the initramfs image has the same crypttab content. So now I have two working solutions: 1) /etc/crypttab on the OS has a reference to the file that contains the key to decrypt the second volume (the key is on the encrypted root fs). I have checked, and the initramfs /etc/crypttab has only the line for the root volume, without any reference to the second volume. The root volume gets decrypted by clevis+tang. The second volume is decrypted after the root volume is decrypted, /etc/crypttab is read and the key is found. 2) The initramfs /etc/crypttab was manually updated to add the line for the second volume. Clevis + tang will decrypt both the root fs and the second volume. I was surprised to find out that the /etc/crypttab in the initramfs is different from the one in the OS. So now I'm searching for the correct way to force dracut to include /etc/crypttab unchanged in the initramfs image. Radu
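One way to make dracut carry the full /etc/crypttab into every future initramfs, surviving kernel updates, is an install_items drop-in (a sketch; the drop-in file name is arbitrary, any *.conf file in that directory is read by dracut):

```shell
# Tell dracut to always copy the system /etc/crypttab into the image.
# The file name crypttab.conf is a placeholder of our choosing.
echo 'install_items+=" /etc/crypttab "' > /etc/dracut.conf.d/crypttab.conf

# Rebuild the initramfs for the running kernel; future kernel updates
# run dracut again and will pick up the same drop-in automatically.
dracut -f
```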
Re: [CentOS] NBDE, clevis and tang for non-root disk
On Tue, Nov 27, 2018 at 3:14 PM mark wrote: > What we do is to have the encryption key of the secondary filesystem in > /etc/crypttab, which is, of course, 600. As it boots, it decrypts from > that as > it mounts the rest of the system. > > mark > Thanks, this is working as expected and it gave me the hint needed to find the actual problem. The problem is that the initramfs image generated by dracut -f does not include the /etc/crypttab from the OS (it only contains the entry for the root device). Once I manually added the other volumes to the /etc/crypttab file in the initramfs image, clevis was able to decrypt all volumes. Now the question is why the generated initramfs image has a different /etc/crypttab. How can I specify /etc/crypttab for the initramfs so that further kernel updates will not replace it with the wrong file? Radu
[CentOS] NBDE, clevis and tang for non-root disk
Hi, Has anybody managed to get network bound disk encryption (NBDE) to work with a non-root disk? It works fine for the root device, but the moment I add another volume to /etc/crypttab the system will no longer boot automatically. A tcpdump on the tang server shows no traffic while the system is stuck at the LUKS password prompt. The second encrypted volume is set up in the same way as the root device, and I can unlock it manually using clevis-luks-unlock -d /dev/vda3. I've seen in https://rhelblog.redhat.com/2018/04/13/an-easier-way-to-manage-disk-decryption-at-boot-with-red-hat-enterprise-linux-7-5-using-nbde/ that clevis-luks-askpass.path needs to be enabled, but it doesn't make a difference. Any ideas on what's wrong or how to debug this? Best regards, Radu
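For non-root volumes, one combination often suggested on RHEL/CentOS 7.5+ (a sketch, not verified against this exact setup; the crypttab name and UUID below are placeholders) is to mark the secondary volume as network-dependent so systemd defers it to the late-boot network-bound unlock path:

```shell
# In /etc/crypttab, flag the secondary volume with _netdev so it is
# unlocked after the network is up, e.g. (UUID is a placeholder):
#   luks-data UUID=xxxxxxxx none _netdev

# Enable the units that handle network-bound unlocks after boot.
systemctl enable remote-cryptsetup.target
systemctl enable clevis-luks-askpass.path
```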
[CentOS] Huge write amplification with thin provisioned logical volumes
Hi, I've noticed a huge write amplification problem with thinly provisioned logical volumes and I wonder if anyone can explain why it happens and if and how it can be fixed. The behavior is the same on CentOS 6.8 and CentOS 7.2. I have an NVMe card (Intel DC P3600, 2 TB) on which I create a thinly provisioned logical volume:

pvcreate /dev/nvme0n1
vgcreate vgg /dev/nvme0n1
lvcreate -l100%FREE -T vgg/thinpool
lvcreate -V4M -T vgg/thinpool -n brick1
mkfs.xfs /dev/vgg/brick1

If I run a write test (dd if=/dev/zero of=./zero.img bs=4k count=10 oflag=dsync) I see in iotop that the actual disk write is about 30 times the amount of data that I'm actually writing to disk:

Total DISK READ: 0.00 B/s | Total DISK WRITE: 1001.23 M/s
TIME     TID   PRIO USER DISK READ DISK WRITE SWAPIN  IO      COMMAND
10:59:53 34453 be/4 root 0.00 B/s  30.34 M/s  0.00 %  12.10 % dd if=/dev/zero of=./zero.img bs=4k count=10 oflag=dsync
Total DISK READ: 0.00 B/s | Total DISK WRITE: 991.92 M/s
10:59:54 34453 be/4 root 0.00 B/s  30.05 M/s  0.00 %  12.63 % dd if=/dev/zero of=./zero.img bs=4k count=10 oflag=dsync
Total DISK READ: 0.00 B/s | Total DISK WRITE: 1024.52 M/s
10:59:55 34453 be/4 root 0.00 B/s  31.05 M/s  0.00 %  12.49 % dd if=/dev/zero of=./zero.img bs=4k count=10 oflag=dsync
10:59:55 1057  be/3 root 0.00 B/s  15.39 K/s  0.00 %   0.01 % [jbd2/sda1-8]
Total DISK READ: 0.00 B/s | Total DISK WRITE: 967.60 M/s
10:59:56 34453 be/4 root 0.00 B/s  29.32 M/s  0.00 %  12.75 % dd if=/dev/zero of=./zero.img bs=4k count=10 oflag=dsync
Total DISK READ: 0.00 B/s | Total DISK WRITE: 943.66 M/s
10:59:58 34453 be/4 root 0.00 B/s  28.60 M/s  0.00 %  11.79 % dd if=/dev/zero of=./zero.img bs=4k count=10 oflag=dsync
10:59:58 34448 be/4 root 0.00 B/s   3.84 K/s  0.00 %   0.00 % python /usr/sbin/iotop -o -b -t
Total DISK READ: 0.00 B/s | Total DISK WRITE: 959.40 M/s
10:59:59 34453 be/4 root 0.00 B/s  29.07 M/s  0.00 %  11.81 % dd if=/dev/zero of=./zero.img bs=4k count=10 oflag=dsync
Total DISK READ: 0.00 B/s | Total DISK WRITE: 948.38 M/s
11:00:00 34453 be/4 root 0.00 B/s  28.73 M/s  0.00 %  11.57 % dd if=/dev/zero of=./zero.img bs=4k count=10 oflag=dsync

For a 30 MB/s write at the application level I get around 1000 MB/s write at the device level, i.e. a 33x amplification. On CentOS 6, if I align the data using the values from https://gluster.readthedocs.io/en/latest/Administrator%20Guide/Setting%20Up%20Volumes/ I get only a 7x amplification. On CentOS 7 I see the same 7x amplification using the default lvcreate options. This is the CentOS 7 iotop output:

12:48:29 Total DISK READ : 0.00 B/s | Total DISK WRITE : 32.24 M/s
12:48:29 Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 226.63 M/s
TIME     TID   PRIO USER DISK READ DISK WRITE SWAPIN  IO      COMMAND
12:48:29 15234 be/3 root 0.00 B/s   3.80 K/s  0.00 %  35.20 % [jbd2/dm-8-8]
12:48:29 15258 be/4 root 0.00 B/s  32.24 M/s  0.00 %  10.64 % dd if=/dev/zero of=./zero.img bs=4k count=10 oflag=dsync
12:48:29 14870 be/4 root 0.00 B/s   0.00 B/s  0.00 %   0.05 % [kworker/u80:1]
12:48:29 15240 be/4 root 0.00 B/s   0.00 B/s  0.00 %   0.03 % [kworker/u80:2]
12:48:29 15255 be/4 root 0.00 B/s   3.80 K/s  0.00 %   0.00 % python /usr/sbin/iotop -o -b -t
12:48:30 Total DISK READ : 0.00 B/s | Total DISK WRITE : 31.97 M/s
12:48:30 Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 224.85 M/s
12:48:30 15234 be/3 root 0.00 B/s   0.00 B/s  0.00 %  35.14 % [jbd2/dm-8-8]
12:48:30 15258 be/4 root 0.00 B/s  31.97 M/s  0.00 %  10.61 % dd if=/dev/zero of=./zero.img bs=4k count=10 oflag=dsync
12:48:30 14870 be/4 root 0.00 B/s   0.00 B/s  0.00 %   0.05 % [kworker/u80:1]
12:48:30 15240 be/4 root 0.00 B/s   0.00 B/s  0.00 %   0.03 % [kworker/u80:2]
12:48:31 Total DISK READ : 0.00 B/s | Total DISK WRITE : 32.50 M/s
12:48:31 Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 228.94 M/s
12:48:31 15234 be/3 root 0.00 B/s   0.00 B/s  0.00 %  35.28 % [jbd2/dm-8-8]
12:48:31 15258 be/4 root 0.00 B/s  32.48 M/s  0.00 %  10.72 % dd if=/dev/zero of=./zero.img bs=4k count=10 oflag=dsync

Still, 7x write amplification seems too much. Has anyone seen this or has any idea how to debug it?
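Amplification like this usually tracks the thin-pool chunk size and chunk zeroing: every small dsync write can force a whole chunk (plus metadata) to be provisioned and zeroed. A sketch for checking and reducing it (volume names follow the example above; chunk size choice is an assumption, not a measured optimum):

```shell
# Inspect the chunk size device-mapper is using: the thin-pool target
# line printed by dmsetup includes the data block (chunk) size in
# 512-byte sectors.
dmsetup table | grep thin-pool

# After backing up and removing the existing pool (lvremove, destroys
# data), recreate it with a small explicit chunk size; --zero n turns
# off zeroing of newly provisioned chunks, another source of extra
# writes on first touch.
lvcreate -l100%FREE --chunksize 64k --zero n -T vgg/thinpool
```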
[CentOS] Latest glusterfs 3.8.5 server not compatible with libvirt libgfapi access
Hi, After updating the glusterfs server to 3.8.5 (from Centos-gluster-3.8.repo) the KVM virtual machines (qemu-kvm-ev-2.3.0-31) that access storage using libgfapi are no longer able to start. The libvirt log file shows:

[2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify] 0-gfapi: New graph 73332d32-3937-3130-2d32-3031362d3131 (0) coming up
[2016-11-02 14:26:41.864075] I [MSGID: 114020] [client.c:2356:notify] 0-testvol-client-0: parent translators are ready, attempting connect on transport
[2016-11-02 14:26:41.882975] I [rpc-clnt.c:1947:rpc_clnt_reconfig] 0-testvol-client-0: changing port to 49152 (from 0)
[2016-11-02 14:26:41.889362] I [MSGID: 114057] [client-handshake.c:1446:select_server_supported_programs] 0-testvol-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2016-11-02 14:26:41.890001] I [MSGID: 114046] [client-handshake.c:1222:client_setvolume_cbk] 0-testvol-client-0: Connected to testvol-client-0, attached to remote volume '/data/brick1/testvol'.
[2016-11-02 14:26:41.890035] I [MSGID: 114047] [client-handshake.c:1233:client_setvolume_cbk] 0-testvol-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2016-11-02 14:26:41.917990] I [MSGID: 114035] [client-handshake.c:201:client_set_lk_version_cbk] 0-testvol-client-0: Server lk version = 1
[2016-11-02 14:26:41.919289] I [MSGID: 104041] [glfs-resolve.c:885:__glfs_active_subvol] 0-testvol: switched to graph 73332d32-3937-3130-2d32-3031362d3131 (0)
[2016-11-02 14:26:41.922174] I [MSGID: 114021] [client.c:2365:notify] 0-testvol-client-0: current graph is no longer active, destroying rpc_client
[2016-11-02 14:26:41.922269] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 0-testvol-client-0: disconnected from testvol-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2016-11-02 14:26:41.922592] I [MSGID: 101053] [mem-pool.c:617:mem_pool_destroy] 0-gfapi: size=84 max=1 total=1
[2016-11-02 14:26:41.923044] I [MSGID: 101053] [mem-pool.c:617:mem_pool_destroy] 0-gfapi: size=188 max=2 total=2
[2016-11-02 14:26:41.923419] I [MSGID: 101053] [mem-pool.c:617:mem_pool_destroy] 0-gfapi: size=140 max=2 total=2
[2016-11-02 14:26:41.923442] I [MSGID: 101053] [mem-pool.c:617:mem_pool_destroy] 0-testvol-client-0: size=1324 max=2 total=5
[2016-11-02 14:26:41.923458] I [MSGID: 101053] [mem-pool.c:617:mem_pool_destroy] 0-testvol-dht: size=1148 max=0 total=0
[2016-11-02 14:26:41.923546] I [MSGID: 101053] [mem-pool.c:617:mem_pool_destroy] 0-testvol-dht: size=3380 max=2 total=5
[2016-11-02 14:26:41.923815] I [MSGID: 101053] [mem-pool.c:617:mem_pool_destroy] 0-testvol-read-ahead: size=188 max=0 total=0
[2016-11-02 14:26:41.923832] I [MSGID: 101053] [mem-pool.c:617:mem_pool_destroy] 0-testvol-readdir-ahead: size=60 max=0 total=0
[2016-11-02 14:26:41.923844] I [MSGID: 101053] [mem-pool.c:617:mem_pool_destroy] 0-testvol-io-cache: size=68 max=0 total=0
[2016-11-02 14:26:41.923856] I [MSGID: 101053] [mem-pool.c:617:mem_pool_destroy] 0-testvol-io-cache: size=252 max=1 total=3
[2016-11-02 14:26:41.923877] I [io-stats.c:3747:fini] 0-testvol: io-stats translator unloaded
[2016-11-02 14:26:41.924191] I [MSGID: 101191] [event-epoll.c:659:event_dispatch_epoll_worker] 0-epoll: Exited thread with index 2
[2016-11-02 14:26:41.924232] I [MSGID: 101191] [event-epoll.c:659:event_dispatch_epoll_worker] 0-epoll: Exited thread with index 1
2016-11-02T14:26:42.825041Z qemu-kvm: -drive file=gluster://s3/testvol/c7.img,if=none,id=drive-virtio-disk0,format=qcow2,cache=none: Could not read L1 table: Bad file descriptor

The brick is available, runs on the same host and is mounted in another directory using fuse (to confirm that it is indeed fine).
If I downgrade the gluster server to 3.8.4 everything works fine. Has anyone seen this or has any idea how to debug it? Regards, Radu
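Until the regression is tracked down, one way to keep yum from pulling the broken build back in on the next update (a sketch using the versionlock plugin; the package globs may need adjusting for the centos-gluster repo) is:

```shell
# Install the versionlock plugin, then pin the working release.
yum install -y yum-plugin-versionlock
yum downgrade -y 'glusterfs*-3.8.4*'

# Lock every installed gluster package at its current (3.8.4) version
# so a plain `yum update` will skip 3.8.5.
yum versionlock 'glusterfs*'
```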
Re: [CentOS] Disk usage incorrectly reported by du
note that on a large file system with a large number of files, that's VERY expensive, as rsync has to keep a list of every inode number on the whole file system and verify each directory entry isn't pointing to an inode it has already linked. if there's a few million files, this data structure gets HUGE in memory. Thanks for the tip. I will keep an eye on the memory usage. For the filesystem in question, with around 0.8 million files, the memory usage of rsync is acceptable (around 160 MB as reported by top's RES column and 330 MB for VIRT).
[CentOS] Disk usage incorrectly reported by du
I have an ext4 filesystem for which the reported disk usage is not correct. I noticed the discrepancy after I rsync-ed the content to another filesystem: the used space on the target is almost double the size reported on the source. Both machines are running the same software - with the same kernel version and same coreutils version (which I later upgraded to the latest available version). Both filesystems are clean (verified with fsck.ext4). No sparse files. After further investigation I think that the problem is most likely on the source machine. Here is the du output for one directory exhibiting the problem:

# du -h | grep \/51
201M  ./51/msg/8
567M  ./51/msg/9
237M  ./51/msg/6
279M  ./51/msg/0
174M  ./51/msg/10
273M  ./51/msg/2
341M  ./51/msg/7
408M  ./51/msg/4
222M  ./51/msg/11
174M  ./51/msg/5
238M  ./51/msg/1
271M  ./51/msg/3
3.3G  ./51/msg
3.3G  ./51

After changing into the directory and running du again I get different numbers:

# cd 51
# du -h
306M  ./msg/8
676M  ./msg/9
351M  ./msg/6
338M  ./msg/0
347M  ./msg/10
394M  ./msg/2
480M  ./msg/7
544M  ./msg/4
407M  ./msg/11
312M  ./msg/5
326M  ./msg/1
377M  ./msg/3
4.8G  ./msg
4.8G  .

Do you have any idea what could cause this behaviour?
Re: [CentOS] Disk usage incorrectly reported by du
No process is reading or writing to the target filesystem (it is a backup machine) or the source machine (I am working on an LVM snapshot, but the problem exists for the source filesystem as well). The problem I describe is on the same machine (the source). On Wed, Mar 19, 2014 at 2:33 PM, zGreenfelder zgreenfel...@gmail.com wrote: On Wed, Mar 19, 2014 at 8:14 AM, Radu Radutiu rradu...@gmail.com wrote: I have an ext4 filesystem for which the reported disk usage is not correct. I have noticed the discrepancy after I rsync-ed the content to another filesystem and noticed that the used space on the target is almost double the size reported on the source. Both machines are running the same software - with the same kernel version and same coreutils version (which I later upgraded to the latest available version). Both filesystems are clean (verified with fsck.ext4). No sparse files. After further investigation I think that the problem is most likely on the source machine. Here is the du output for one directory exhibiting the problem:

# du -h | grep \/51
201M  ./51/msg/8
567M  ./51/msg/9
237M  ./51/msg/6
279M  ./51/msg/0
174M  ./51/msg/10
273M  ./51/msg/2
341M  ./51/msg/7
408M  ./51/msg/4
222M  ./51/msg/11
174M  ./51/msg/5
238M  ./51/msg/1
271M  ./51/msg/3
3.3G  ./51/msg
3.3G  ./51

After changing into the directory and running du again I get different numbers:

# cd 51
# du -h
306M  ./msg/8
676M  ./msg/9
351M  ./msg/6
338M  ./msg/0
347M  ./msg/10
394M  ./msg/2
480M  ./msg/7
544M  ./msg/4
407M  ./msg/11
312M  ./msg/5
326M  ./msg/1
377M  ./msg/3
4.8G  ./msg
4.8G  .

Do you have any idea what could cause this behaviour? So: you have software creating files on machines A and B, synchronized from A to B, and now B is using 2x the space; was the software running on B when you did the sync? I've seen similar things happen on all unix systems when you don't close out the file handles on running programs but then overwrite their opened files. To fix it you have to make the programs close and re-open their files.
With well-written programs you can do that via a signal or some other trigger mechanism; others will need to be restarted. Often it's easier to just schedule a reboot and restart everything rather than wade through all the individual process shutdowns, restarts and the time that you'll take affecting production processes, but YMMV. -- Even the Magic 8 ball has an opinion on email clients: Outlook not so good.
Re: [CentOS] Disk usage incorrectly reported by du
http://mradomski.wordpress.com/2007/01/08/finding-an-unlinked-open-file-and-other-lsof-uses/ There are no open files. The filesystem was unmounted, verified (fsck), mounted again - the behavior remains.
Re: [CentOS] Disk usage incorrectly reported by du
The space used by hard-linked files will be included only in the first directory where they are encountered. In your first case, linked files seen prior to the /51 directory would not have had their space included again under that directory. In the second case, _only_ the /51 directory is being examined, so all space will be included. -- Bob Nichols "NOSPAM" is really part of my email address. Do NOT delete it. Thank you, this makes sense. I was starting to worry that the filesystem was broken. I'll modify my rsync command to preserve hard links. Radu
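The effect is easy to reproduce, and rsync's -H option (preserve hard links) is what avoids the duplication on the target. A minimal sketch (the /tmp paths are arbitrary scratch locations):

```shell
# Build a tiny tree where one 1 MiB file is hard-linked into two
# subdirectories.
rm -rf /tmp/hl-demo
mkdir -p /tmp/hl-demo/a /tmp/hl-demo/b
dd if=/dev/zero of=/tmp/hl-demo/a/file bs=1024 count=1024 2>/dev/null
ln /tmp/hl-demo/a/file /tmp/hl-demo/b/file

# du over the whole tree charges the shared blocks only once (~1M),
# while each subdirectory examined on its own reports the full size.
du -sk /tmp/hl-demo
du -sk /tmp/hl-demo/a /tmp/hl-demo/b

# Copying with rsync -a alone would materialise two independent 1 MiB
# files on the target; rsync -aH preserves the hard link instead.
```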
Re: [CentOS] Latest openswan update does no longer connect to Cisco VPN 3000 Series
Both servers are directly connected to the Internet, so NAT should not be involved. I've tried to upgrade again and noticed that pluto keeps dying and restarting every 30 seconds (just enough for the other VPNs to connect). Here is the log from the old (working) openswan version when connecting to the Cisco VPN:

Mar 10 10:00:09 firewall pluto[18894]: added connection description ciscovpntest
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: initiating Main Mode
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] method set to=106
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: ignoring Vendor ID payload [FRAGMENTATION c000]
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: enabling possible NAT-traversal with method draft-ietf-ipsec-nat-t-ike-05
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: transition from state STATE_MAIN_I1 to state STATE_MAIN_I2
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: STATE_MAIN_I2: sent MI2, expecting MR2
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: received Vendor ID payload [Cisco-Unity]
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: received Vendor ID payload [XAUTH]
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: ignoring unknown Vendor ID payload [9bad1e05974f138cfc1f0c2b58144a88]
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: ignoring Vendor ID payload [Cisco VPN 3000 Series]
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: I will NOT send an initial contact payload
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike-02/03: no NAT detected
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: Not sending INITIAL_CONTACT
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: transition from state STATE_MAIN_I2 to state STATE_MAIN_I3
Mar 10 10:00:10 firewall pluto[18894]: ciscovpntest #2: STATE_MAIN_I3: sent MI3, expecting MR3
Mar 10 10:00:11 firewall pluto[18894]: ciscovpntest #2: received Vendor ID payload [Dead Peer Detection]
Mar 10 10:00:11 firewall pluto[18894]: ciscovpntest #2: Main mode peer ID is ID_IPV4_ADDR: 'xxx.xxx.xxx.xxx'
Mar 10 10:00:11 firewall pluto[18894]: ciscovpntest #2: transition from state STATE_MAIN_I3 to state STATE_MAIN_I4

The openswan-2.6.32-27.2.el6_5 (not working) log:

Mar 10 09:57:54 firewall pluto[17287]: added connection description ciscovpntest
Mar 10 09:57:55 firewall pluto[17287]: ciscovpntest #2: initiating Main Mode
Mar 10 09:57:56 firewall pluto[17287]: ciscovpntest #2: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] method set to=106
Mar 10 09:57:56 firewall pluto[17287]: ciscovpntest #2: ignoring Vendor ID payload [FRAGMENTATION c000]
Mar 10 09:57:56 firewall pluto[17287]: ciscovpntest #2: enabling possible NAT-traversal with method draft-ietf-ipsec-nat-t-ike-05
Mar 10 09:57:56 firewall pluto[17287]: ciscovpntest #2: next payload type of ISAKMP NAT-D Payload has an unknown value: 130
Mar 10 09:58:04 firewall pluto[17287]: ciscovpntest #2: discarding duplicate packet; already STATE_MAIN_I1
Mar 10 09:58:05 firewall pluto[17287]: ciscovpntest #2: discarding duplicate packet; already STATE_MAIN_I1
Mar 10 09:58:13 firewall pluto[17287]: ciscovpntest #2: discarding duplicate packet; already STATE_MAIN_I1
Mar 10 09:58:25 firewall pluto[17287]: ciscovpntest #2: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] method set to=106
Mar 10 09:58:25 firewall pluto[17287]: ciscovpntest #2: ignoring Vendor ID payload [FRAGMENTATION c000]
Mar 10 09:58:25 firewall pluto[17287]: ciscovpntest #2: enabling possible NAT-traversal with method draft-ietf-ipsec-nat-t-ike-05
Mar 10 09:58:25 firewall pluto[17287]: ciscovpntest #2: ASSERTION FAILED at /builddir/build/BUILD/openswan-2.6.32/programs/pluto/ikev1_main.c:1112: st->st_sec_in_use==FALSE

and after 30 seconds pluto restarts. To me this looks like a regression. Where should I report this problem? CentOS or Red Hat Bugzilla? Radu
[CentOS] Latest openswan update does no longer connect to Cisco VPN 3000 Series
Has anyone else noticed problems after updating openswan to openswan-2.6.32-27.2.el6_5.i686? In our case a connection to a Cisco VPN 3000 Series would no longer work. I can see an ASSERTION FAILED error in the log, and the connection remains in pending Phase 2:

Mar 7 16:24:40 firewall pluto[7647]: ciscovpntest #2: discarding duplicate packet; already STATE_MAIN_I1
Mar 7 16:24:53 firewall pluto[7647]: ciscovpntest #2: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] method set to=106
Mar 7 16:24:53 firewall pluto[7647]: ciscovpntest #2: ignoring Vendor ID payload [FRAGMENTATION c000]
Mar 7 16:24:53 firewall pluto[7647]: ciscovpntest #2: enabling possible NAT-traversal with method draft-ietf-ipsec-nat-t-ike-05
Mar 7 16:24:53 firewall pluto[7647]: ciscovpntest #2: ASSERTION FAILED at /builddir/build/BUILD/openswan-2.6.32/programs/pluto/ikev1_main.c:1112: st->st_sec_in_use==FALSE
Mar 7 16:24:53 firewall pluto[7647]: ciscovpntest #2: using kernel interface: netkey
Mar 7 16:24:53 firewall pluto[7647]: ciscovpntest #2: #2: ciscovpntest:500 STATE_MAIN_I1 (sent MI1, expecting MR1); EVENT_RETRANSMIT in 39s; nodpd; idle; import:admin initiate
Mar 7 16:24:53 firewall pluto[7647]: ciscovpntest #2: #2: pending Phase 2 for ciscovpntest replacing #0

Downgrading openswan to openswan-2.6.32-27.el6.i686 solves the problem. The problem is restricted to this VPN connection; the other 2 VPNs continue to work fine with the new version. Radu
Re: [CentOS] is there a way to make the kernel see a new ethernet device without rebooting?
I wonder if it has to do with the type of NIC. In my case, vmware says it's of type 'flexible', and the CentOS OS uses the 'pcnet32' driver for it. Try:

modprobe pcnet32

or, if the module is already loaded:

rmmod pcnet32
modprobe pcnet32

Radu
Re: [CentOS] Intel DH67BL + CentOS 5.5 IRQ #177 nobody cared
IRQ 177 nobody cared (try booting with the irqpoll option) Have you tried what the error message suggests (add irqpoll to the kernel line in grub.conf)? Regards, Radu
[CentOS] RHEL 6 beta manuals online
Hi, I've just noticed that the RHEL 6 beta manuals are online at http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/6-Beta/ I thought you might be interested :) Regards, Radu
Re: [CentOS] OpenSSH-5.3p1 selinux problem on CentOS-5.4.
Just for reference, if you want to keep SELinux enabled and create a new instance of sshd (with the stock CentOS 5.4 sshd) for sftp only, you can do the following:

- create a copy of /etc/ssh/sshd_config, e.g.:
  cp /etc/ssh/sshd_config /etc/ssh/sftpd_config
- change/add the following lines in sftpd_config:
  Port 1234
  ChrootDirectory %h
  Subsystem sftp internal-sftp
  AllowUsers externaluser
- let SELinux know that port 1234 (or whatever you put in your sftpd_config) is of type ssh_port_t:
  semanage port -a -t ssh_port_t -p tcp 1234
- make sure that the sftp user's home directory respects the requirements of the ChrootDirectory sshd_config directive: "This path, and all its components, must be root-owned directories that are not writable by any other user or group. For file transfer sessions using sftp, no additional configuration of the environment is necessary if the in-process sftp server is used."
  chown root /home/externaluser
  chmod g-w /home/externaluser
- create a directory in which externaluser will be able to write:
  mkdir /home/externaluser/upload
  chown externaluser /home/externaluser/upload
- create a copy of the /etc/init.d/sshd init script:
  cp /etc/init.d/sshd /etc/init.d/sftpd
- modify it to reflect the sftpd_config config file and a new pid file
- make it start automatically:
  chkconfig --add sftpd
  chkconfig sftpd on

Radu
Re: [CentOS] /etc/aliases file wildcard
On Wed, Oct 28, 2009 at 3:14 AM, Jerry Geis ge...@pagestation.com wrote: I have been trying to find out if the /etc/aliases file can accept wildcards in the user name. I was hoping that a line like (or similar): machine*: myaccount would take any name matching machine* and forward it on to the myaccount mailbox. man aliases didn't really help me, nor did I find anything else. Is there a way to pattern match in /etc/aliases with an * or something? Thanks, Jerry I think it is possible to do this by using postfix and the /etc/postfix/virtual config file. Regards, Radu
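A sketch of how that could look with Postfix regexp/pcre virtual alias maps (the domain, map path and pattern are placeholders, and the pcre map type must be compiled into your postfix build):

```shell
# /etc/postfix/virtual.pcre (hypothetical path) would contain a line
# routing anything matching machine* at the local domain to myaccount:
#   /^machine.*@example\.com$/  myaccount

# Wire the map into main.cf and reload postfix.
postconf -e 'virtual_alias_maps = pcre:/etc/postfix/virtual.pcre'
postfix reload
```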
Re: [CentOS] no more single cd installs?
Installs from the first CD still work. You just need to make sure that you choose "Customize software packages now" instead of the default "Customize later" and deselect every package on the next screen. It has worked for me on all CentOS 5 releases (including 5.3) and Fedora 9, 10 and 11. Later you can use yum to add the Base group and any other required packages. Regards, Radu On Tue, Jul 14, 2009 at 4:46 AM, William Warren hescomins...@emmanuelcomputerconsulting.com wrote: Scott Ehrlich wrote: On Mon, Jul 13, 2009 at 9:10 PM, Julian Thomas j...@jt-mj.net wrote: On Mon, 13 Jul 2009 16:35:07 -0700 Mark Tomandl wrote: I've had this issue with the 5.3 install as well. I found that using the text interface (as opposed to the default graphical interface) and de-selecting everything except Base and Editors (you might not need this option) in the "Customize software selection" screen will work. One way to avoid this, if it is an option for you, is to do a network install from a nearby mirror. Another option is to burn a DVD. All the packages you want, no disc swapping. -- Julian Thomas: j...@jt-mj.net http://jt-mj.net In the beautiful Genesee Valley of Western New York State! -- -- Use a spreadsheet to do the math, but check it by hand just to be sure. not when your colo'ed server had only a cd-rom drive.
Re: [CentOS] Need to test serial port connection
./Setup install
* Checking kernel version (2.4.18 or later required)...
* Checking for glibc...
* Checking glibc version (2.2.4 or later required)...
Uncompressing JRE distribution
./bin/jvmShell install /tmp/ML ./install/install.cfg
Extracting JVM files to /tmp/ML/jre
/tmp/ML/jre/bin/java -cp .:/tmp/ML/lib/em.jar -Djava.compiler=NONE install.LxInstall install ./install/install.cfg /tmp
Exception in thread "main" java.lang.UnsatisfiedLinkError: /tmp/ML/jre/lib/i386/libawt.so: libXp.so.6: cannot open shared object file: No such file or directory

Hi, It looks like the installer ships its own JRE that requires libXp. Just install it using:

yum install libXp

Regards, Radu
Re: [CentOS] Sendmail - STARTTLS not appear on one client
On Fri, Nov 28, 2008 at 11:30 PM, happymaster23 [EMAIL PROTECTED] wrote: Hi, I have Sendmail configured to use STARTTLS for authentication. On all internet connections and computers (that I have tested) the connection over encrypted SMTP works flawlessly. Today I was setting up the mail client on a customer's PC and, as usual, checked the boxes to use SSL for POP3 and SMTP. Next I wanted to check the configuration (by sending email from this mailbox to this mailbox), but it does not work. So I opened telnet, connected via port 25, wrote "ehlo hostname" and found out that STARTTLS is missing. Is it possible that some bad configuration on the client side (firewall, etc...) can cause this error, i.e. that this function is missing in the printout of ehlo? POP3S is working fine. Hi, I have seen this kind of problem (STARTTLS not available for a single client but working for everyone else) when the client is behind a Cisco firewall with the SMTP fixup configuration option enabled. Disable it using

no fixup protocol smtp 25

on the firewall and try again. Regards, Radu
Re: [CentOS] Random files in homedir gets deleted
Hi, you can try to use the kernel audit facility:

1) enable the auditd daemon:
   service auditd start
2) enable audit for the home directory (only audit write operations to the directory inode); the command is not recursive and you cannot use wildcards:
   auditctl -w /home/user -p w
3) after a file disappears, use ausearch to find who removed it (and what command was used to remove it); suppose the file "test" was removed:
   ausearch -f /home/user/test

Radu On Jan 4, 2008 11:25 AM, Christopher Thorjussen [EMAIL PROTECTED] wrote: You can enable auditing to determine if the files are disappearing due to human/machine intervention (audit file system deletes) or if it is due to file system corruption (files disappear and no delete audits recorded). It may just be an errant rsync script. -Ross How do I enable auditing of the home dir? /Christopher
Re: [CentOS] logwatch reports not being emailed
Have you run system-switch-mail and selected postfix? Is the postfix service running? Logwatch is sending mail OK to another mail server on my install of CentOS 5 with postfix. The only change I made was to add the line "MailTo = desired email address" to /etc/logwatch/conf/logwatch.conf. Radu On 6/26/07, Kanwar Ranbir Sandhu [EMAIL PROTECTED] wrote: On Mon, 2007-06-25 at 12:22 -0400, Kanwar Ranbir Sandhu wrote: I can send mail on the command line, and I get it at my email address, delivered to the internal mail server: great! But, every night when logwatch runs, the damned reports never make it to my mailbox. What's worse is that I can manually run logwatch (logwatch --mailto root), and I get the report in my mailbox! All CentOS 5 servers are exhibiting the same behaviour. No ideas what's going on? This is happening on every single CentOS 5 server I've installed (6 so far). Regards, Ranbir -- Kanwar Ranbir Sandhu Linux 2.6.20-1.2944.fc6 i686 GNU/Linux 07:59:25 up 16 days, 22:46, 1 user, load average: 0.39, 0.31, 0.22