I should add to this that I'm able to run X-windows programs on my
local workstation, such as gnome-terminal, xclock, etc. Xming opens
them up with no issues whatsoever. It's just that I can't get a gdm
login screen when trying to connect via xdmcp.
If it's any help, here is what I see in
A bit more info if it's helpful: I have tried kdm as well and get the
exact same results, gray screen with an X cursor, no login window or
greeter at all.
___
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
I have a domain, let's call it example.com. I am able to do zone
transfers on the local host as follows:
dig example.com AXFR @localhost
This command outputs all of the contents of the zone as expected. I
am unable to do zone transfers on my subdomain though:
dig subdomain.example.com AXFR
What am I missing here?
OK, I was able to sort this one out on my own. I was missing some
trailing periods on my NS records; apparently this was preventing the
transfers. All is good.
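The fix can be illustrated with a hypothetical zone-file fragment (the names are made up; only the trailing dot matters). Without the dot, BIND treats the name as relative and appends the zone origin:

```
; NS records need fully-qualified names ending in a dot.
subdomain  IN  NS  ns1.example.com.  ; correct: fully qualified
subdomain  IN  NS  ns1.example.com   ; wrong: becomes ns1.example.com.example.com.
```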
Thanks, gents!
environment?
thanks
Sean
Is there a flag for the df command to get the total disk space used on
all filesystems as one number? I have a server with a lot of mounted
shares. I'm looking for a simple way to measure rate of data growth
across all shares as one total value.
df -kl | awk '/^\/dev\// { used += $3/1024 } END { printf("%d MB used\n", used) }'
Awesome, this is going into my bag of goodies. Thanks!
If a disk based archive will work, backuppc (
http://backuppc.sourceforge.net/) is fairly painless, and its scheme of
compression and hardlinking duplicates lets you keep about 10x the history
you'd expect. If you need offsite copies you'll have to run an independent
instance elsewhere or
IMO, this is easier to set up than selinux, *may* meet all your needs and
will not be affected by upgrades.
I would agree with this. Try just creating a user with rbash as his login
shell and then sudo /bin/su - username. Poke around and see what you are
able to do, and you'll find out if it
/dev/sda is the virtual disk as it appears to CentOS as I can access it
with hdparm.
Do I need to use another device for the RAID array (which?) or is it
impossible to smart monitor thru a RAID controller?
You probably will want to install the HP Proliant Support Pack as it will
include the
Can anyone help make sense of this? This is an ext3 partition. It's
only showing 403GB out of 426GB used, but then it says only 632MB
available? Where'd the extra ~25GB go?
[EMAIL PROTECTED] df -H /disks/vrac5
Filesystem Size Used Avail Use% Mounted on
/dev/sdb2
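One likely culprit (an assumption, since the thread doesn't resolve it) is ext3's default reserved-block allotment: mkfs.ext3 reserves 5% of the filesystem for root, and df does not count that space as available. A quick sanity check of the arithmetic against the figures above:

```shell
# ext3 reserves 5% of blocks for root by default; on a 426 GB
# filesystem that accounts for roughly the "missing" ~21 GB.
size_gb=426
reserved_gb=$(( size_gb * 5 / 100 ))
echo "about ${reserved_gb} GB reserved for root"
```

The actual reserved count can be read with `tune2fs -l /dev/sdb2` and lowered with `tune2fs -m` if root headroom isn't needed on that partition.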
I would like to use swatch to tail a log file for PageTurnEvent, and
if this is not seen in the past 15 minutes then a restart script
should be run.
Does anyone know if this is possible with the swatch program?
Does anyone know if this is possible with the swatch program?
I don't see how as swatch is looking for things that happen, not
those that don't.
I figured as much. Before I go and write my own, are there any
general purpose utilities that can simply monitor a log file for
inactivity? In
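Before writing something custom, a cron-driven check may be enough. This is a sketch under assumptions (the log path and restart command are placeholders): it tests the file's mtime rather than grepping for PageTurnEvent, which is only equivalent if that event is the main thing written to the log.

```shell
#!/bin/sh
# Restart if the log hasn't been modified in the last 15 minutes.
LOGFILE=/var/log/app/pageturn.log      # placeholder path
if [ -n "$(find "$LOGFILE" -mmin +15 2>/dev/null)" ]; then
    /usr/local/bin/restart-app.sh      # placeholder restart script
fi
```

Run it from cron every few minutes; `find -mmin +15` prints the path (a non-empty string) only when the file is stale.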
I would like to block all DNS queries that come from one particular ip
address. I used TCPdump to verify that the queries were in fact,
coming from this IP:
[EMAIL PROTECTED]:~]$ sudo tcpdump -n udp port 53 and src 10.100.1.1
tcpdump: listening on eth0
11:12:17.162100 10.100.1.1.19233
On Tue, Jul 15, 2008 at 11:55 AM, nate [EMAIL PROTECTED] wrote:
Sean Carolan wrote:
What is confusing me is why my iptables rule is not working correctly.
TCPdump shows that the source is correct. Any ideas?
try blocking tcp as well, most name servers listen on both tcp and
udp.
I do
I do have a rule for blocking TCP, forgot to mention that. You can
see from my tcpdump output above that the inbound packet is UDP
though. I wonder why iptables doesn't block it even with this rule?
The really strange part about this is, if I remove the ACCEPT rules
that are further down in
Strange...your rule seems ok to me. Try with DROP instead of REJECT ?
Nice! it works :)
On Tue, Jul 15, 2008 at 1:43 PM, nate [EMAIL PROTECTED] wrote:
Sean Carolan wrote:
I do have a rule for blocking TCP, forgot to mention that. You can
see from my tcpdump output above that the inbound packet is UDP
though. I wonder why iptables doesn't block it even with this rule?
Try
I'm attempting to block access to port 53 from internet hosts for an
internal server. This device is behind a gateway router so all
traffic appears to come from source ip 10.100.1.1. Here are my
(non-working) iptables rules:
-A RH-Firewall-1-INPUT -s 10.100.1.1 -m tcp -p tcp --dport 53 -j
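A sketch of what the pair of rules might look like with DROP, per the suggestion elsewhere in the thread (chain name and IP are copied from above; -I is used so the rules land ahead of any ACCEPT rules in the chain):

```shell
# Drop DNS queries from 10.100.1.1 over both protocols.
iptables -I RH-Firewall-1-INPUT -s 10.100.1.1 -p udp --dport 53 -j DROP
iptables -I RH-Firewall-1-INPUT -s 10.100.1.1 -p tcp --dport 53 -j DROP
```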
CRITICAL : [ipv6_test] Kernel is not compiled with IPv6 support
[ OK ]
FATAL: Module off not found.
CRITICAL : [ipv6_test] Kernel is not compiled with IPv6 support
Try looking inside /etc/modprobe.conf for these lines:
alias net-pf-10
I've used the guide on mantic.org before, worked well for me:
http://www.mantic.org/wiki/Installing_BackupPC
We use BackupPC extensively where I work; once you get it settled down
and in a steady state it is invaluable.
Yep. They are there. So what is the 'proper' method to get them out (other
than using vi and deleting the lines?)?
I would comment them out and add another comment like this:
# Un-comment these to disable ipv6
#alias net-pf-10 off
#alias ipv6 off
You will need to reboot the server to enable
Are you running tcpdump on the same machine that is doing the filtering?
You do realize that tcpdump sees the packets as they come from the
interface and before they are passed to the filter rules, right?
I had forgotten this important piece of information. Thank you for
pointing this out.
This awk command pulls URLs from an apache config file, where $x is
the config filename.
awk '/:8008\/root/ { printf "%s\t", $3 }' $x
The URL that is output by the script looks something like this:
ajpv12://hostname.network.company.com:8008/root
Is there a way to alter the output so it only shows hostname by
itself? Do I need to pipe this through awk again to clean it up?
awk '/:8008\/root/ { printf "%s\t", $3 }' $x | sed
The awk output that was piped into to the sed command looks like this:
ajpv12://host1.domain.company.com:8008/root
ajpv12://host2.domain.company.com:8008/root
ajpv12://host3.domain.company.com:8008/root
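For extracting just the host part, one approach (a sketch assuming every URL has the ajpv12:// prefix and :8008/root suffix shown above) is a second pipe through sed:

```shell
# Strip the scheme prefix and the :8008/root suffix,
# leaving only the hostname.
echo "ajpv12://host1.domain.company.com:8008/root" \
  | sed -e 's|^ajpv12://||' -e 's|:8008/root$||'
# -> host1.domain.company.com
```

For tab-separated URLs on one line, run them through `tr '\t' '\n'` first so sed sees one URL per line.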
those are supposed to be tab-separated urls, all on one line.
the time to help me Ross. This is going
to be extremely helpful.
Sean
This is a bit naive and childish:
how terribly shocking...I suggest also blocking China, 'cause they're
commies, and France because they eat frogs
The OP is not discriminating against Africa because of government systems,
skin color, or diet. He is trying to reduce lost revenue, credit card
Ever heard of the Western Union scam?
Yes, it usually goes something like this:
Scammer emails an online business asking if he can over-pay you with a
check. The check looks just like any other business check and is often
printed with the name of a real bank. The scammer then asks you to
Sounds similar to the mod_jk connector in apache to connect to
tomcat. When I had to deal with this I set up a dedicated apache
instance on each system running tomcat whose sole purpose for
existence was for testing that connector.
We have decided to take this tactic and set up a dedicated
[EMAIL PROTECTED]:~/ApacheJServ-1.1.2]$ ./configure
--with-jdk-home=/usr/local/mercury/Sun/jdk1.5.0_01
--with-JSDK=/usr/local/mercury/Sun/JSDK2.0/lib/jsdk.jar
--with-apache-src=/usr/include/httpd/
If I run the configure command without --with-apache-src here is what I get:
configure: error:
This seems to indicate that it wants the apache header files, which
are installed in /usr/include/httpd. Anyway if someone has an idea
how I can get a working mod_jserv module for CentOS3 let me know.
Ok, so after doing some more reading it appears that you can simply
build the mod_jserv.so
mod_jserv is really old, are you sure it can be compiled against apache
2?
If you need a jk connector, use mod_jk. You can find the source rpm in
the RHWAS repository (I didn't check if CentOS has a binary version
somewhere).
ciao
ad
Hi Andrea, thanks for your reply. I know mod_jserv is ancient, but we
have to support it because it's still being used on production
machines. Will mod_jk connect in the same way that mod_jserv does?
I have mod_jk module properly loaded now, how would I duplicate this
function of jserv with mod_jk?
<IfModule mod_jserv.c>
ApJServMount /servlets ajpv12://servername.com:8008/root
ApJServAction .html /servlets/gnujsp
</IfModule>
I should add that servername.com is localhost, so this
I found this on the mod_jk howto from the apache site:
For example the following directives will send all requests ending in
.jsp or beginning with /servlet to the ajp13 worker, but jsp
requests to files located in /otherworker will go to remoteworker.
JkMount /*.jsp
Andrea thank you again for your help. I think I have almost got this
set up right. I copied your workers.properties file and the
appropriate entries from mod_jk.conf and now I can connect, but get a
400 error. I only have the default Apache site configured on this
box, and my mod_jk.conf file
JkOptions +ForwardKeySize +ForwardURICompat -ForwardDirectories
I'm not too familiar with those JkOptions, but looking at my old
mod_jk configs I have no JkOptions defined, try removing them and
see if anything changes? My old configs were ajp13, so perhaps
they might be needed with ajp12,
I guess what I'm not clear on is how you replace mod_jserv's configuration:
ApJServMount /servlets ajpv12://host.domain.com:8008/root
with the equivalent version using JkMount.
On the old server running mod_jserv our configuration looks like this:
<IfModule mod_jserv.c>
Might it be
JkMount /*.html ajp12
assuming ajp12 is the name of your worker in worker.properties
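For reference, a hypothetical workers.properties defining that ajp12 worker might look like this (the worker name, host, and port are assumptions drawn from the ApJServMount line earlier in the thread):

```
worker.list=ajp12
worker.ajp12.type=ajp12
worker.ajp12.host=localhost
worker.ajp12.port=8008
```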
Yea, I tried that and even just a simple wildcard like this:
JkMount /* ajp12
but no dice. If I can't solve this then I may have to just install
apache 1.3 everywhere to
Sounds similar to the mod_jk connector in apache to connect to
tomcat. When I had to deal with this I set up a dedicated apache
instance on each system running tomcat whose sole purpose for
existence was for testing that connector.
So, say, set up an apache instance on another port, and have it
How about setting up a cron to monitor it and auto restart if it's not
responding?
wget -q --timeout=30 http://localhost:8008/ -O /dev/null || (command to
restart jserv)
I tried pulling up port 8008 in a web browser, but it doesn't work
quite like that. Apache is configured with mod_jserv
Check
http://support.hyperic.com/confluence/display/hypcomm/HyperForge/#HyperFORGE-pluginforge
for existing plugins.
Perhaps what you want can be done with a JMX plugin ?
Hyperic looks interesting, but anytime someone claims "Zero-Touch
Systems Management" I have to raise a skeptical eyebrow.
owner:group at the end of my script, but was curious
if rsync had this built in. Or maybe there is some ACL setting that
will force the right owner and group on all new files.
thanks
Sean
Do your user and group names on both your source and destination
systems have matching numeric values?
No. The source system is a Windows machine running cygwin-rsyncd.
Linux/UNIX systems carry the numeric values and look up the text
values in /etc/passwd and /etc/group for display. If
What rsync options are you using? rsync has options to preserve owner
and group, if you exclude those options, then won't the files assume
the user and group of the user account on the destination machine? I
haven't tested this, but it looks good on paper.
Currently the script runs as root,
firewall policy worked out.
thanks
Sean
$ ldd /usr/sbin/sendmail.sendmail | grep wrap
libwrap.so.0 => /usr/lib/libwrap.so.0 (0x00319000)
tcp_wrappers never sees the connection directly. sendmail handles it
from start to end.
Thanks for this info. I will set up an iptables rule to block this access.
*ALL* IP access to port 25. Incidentally I found
some other ports that shouldn't have been exposed so those were closed
off as well.
thanks
Sean
I have a bunch of these scattered through /var/log/messages on a
couple of servers:
I/O error: dev 08:20, sector 0
The server functions just fine. Anyone have an idea what might be
causing this error?
I have a virus and spam filter device that can do VRFY commands to
reject invalid email before it gets to the next mail hop. How can I
configure the SMTP server to only allow VRFY commands from one
particular IP address, and nowhere else? I don't want spammers to be
able to hammer on the gateway
Ok, I can't quite figure out how to make this work. I want to
simultaneously log everything for facility local5 in a local file and
a remote syslog-ng server. local7 is working fine getting the
boot.log log entries transferred over to the syslog-ng server, but not
so much with local5. Local
UPDATE:
The problem seems to be on the client side, because when I do this:
logger -p local5.info test
the file does show up properly on the syslog-ng host. Anyone have an
idea why the other processes that write to local5 on the client are
not logging to the remote host?
local5.*
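For comparison, a typical /etc/syslog.conf pair that sends local5 both to a local file and to a remote host might look like this (the file path and server name are placeholders, not from the thread):

```
local5.*    /var/log/local5.log
local5.*    @syslog-server.example.com
```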
I have also found that there are a small handful of hosts that seem to
spit out a line or two of log output once in a while on the server,
but have not yet identified a pattern.
We have a directory full of installation and configuration scripts
that are updated on a fairly regular basis. I would like to implement
some sort of version control for these files. I have used SVN and CVS
in the past, but I thought I'd ask if anyone can recommend a simple,
easy-to-use tool
I don't really think you can get much easier than CVS if you need
centralized management over a network. If it never gets off the
machine then there is RCS. If those aren't simple enough... I don't
think any of the others are going to help.
Thanks for the pointers, it looks like we will go
I have run into a snag with my CVS installation:
[EMAIL PROTECTED]:~]$ cvs co -P installfiles
cvs checkout: Updating installfiles
cvs [checkout aborted]: out of memory; can not allocate 1022462837 bytes
Unfortunately we have a couple of large binary .tgz files in the
repository. I was able to
Try upping your ulimit.
What does ulimit -a give.
[EMAIL PROTECTED]:~]$ ulimit -a
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
file size (blocks, -f) unlimited
max locked memory (kbytes, -l) 4
max memory size (kbytes, -m) unlimited
Checking in binary files into CVS or any repository control system is
usually a broken thing. You want to either check in the stuff inside
the tar ball separately (if it's going to change), or just copy it into
the archive by updating CVSROOT/cvswrappers
This comes back to the point of
Because these tools are meant to deal with source code files and deal
with diffs of such files. You are cramming a gigabyte of compressed
bits at it and it's trying to make sure it could give you a diff of it
later on. I don't have any idea why you would want to store it in a
CVS type
just copy it into the archive by updating CVSROOT/cvswrappers
*.tar -k 'b' -m 'COPY'
*.tbz -k 'b' -m 'COPY'
*.tgz -k 'b' -m 'COPY'
This worked great. Thank you, Stephan. The enormous .tar.gz is now
easily swallowed by the CVS snake.
Thank you, Stephan. The enormous .tar.gz is now
easily swallowed by the CVS snake.
I mis-spelled your name, Stephen, my bad.
Keychain is quite a useful tool for automating SSH logins without
having to use password-less keys:
http://www.gentoo.org/proj/en/keychain/
Normally it is used like this to set the SSH_AUTH_SOCK and
SSH_AGENT_PID variables:
source ~/.keychain/hostname-sh
(This is what's in hostname-sh)
If the variables are exported when you call Perl, they will be
available inside Perl. And, when you call system, they will be
available for those processes as well. Are you having any specific
problems doing this? If you can't make it work, please send more
details on what exactly is not
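The inheritance being described can be demonstrated with a toy example (the socket path is a stand-in value, not a real agent socket):

```shell
# Exported variables are copied into every child process's
# environment, which is all keychain's variables need.
export SSH_AUTH_SOCK=/tmp/example-agent.sock
sh -c 'echo "child sees: $SSH_AUTH_SOCK"'
# -> child sees: /tmp/example-agent.sock
```

The same holds for a perl script and for any `system()` or scp it spawns, as long as the variables were exported before the script started.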
One solution would be to source ~/.keychain/hostname-sh in the shell
before calling the perl script. That should work.
Ok, can't do this because end-users will not like the extra step.
Another one would be to source it before calling scp:
system (source ~/.keychain/hostname-sh;
I just customized my prompt with a PS1= variable. Since updating my
.bashrc this way, when I try to run commands remotely with ssh I get
this:
[EMAIL PROTECTED]:~]$ ssh server pwd
No value for $TERM and no -T specified
/home/scarolan
Can anyone recommend an enterprise-class monitoring system for both
Linux and Windows servers? Here are my requirements:
SNMP trap collection, ability to import custom MIBs
isup/isdown monitoring of ports and daemons
Server health monitors (CPU, Disk, Memory, etc)
SLA reporting with nice graphs
You might take a look at OpenNMS and ZenOSS. I'm not sure if either
could do everything you're asking for out of the box however.
Thanks, ZenOSS just might fit the bill.
I tried to use Zenoss for monitoring a small network (about 5 subnets)
and I had a really hard time with relationships (a version from Sept 2007).
Did you use the 'enterprise' or the OS version?
I understand that Red Hat has purchased and open-sourced (well sort
of) what was formerly known as Netscape Directory Server. I am
looking for version 6 of netscape directory server, does anyone know
if this is available somewhere?
I believe the only thing you can download is the code that was audited
for suitable GPL License which is what is known as Fedora Directory
Server...
http://directory.fedoraproject.org/wiki/Download
I figured as much. I have an old version of Netscape Directory Server
which I was hoping to
You can set that as an option in yum.conf. However, you do run the risk of
running out of space in /boot if you get too many kernels piled up there. The
default is to keep the last 2 (or 3?) kernels and delete the older ones.
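The option being referred to is presumably installonly_limit; a sketch of the relevant /etc/yum.conf line (on older releases the same behavior came from the yum-installonlyn plugin rather than this option):

```
# Keep at most two installed kernels; older ones are removed
# when a new kernel is installed.
installonly_limit=2
```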
I wonder why it is trying to delete a newer kernel than the one
So, yes there are deeply compelling reasons to upgrade. If you want to
have patches for several kernel buffer exploits, as well as many other
security and functionality patches, you need to do one thing:
yum upgrade, and answer yes.
Or even easier:
yum -y upgrade.
When I have some time to
If you want to keep your existing kernel for a while, just change the
grub default back after the update installs the new one. Then you can
switch, reboot, and rebuild the necessary stuff whenever you have a chance.
Thanks, this is probably what I will end up doing. I tend to err on the side
Maybe there's an ntp expert out there who can help me with this. I have an NTP server serving our local network. It is
set up to use pool.ntp.org servers for its upstream sync. ntpq -p reveals that the server is stuck at stratum 16,
which I understand means not synced. The clients are
The zeros in the reach column indicate that the server has been unable to
receive any packets from the upstream servers.
Is your server inside a firewall? If so, perhaps it is blocking NTP traffic.
You need to have it allow UDP port 123 in both directions. You don't need
port forwarding from
This is almost certainly incorrect unless you're running a very, very
old RHEL/CentOS release. I believe /var/lib/ntp is the canonical
directory for the drift file in 4.x and 5.x. I doubt ntpd is allowed to
write to /etc/ntp, especially if SELinux is enabled.
Good observation, Paul. That
Could somebody please repost the solution or point me at the correct
resource.
I would also appreciate advice on how to do this on a RHEL4 server being
updated with up2date.
Is it safe just to delete the old kernel and initrd files from the boot
partition and the grub conf file?
Unless
sure, I use webmin's LDAP Users and Groups module on every network
server that I maintain. It's perfect for my needs.
Yes, this is exactly what I'm trying to do. It would be perfect for our
needs too.
The first question that occurs to me is if you did all that. When you do
'getent
not really, have you run system-config-authentication ? That also
configures pam nss which are necessary items.
Yes, I have and unfortunately when the 'ldap' tags are added to
/etc/nsswitch.conf the system won't allow me to authenticate, su or sudo
at all!
If each user shows only once
Thanks for your patience, Craig. So I took your advice and started
with a fresh install of CentOS 5, and followed the instructions in the
documentation exactly as they are written. I got this far:
[EMAIL PROTECTED] migration]# ./migrate_all_online.sh
Enter the X.500 naming context you wish to
. Now I understand what you meant about LDAP not
being designed for authentication. Thank you again for your time,
Craig. This was a good learning experience for me.
thanks
Sean
sure but for less than $20 and 2-3 hours, you can master LDAP and be the
envy of all the guys in your office and the object of affection for all
the ladies.
;-)
kerberos is actually a more secure authentication system because
passwords don't continually cross the network.
I do plan to get
On Jan 10, 2008 6:38 PM, Craig White [EMAIL PROTECTED] wrote:
On Thu, 2008-01-10 at 14:40 -0600, Sean Carolan wrote:
Can anyone point me to a how to or beginners guide to setting up LDAP
authentication on CentOS5 with replication?
well, if you want something that's comprehensive, I