Re: [SLUG] Virtualisation and DRBD

2010-08-30 Thread Joel Heenan
While the setup you describe is possible there are a number of drawbacks:

- DRBD disks are slow; how slow depends on the protocol you set them to use,
but there is always a performance penalty. If you DRBD the entire image of a
running Xen guest, you're going to suffer performance hits for everything the
OS does - not just the HA-important data

- Live migration is fantastic but you lose the ability to independently
upgrade either side of the cluster. This makes rollback harder and more
complex

- Likewise, you kinda put all your HA eggs into one basket, since this one
DRBD setup might be the basis for a number of clusters. That can create some
complex failure scenarios, and be hard to maintain, e.g. should you wish to
resize the DRBD disk

- Finally, you now have to make a decision: either you describe the VMs
themselves as cluster resources, or you have clusters on top of clusters, with
a dom0 cluster and a domU cluster.

For these reasons I would lean towards running DRBD inside each VM, with a
minimum amount of shared state on the disk
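To make the per-VM approach concrete, a minimal drbd.conf resource sketch for replicating just one VM's HA-important volume - the resource name, node names, devices and IPs are all placeholders, not from this thread:

```
# Hypothetical drbd.conf excerpt: replicate one small data volume per VM
resource vmdata {
  protocol C;                      # synchronous; A or B trade safety for speed
  on nodeA {
    device    /dev/drbd0;
    disk      /dev/vg0/vmdata;     # back it with a small LV, not the whole guest image
    address   192.168.0.1:7788;
    meta-disk internal;
  }
  on nodeB {
    device    /dev/drbd0;
    disk      /dev/vg0/vmdata;
    address   192.168.0.2:7788;
    meta-disk internal;
  }
}
```

Keeping the resource to the shared-state volume only, rather than the whole guest image, is what avoids the blanket performance penalty described above.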

Joel

On Wed, Aug 25, 2010 at 12:46 PM, Nigel Allen d...@edrs.com.au wrote:


 Hi All

 We're investigating both virtualisation of servers and High Availability
 at the same time.

 Currently looking at Linux-HA and DRBD (amongst others).

 The idea of DRBD appeals to both me and the client as it means (or seems
 to at least) that we could add a third (off-site) machine into the
 equation for real DR.

 What happens when we then introduce Virtualisation into the equation
 (currently have 4 x servers running CentOS & Windoze Server - looking at
 virtualising them onto one single box running Centos-5).

 I suppose the (first) question is: If we run 4 virtualised servers (B,
 C, D, and E) on our working server A (complete with its own storage),
 can we also use DRBD to sync the entire box and dice onto server A1
 (containing servers B1, C1, D1, and E1) or do we have to sync them all
 separately? Will this idea even float? Can we achieve seamless failover
 with this? If not, how would you do it?

 Any input (as ever) gratefully accepted.

 Confused at the Console

 Nigel.





 --
 SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
 Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] A little script help

2010-03-22 Thread Joel Heenan
Hi Josh,

Having such strict rules about your passwords obviously reduces the
keyspace and makes them far less secure. Also, for the really paranoid:
$RANDOM in bash is a pseudo-random number generator that could
possibly be predicted, so you should use a more secure random source.

Here is my attempt at the script you are looking for. I'm guessing you
wanted the middle of the password to be lower-case only? Anyway
shouldn't be too hard to modify if this is not what you're after.


#!/bin/bash

ULENGTH=1 # number of upper-case letters
LLENGTH=5 # number of lower-case letters
NUMLENGTH=2 # number of numbers

ULETTERS=ABCDEFGHIJKLMNOPQRSTUVWXYZ
LLETTERS=abcdefghijklmnopqrstuvwxyz
NUMBERS=123456789

for (( i=0; i<$ULENGTH; i++ ))
do
    PASS=$PASS${ULETTERS:$(($RANDOM%${#ULETTERS})):1}
done
for (( i=0; i<$LLENGTH; i++ ))
do
    PASS=$PASS${LLETTERS:$(($RANDOM%${#LLETTERS})):1}
done
for (( i=0; i<$NUMLENGTH; i++ ))
do
    PASS=$PASS${NUMBERS:$(($RANDOM%${#NUMBERS})):1}
done
echo "$PASS"
exit
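Following the point above about $RANDOM being predictable, here is a sketch of the same 1-upper/5-lower/2-digit layout drawing from /dev/urandom instead. The helper name rand_char is my own invention, not from the original script:

```shell
#!/bin/bash
# Same password layout as the script above, but sourced from the
# kernel's CSPRNG (/dev/urandom) rather than bash's $RANDOM.

rand_char() {
    # Print one character chosen from the set given in $1, using two
    # random bytes read from /dev/urandom (od prints them as one
    # unsigned 16-bit decimal, 0..65535).
    local set=$1 n
    n=$(od -An -N2 -tu2 /dev/urandom | tr -d ' ')
    printf '%s' "${set:$((n % ${#set})):1}"
}

PASS=$(rand_char ABCDEFGHIJKLMNOPQRSTUVWXYZ)
for i in 1 2 3 4 5; do
    PASS=$PASS$(rand_char abcdefghijklmnopqrstuvwxyz)
done
for i in 1 2; do
    PASS=$PASS$(rand_char 123456789)
done
echo "$PASS"
```

Note the modulo step introduces a tiny bias towards early characters in the set; for a 65536-value draw over a 26-character set it is negligible for this purpose.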


Joel

On Tue, Mar 16, 2010 at 6:18 PM, Josh Smith
joshua.smi...@optusnet.com.au wrote:


 Hey everybody.


 I am using this script at the moment. It's a little password generator
 script. I was wondering if anyone could help me out.

 I would like the first letter to be a capital and the last two to be any
 digits.

 The one that I have puts capitals all over the place and sometimes it
 does not have any numbers at all.

 I plan to use this for work (for the DRN). I have run out of things to
 put as my password, and I am not allowed to have the same password twice.


 Thanks in advance Josh.

 --
 SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
 Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Sticky bit on /var/tmp

2010-03-22 Thread Joel Heenan
On Wed, Mar 10, 2010 at 10:07 AM, Craig Dibble cr...@rootdev.com wrote:
 Does anyone have any thoughts on removing the sticky bit on the /var/tmp
 directory and setting it to 777?

In the past there have been exploits which relied upon racing processes
and then modifying files they have placed in /tmp or /var/tmp to
gain elevated privileges. Googling "race tmp exploit" will show up
lots of these. It is almost certainly bad practice to do this.
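For reference, the safe pattern those race exploits defeat is atomic creation with an unpredictable name, which is exactly what mktemp does. A small sketch (the myapp prefix is a placeholder):

```shell
#!/bin/bash
# mktemp creates the file atomically with an unpredictable name, so a
# hostile user sharing a world-writable directory cannot pre-place or
# guess the path before you open it.
TMPFILE=$(mktemp /tmp/myapp.XXXXXX) || exit 1
chmod 600 "$TMPFILE"              # owner read/write only, belt and braces
echo "scratch data" > "$TMPFILE"
# ... work with "$TMPFILE" here ...
rm -f "$TMPFILE"
```

The sticky bit on /tmp and /var/tmp complements this by stopping other users deleting or renaming your file out from under you; removing it widens the attack surface even when applications use mktemp.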

 The reason for this is that we have a large amount of data moving through
 that folder, in the order of more than 100GB.

I think data of that size belongs in /var/cache/ or /var/spool/ or
simply somewhere else entirely. /var/tmp/ is for temporary files that
survive between reboots[1]. If you have an application that requires
lots of space, I would put it on a separate partition and keep it away
from my OS partitions, maybe stuff it all somewhere under /opt/.

[1] 
http://www.pathname.com/fhs/pub/fhs-2.3.html#VARTMPTEMPORARYFILESPRESERVEDBETWEE

Joel
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Asset Tracking / Inventory Management

2009-11-09 Thread Joel Heenan
SLUG,

Looking for a lightweight open-source asset tracking / inventory management
tool. Our needs are basic, but we may want to customize the model or
start getting funky with it.

The tool should:

 - Have a reasonable web based interface
 - Allow us access to the data raw or otherwise leverage it for our
automated tools (monitoring, configuration management)
 - Allow us to customize the model

Basically we want to store the information about a number of xen guests and
their hosts, and some network related information in a database where we can
all access it.

Any suggestions? A simple rails/django app you have used would be
sufficient.

Joel
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] mii-tool or ethtool ?

2008-08-04 Thread Joel Heenan
mii-tool does not support gigabit.

Joel

On Mon, Jul 28, 2008 at 5:34 PM, Tony Sceats [EMAIL PROTECTED] wrote:

 Hi Sluggers,

 Anyone ever had mii-tool and ethtool tell them different things? As you can
 see the duplex setting is being reported differently (this is a Broadcom
 Corporation NetXtreme II BCM5706 Gigabit Ethernet (rev 02)). Does anyone
 know of another method to check the Duplex setting so I can verify which
 one
 is correct?

 (btw, I had changed the settings using mii-tool)

 # ethtool eth0
 Settings for eth0:
Supported ports: [ TP ]
Supported link modes:   10baseT/Half 10baseT/Full
100baseT/Half 100baseT/Full
1000baseT/Full
Supports auto-negotiation: Yes
Advertised link modes:  Not reported
Advertised auto-negotiation: No
Speed: 100Mb/s
Duplex: Half
Port: Twisted Pair
PHYAD: 1
Transceiver: internal
Auto-negotiation: off
Supports Wake-on: g
Wake-on: d
Link detected: yes
 # mii-tool -v eth0
 eth0: 100 Mbit, full duplex, link ok
  product info: vendor 00:08:18, model 21 rev 2
  basic mode:   100 Mbit, full duplex
  basic status: link ok
  capabilities: 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
  advertising:  100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
 --
 SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
 Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Oracle 9i database and samba

2008-02-28 Thread Joel Heenan
On Fri, Feb 29, 2008 at 10:08 AM, Simon Wong [EMAIL PROTECTED] wrote:

 On Thu, 2008-02-28 at 18:04 +0900, Adrian Chadd wrote:
   Can anyone confirm whether Oracle 9i running under Windows will work
 if
   it's DB files are stored on a samba share?
  
   I have to try and run Mincom's Minescape under Windows but want the
 data
   and DB shared via Samba.
 
  .. why do you think this would be a good idea?

 because the instance of Windows will be running under VMware server (or
 Workstation) and I like to keep the data outside the VM where it can
 also be shared with other instances of the app.

 What are you thinking?


Simon,

Why don't you run the database on a big fat database server then have all
the applications connect to it via oracle listeners through TCP/IP. That
would be the standard solution to this problem.

This solution is strange because

  - you may hit issues because not all FS locking commands will be supported
  - running multiple database instances connected to the same DB files is
normally done in an Oracle RAC configuration. You don't run Oracle on the
app servers; you run it on database servers and use a shared filesystem like
ASM or OCFS2, with the DB files normally connected over a SAN. Network
filesystems are not normally used for database files.

Is there a good reason you need more than one oracle instance?

Joel
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Asynchronous Distributed Filesystem

2008-01-15 Thread Joel Heenan
SLUG,

We have a requirement in a new project to have a distributed
filesystem. Files are written to one of 32 * 200MB volumes and we need
to keep them in sync with a DR site. Rsync, I believe, will be just
too slow to replicate changes - unless there is some way to make the
rsync daemon hook into the kernel and know what changes have been
made?
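On the kernel-hook question: Linux's inotify interface (exposed by the inotify-tools package) can report files as they change, so rsync only ever transfers fresh deltas instead of rescanning everything. A pseudocode-level sketch - the DR host name and paths are placeholders, and a production version would need batching and error handling:

```shell
# Sketch: push each file to the DR site as soon as it is closed for writing.
# Requires inotify-tools and rsync; 'drsite' and /data/volumes are made up.
inotifywait -m -r --format '%w%f' -e close_write -e moved_to /data/volumes |
while read -r changed; do
    rsync -az --relative "$changed" drsite:/
done
```

This sidesteps rsync's slow full-tree scan, which is usually the bottleneck at this volume count, at the cost of having to handle deletes and renames yourself.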

RHEL GFS I think will not work across such differences and won't do it
asynchronously.

unionfs is I think too experimental.

Continuous Access, using our SAN to replicate the data, at this point
has to be discounted because of licensing.

How do other people generally solve this problem?

Thanks

Joel
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Asynchronous Distributed Filesystem

2008-01-15 Thread Joel Heenan
Hi,

In response to Adrian: I'm looking for a solution that will work well under RHEL.

Thanks for the suggestions thus far; I'll check them out now. Comments below.

On 1/16/08, Alex Samad [EMAIL PROTECTED] wrote:
 On Wed, Jan 16, 2008 at 05:15:51PM +1100, Joel Heenan wrote:
  SLUG,
 
  We have a requirement in a new project to have a distributed
  filesystem. Files are written to one of 32 * 200MB volumes and we need
  to keep them in sync with a DR site. Rsync, I believe, will be just
  too slow to replicate changes - unless there is some way to make the
  rsync daemon hook into the kernel and know what changes have been
  made?

 lustre comes to mind ?  you haven't really expanded on how the striping is
 supposed to be done ?

Umm, not striping - mirroring. Looking at having all the data replicated
out to a DR site, so there would be two separate instances.


 
  RHEL GFS I think will not work across such differences and won't do it
  asynchronously.
 can you expand on across such differences.


Sorry, I didn't mean to say differences, I meant to say distances. The
DR site is a good 20km away. I have not researched this thoroughly, but
it was my understanding that GFS was designed for fibre-connected
volumes, not for large distances with higher latency.

 
  unionfs is I think too experimental.
 
  Continous Access, using our SAN to replicate the data, at this point
  has to be discounted because of licensing.
 
  How do other people generally solve this problem?
 
  Thanks
 
  Joel
  --
  SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
  Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
 

 --
 We will stand up for terror --we will stand up for freedom.

 - George W. Bush
 10/18/2004
 Marlton, NJ
 in a campaign speech


 --
 SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
 Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Thanks

Joel
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] bypass SA ?

2007-07-08 Thread Joel Heenan

Why did you place 'internal' in quotes - is it internal or not? lol

If it's internal you want to set up your trust paths correctly:

http://wiki.apache.org/spamassassin/TrustPath

Joel

On 7/6/07, Voytek Eymont [EMAIL PROTECTED] wrote:

What's my easiest way to exempt some users from SA checks ?

I currently have some users' 'internal' email rejected by exceeding SA
score, I need to allow it at the expense of these users getting more spam
or whatever. What's my best option?

PLS copy all replies to my email, as I'm not on list at this point , thanks


--
Jul  6 11:30:52 koala amavis[19428]: (19428-06) LMTP::10024
/var/amavis/tmp/amavis-20070706T105644-19428: [EMAIL PROTECTED] -
[EMAIL PROTECTED] SIZE=73604 Received: from koala.sbt.net.au ([127.0.0.1])
by localhost (koala.sbt.net.au [127.0.0.1]) (amavisd-new, port 10024) with
LMTP for
[EMAIL PROTECTED]; Fri,  6 Jul 2007 11:30:52 +1000 (EST)
Jul  6 11:30:52 koala amavis[19428]: (19428-06) Checking: bEXJ8Az8-bAF
[61.29.101.10] [EMAIL PROTECTED] - [EMAIL PROTECTED]
Jul  6 11:30:52 koala amavis[19428]: (19428-06) p007 1 Content-Type:
multipart/mixed
Jul  6 11:30:52 koala amavis[19428]: (19428-06) p001 1/1 Content-Type:
text/plain,
size: 130 B, name:
Jul  6 11:30:52 koala amavis[19428]: (19428-06) p008 1/2 Content-Type:
message/rfc822
Jul  6 11:30:52 koala amavis[19428]: (19428-06) p009 1/2/1 Content-Type:
multipart/related
Jul  6 11:30:52 koala amavis[19428]: (19428-06) p010 1/2/1/1 Content-Type:
multipart/alternative
Jul  6 11:30:52 koala amavis[19428]: (19428-06) p002 1/2/1/1/1 Content-Type:
text/plain, size: 2022 B, name:
Jul  6 11:30:52 koala amavis[19428]: (19428-06) p003 1/2/1/1/2 Content-Type:
text/html, size: 46212 B, name:
Jul  6 11:30:52 koala amavis[19428]: (19428-06) p004 1/2/1/2 Content-Type:
image/gif, size: 3244 B, name: image001.gif
Jul  6 11:30:52 koala amavis[19428]: (19428-06) p005 1/2/1/3 Content-Type:
image/png, size: 4083 B, name: image002.png
Jul  6 11:30:52 koala amavis[19428]: (19428-06) p006 1/2/1/4 Content-Type:
image/gif, size: 5662 B, name: image003.gif
Jul  6 11:30:58 koala amavis[19428]: (19428-06) SEND via SMTP:  -
[EMAIL PROTECTED], 250 2.6.0 Ok, id=19428-06, from MTA([127.0.0.1]:10025):
250 Ok:
queued as E37C12384F8
Jul  6 11:30:58 koala amavis[19428]: (19428-06) SPAM, [EMAIL PROTECTED] -
[EMAIL PROTECTED], Yes, score=6.564 tag=0.5 tag2=6.31 kill=6.31
tests=[AWL=-2.340, BAYES_00=-2.599, DATE_IN_FUTURE_06_12=1.668,
HTML_90_100=0.113,
HTML_MESSAGE=0.001, RCVD_IN_SORBS_WEB=1.456, RCVD_IN_XBL=3.897,
TVD_ACT_193=2,
TVD_FW_GRAPHIC_ID3=2, UPPERCASE_50_75=0.368], autolearn=no, quarantine
bEXJ8Az8-bAF
([EMAIL PROTECTED])

--


--
Voytek

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] graduate programmers

2006-02-23 Thread Joel Heenan
Ken Foskey [EMAIL PROTECTED]:


On Fri, 2006-02-17 at 08:24 +1100, Benno wrote:
  

On Fri Feb 17, 2006 at 08:14:07 +1100, ashley maher wrote:


G'day,

Anybody know the ball park for grad programmers these days in Sydney?
  

I'm not sure, it would totally depend on experience, and skills,
and not all graduates are made the same, but maybe ~$50k.



No graduate is worth $50K.

$30K if you have some experience at programming at Uni.

$40K if you got good marks specifically with the languages that they
want

$45K if you have great marks and specific knowledge, understanding of
the processes and can demonstrate it in an interview.

Ken
  


I have just graduated, and my friends and I are on between 40k - 50k.
The highest I heard was 74 k US, to live in the US and work for
Microsoft. I did a four year degree and most of the people I know are
smart so maybe those figures are skewed. I've certainly heard that 30k
is the norm for IT graduates.

I found it was quite difficult to get a job but that those that were
offered were mostly very good. The competition out there is ridiculous
for graduate positions and some of the bullshit they expect you to put
up with is ridiculous. I would get calls the day beforehand about major
interviews, which gives you no time to prepare or get time off work. At
some places you spend 4 hours completing psychometric surveys to cull
people, and while those surveys are accurate to some degree the level to
which they are used was astounding to me. In one process we were told
that 95% of people had been cut by these surveys.

I picked my job because I liked it rather than the salary. I hope it
pays off.

Joel
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Re: slug Digest, Vol 34, Issue 31

2006-01-22 Thread Joel Heenan
Just further to this I have a Dvico Fusion HD DVB-T Plus and I have it
working sweet. You need to download the stock 2.6.12 kernel and compile
it with the options as specified in this page

http://www.itee.uq.edu.au/~chrisp/Linux-DVB/DVICO/

I'm running debian with latest mythtv, mythweb and so forth. If you're
interested I'll send you my remote config, because that took me hours
manually plugging in the codes and so forth.

Joel

Bill:

I have a Dvico Fusion HD DVB-T card which I haven't been able to get to 
work under any distro.


Kaffeine worked direct from the Kanotix 2005-4 Lite Final LiveCD so
I installed it to this PC, booted Kaffeine, set Channels to Sydney-nth
shore, scanned channels and it worked. Did the same some time ago with an
earlier version of Kanotix for my AverMedia card, which also worked.

I know that this thread was about mythtv, but the Kanotix config files may 
help somebody trying myth tv with a Dvico Fusion card.

Bill


-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Merging two physical directories (possibly using ln -s) and a simple moving question

2005-07-07 Thread Joel Heenan
Slug,

Two embarrassingly simple questions.

First, I have two directories with music files. I want to have them
appear as one directory to the filesystem, but actually be in two
physical locations (it's too large for one disk). So I have
/home/media/music/ and /misc/media/music/ and I want all the
directories in /misc/media/music to appear in /home/media/music.

I thought one way would be to recursively ln -s the directories, and
then cron this operation. I am thinking there is a better way though.
What are your thoughts?
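One common answer to the first question: symlink each top-level subdirectory once, rather than recursing through every file, and re-run the loop whenever new albums appear. A sketch - the merge_dirs helper is my own naming, and the demo runs on throwaway directories rather than the real /misc/media/music and /home/media/music paths from the question:

```shell
#!/bin/bash
# Make every subdirectory of $1 appear inside $2 via symlinks.
merge_dirs() {
    local src=$1 dst=$2 d
    for d in "$src"/*/; do
        [ -d "$d" ] || continue                # no subdirectories: nothing to do
        # -n replaces an existing link rather than descending into it
        ln -sfn "$d" "$dst/$(basename "$d")"
    done
}

# Demo on scratch directories standing in for the real music trees:
SRC=$(mktemp -d); DST=$(mktemp -d)
mkdir "$SRC/albumA" "$SRC/albumB"
merge_dirs "$SRC" "$DST"
ls "$DST"    # the two album names, now as symlinks
```

Cron-ing that one loop is much cheaper than a recursive file-by-file symlink farm; the usual alternative, where you control the mount points, is a union mount, though (as noted for unionfs elsewhere in this archive) those were considered experimental at the time.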

Second question: back in my windows days I would use the GUI to do
tasks like grab all the files created yesterday and move to this
directory. In a linux terminal I can't work out an easy way to do
this. One way is

find -ctime 1 | xargs mv

But that is very inefficient, and not particularly easy to type or
remember. Another way is to open norton commander or a variant and use
that, but call me crazy I still reckon this task and other similar
tasks are just easier with a GUI. Would you agree with that or do you
have solutions to these tasks in command form?
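For the second question, a hedged sketch of the "grab yesterday's files" task in one reusable function. Note that find's `-ctime 1` actually means "status changed between 24 and 48 hours ago"; `-mtime -1` ("modified less than 24 hours ago") is usually what's wanted. Directory and file names below are made up for the demo:

```shell
#!/bin/bash
# Move regular files modified within the last 24 hours into a target dir.
move_recent() {
    local target=$1
    mkdir -p "$target"
    # -maxdepth 1 keeps it to the current directory; -type f skips dirs
    find . -maxdepth 1 -type f -mtime -1 -exec mv {} "$target/" \;
}

# Demo in a scratch directory:
cd "$(mktemp -d)" || exit 1
touch new.txt
touch -d '2 days ago' old.txt     # GNU touch date string
move_recent yesterday
```

Wrapped in a function or alias like this, it is no harder to invoke than the GUI equivalent, which answers the "easy to type or remember" objection.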

Joel
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


RE: [SLUG] Squid: Sometimes files stop downloading after 1 mb

2005-05-10 Thread Joel Heenan
I have looked through the config file and it is all defaults. I tried
setting 

reply_body_max_size 0 

And that didn't work (I think the documentation is wrong on this) because
you need an allow line. 

reply_body_max_size 0 allow all

Works but doesn't fix the problem. Likewise

reply_body_max_size 9 allow all

Also doesn't fix it.

I'm not at work and can't SSH for some reason, but I looked at the machine and
there was no major resource usage. It has 512MB RAM; not sure about the
processor.

It started happening recently but I'm not sure which update caused it,
because it is an intermittent problem.

I guess I will post a bug report.

-Original Message-
From: Kevin Saenz [mailto:[EMAIL PROTECTED] 
Sent: Monday, May 09, 2005 8:34 PM
To: [EMAIL PROTECTED]
Cc: slug@slug.org.au
Subject: Re: [SLUG] Squid: Sometimes files stop downloading after 1 mb

what is your config file like? are you caching sites? how much disk 
space do you have? is your system swapping heaps?
We have one squid server with 1Ghz CPU, 1 Gb RAM and 2 TB of disk space 
looking after 2800 ppl


Hey SLUG,

We are experiencing a strange problem with Squid. We start Squid and it
runs fine at first. Then after a period of time it gets into a state
where, when downloading a file, just before it reaches the 1MB mark
the download pauses as if it has timed out. Once Squid is in this state
it can be reliably reproduced. I have sometimes found the downloads
resume after a period of waiting, say 10-15 minutes, but are then
corrupted. Once in this state Squid can be restarted, and the problem
sometimes re-appears and sometimes goes away. 

We are using Squid 2.5.9-6

Joel

  



-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Squid: Sometimes files stop downloading after 1 mb

2005-05-08 Thread Joel Heenan
Hey SLUG,

We are experiencing a strange problem with Squid. We start Squid and it
runs fine at first. Then after a period of time it gets into a state
where, when downloading a file, just before it reaches the 1MB mark
the download pauses as if it has timed out. Once Squid is in this state
it can be reliably reproduced. I have sometimes found the downloads
resume after a period of waiting, say 10-15 minutes, but are then
corrupted. Once in this state Squid can be restarted, and the problem
sometimes re-appears and sometimes goes away. 

We are using Squid 2.5.9-6

Joel

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] FIX: Winamp hangs when accessing samba shares

2004-09-12 Thread Joel Heenan
I'm sorry if this has been covered, I haven't had time to read this mailing
list much this year! Just want to get this solution googled for some poor
person having the same problem.

If you are experiencing this problem, the solution is 

use sendfile=no

In your smb.conf
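For the benefit of anyone landing here from a search, the setting sits in the global section (or an individual share's section) of smb.conf, roughly like so:

```
[global]
    use sendfile = no
```

Run testparm afterwards to confirm the file still parses, then restart or reload Samba for the change to take effect.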


-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Web Project Management Software - Calendar, Source Sharing, etc.

2004-07-12 Thread Joel Heenan
Dear Slug,

I was wondering if sluggers could recommend some software so that the project
team and I can submit availability calendars and compare the times that we
are all available. I guess it needs to be pretty powerful, so I can submit my
week schedule and have it know to repeat that and then add one-off events.

I would also like some source management software. At the moment I'm looking
at obliquid. Does anyone have any experience with this?

Thanks a lot for your help!

Joel

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


RE: [SLUG] upgrading PHP: Apache 1.3x, RH7.3, RPMs

2004-01-13 Thread Joel Heenan
 I have PHP Version 4.1.2 on RH7.3 Apache 1.3x, which is about 
 the latest PHP build as available from RH RPMs via RHN;
 
 I'd like to install a more recent PHP build with the current 
 Apache 1.3x; as it seems I can not get it from rhn's RPM, 
 what do others do in a similar predicament ?
 
 does the PHP binary differ between Apache 1.x and 2.x, can I 
 install a PHP meant for Apache 2.x?
 
 presumably I can simply install a newer PHP library; if I do 
 that 'manually'
 will the RPM system get very offended ?

You should be aware that RedHat is no longer releasing errata for 7.3 as I
understand it. This means if a security hole is discovered tomorrow you will
be in a very difficult situation and will probably be forced to shut down
services while you build them yourself, etc. 

You can download and install PHP yourself. The RPM system will be very
offended and confused. When you try to install something requiring PHP it may
screw up or whatever. But what are you going to install? Redhat 7.3 no longer
gets any new programs. I would imagine you have most of the stuff out there
requiring PHP that you want.

As of December last year I was in your position, now I have changed to
Debian and not looked back (assuming this email gets through ;-). It's
awesome. Apt-get makes me feel like god. Your best bet is to change OS,
unless you can use the Fedora RPMs (I don't think you can?).

Joel

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


RE: [SLUG] Questions about AWStats Logs !!

2003-11-18 Thread Joel Heenan


 -Original Message-
 From: Louis [mailto:[EMAIL PROTECTED]
 Sent: Tuesday, November 18, 2003 9:21 PM
 To: [EMAIL PROTECTED]; [EMAIL PROTECTED]
 Subject: RE: [SLUG] Questions about AWStats Logs !!


 Hi Joel:

 I have read the manual and also found the config file
 to modify.

 I have some questions see below under [Louis].

 Anyone else, feel free to answer them as well.


 [Louis] My objective here is first to have AWStats stop counting hits
 from cron jobs. For each web site on the server,
 do I enter the IP address of the server itself in each site AWStats
 config file or do I enter the IP address of the site
 hostname in SkipHosts in each site AWStats config file?

If cron is running on localhost, you enter localhost.


 [Louis] Most of the hits in the Hosts (Top 10) section of AWStats are
 from my ISP, as a result of the activities
 I do regularly on the server. I don't want AWStats to look at my ISP IP
 period. The ISP IP is dynamically allocated.
 So do I just enter say the top 2 level only, i.e xxx.xxx.ignore.ignore

 Where xxx.xxx is the part I enter ?

You enter ^xxx.xxx. as in the example directly above SkipHosts.
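For example, a hypothetical awstats.conf excerpt combining both answers - the ISP prefix shown is a placeholder, not a real network:

```
# Skip the local cron hits and everything from the ISP's address block
SkipHosts="localhost 127.0.0.1 REGEX[^203\.10\.]"
```

AWStats accepts plain hostnames/IPs and REGEX[...] patterns in the same space-separated SkipHosts list.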


 [Louis] I have other questions from AWStats log display. With the
 Connect to site from section of AWStats:

 1. the part Links from an Internet Search Engine, the data is provided
 in two parts, i.e

 - engine Pages/Hits

 1.1- What does Hits mean here ?
  Does it mean the no. of times Search Engine spidered the site ?

I think hits means files - the number of things accessed. So if your webpage
has an external stylesheet, or some external javascript, the spider may grab
these.
Google may even get the images for its image search.


 1.2- What does pages viewed means here ?
  Does it mean total pages that the Search Engine spidered on the
  site ?

It means the number of pages viewed, the number of .php, .html, .htm, etc.
files accessed I guess.


 2. the part Links from an external page, sometimes I see pages from
 the site itself
 listed in this section. How can that be ?

Not sure, that is a bit perplexing. Do they have the same hostname? Make
sure your HostAliases is set up correctly - if you have missed a host name
that your site could be called, then it will think it's external.


 Thanks.

No worries hope that helps.

Joel

-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


RE: [SLUG] Questions about AWStats Logs !!

2003-10-28 Thread Joel Heenan

  
   Issue 4:
   ==
   Cron scripts that accesses web pages not via browser but
  command line.
   Does AWStats count stats for this as hits and logged them as Unique
   Visitors for host that resolve to the domain name where the cron is
   executed ?
 
  By default they are included. There is an option called
  SkipHosts which can be used to skip them.

 [Louis] I see. What conf file do I have to modify ? For SkipHost
 I just enter the full IP address of the server ?

 Also can I use this SkipHost so that when I connect to the site
 from my dial up ISP AWStats won't log these hits ?

http://awstats.sourceforge.net/docs/index.html

This explains where the config file should be and what it is called. I don't
mean to shrug off your question but you are honestly better off reading the
official manual for this. I will only explain it more poorly.

Have a look at the documentation in the conf file and on the website about
SkipHosts. You should not need to enter the full IP of the server because it
will only access itself through localhost. To ignore all hits from your ISP
you may need to ignore everyone going through your ISP which may or may not
be a good idea. If you are on a major ISP then your reporting may become
skewed.

Joel

-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


RE: [SLUG] Questions about AWStats Logs !!

2003-10-20 Thread Joel Heenan
 Issue 1:
 
 First, let's start with Unique Visitor. From my understanding a unique
 visitor is a host that came to a web site. A unique visitor can have more
 than one visit, and a visit can consist of more than 1 page viewed. Now
 let's say I connect to the internet using my ISP that uses dynamic IP
 addressing. If I go to my own main url site (i.e. the base index.html) will
 AWStats count this as a Unique Visitor if the IP address changes on each
 connection, despite the fact that I visited the site myself ?

A unique visitor is one IP address accessing your website in one day, I
think. If you have a dynamic IP and you visit your website you will be
counted as one visitor for that day, unless your IP address is reallocated
during the day and you visit again. Most dynamic IPs I have experienced do
not change that frequently: Telstra cable, around 3 months; when I had
dynamic ADSL, around a week with the one IP.

Also I don't know about your network setup, but if you are hosting at home
it's most likely all the connections would be local connections.


 For such a visit does this unique visitor fall in the category Direct
 address / Bookmarks from AWStats Connect to site from ?

 If so, does Direct address / Bookmarks mean I'm the one who visited
 the site, i.e. if most of the hits are from hosts that resolve to my ISP's
 domain name ?

There is an HTTP header called the referrer, in which the browser provides
the URL of a website that has linked to your website. If the referrer is
empty, then it is a direct address / bookmark. That means the user has typed
the address in or selected a bookmark. Otherwise they have followed a link.

If the majority of hits resolve to you, chances are you have been the major
viewer of your website, yes. Direct address / Bookmarks does not necessarily
mean you, though, as your friends may have bookmarked your website.


 Issue 2:
 =
 If I connect to my site via ftp or SSH is this also logged as Unique
 Visitor ? Do uploads and downloads count as visits ?

AWStats is just a program that processes error_log and access_log, so it is
only connected to httpd. You can set it up to report on mail and ftp, but
this is done separately with a different conf file.


 Issue 3:
 =
 AWStats has a section called Authenticated users (Top 10). What falls in
 the category Other logins (and/or anonymous users) ?

Sorry, not sure. I would guess it would be the logins that don't make the
top 10, or the anonymous logins?


 Issue 4:
 ==
 Cron scripts that access web pages via the command line rather than a
 browser. Does AWStats count these as hits and log them as Unique
 Visitors for the host that resolves to the domain name where the cron is
 executed?

By default they are included. There is an option called SkipHosts which can
be used to skip them.
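For reference, the relevant awstats.conf knob looks like this; the hostname here is a placeholder, not taken from the original mail:

```
# Skip hits from localhost and from the box running the cron jobs.
# SkipHosts takes a space-separated list; REGEX[] entries are allowed too.
SkipHosts="127.0.0.1 REGEX[^cronbox\.example\.com$]"
```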

-- 
SLUG - Sydney Linux User's Group - http://slug.org.au/
More Info: http://lists.slug.org.au/listinfo/slug


[SLUG] apt-get Older versions of redhat (freshrpms question)

2003-09-18 Thread Joel Heenan
Slug,

I was going to upgrade my redhat 7.2 system to redhat 9 because I was under
the impression that no more rpms were being built for this version of redhat
(as it says on http://valhalla.freshrpms.net/ ). Then with all the ssh stuff
I ran apt-get update && apt-get upgrade and it found new ssh packages and
installed them!

I am only interested in updating the system so it doesn't get r00ted. Does
anyone know what the deal is, i.e. whether security packages are still being
built and for how much longer? Do I need to upgrade, or should I just keep
checking the freshrpms website to see if they have stopped building packages
for my system?

Joel



RE: [SLUG] Anyone Installed AWStats on a Server

2003-09-06 Thread Joel Heenan

 Hi Sluggers:

 I'm trying to install AWStats onto my web server. However I am stuck with
 the setup/install instructions.

 It asks me to install all scripts in the server's cgi-bin. Where is that
 located for root?

It has nothing to do with root per se. Your webserver normally runs under a
separate user account; mine runs as user apache, group apache. Some run as
www-data group www-data, some as nobody group nobody.

cgi-bin for me is located in /var/www/cgi-bin/ . It depends on your OS; try
"locate cgi-bin" (damn cryptic linux commands ;-)
That is where you install it.


 It also asks me to install some other files in a directory readable by web
 server. Again does root have a directory that I can access via web ?

Forget root. Find out what user your webserver is running under: locate
httpd.conf, then less that file and check it out. Then just make sure
that user has read access to the directory you put the files in. It's not a
big issue; generally most directories are created with read access.
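To find that user without guessing, pulling the User and Group directives out of httpd.conf works; this is a sketch run against a sample file, since the real path varies (/etc/httpd/conf/httpd.conf on Red Hat, for instance):

```shell
# Stand-in for the real httpd.conf; point the awk at yours instead.
printf 'ServerRoot "/etc/httpd"\nUser apache\nGroup apache\n' > httpd.conf.sample

# Print whichever user/group the webserver is configured to run as.
awk '$1 == "User" || $1 == "Group" { print $1, $2 }' httpd.conf.sample
```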

joelh [01:04:32] joelh $ ls -dl /var/www/cgi-bin/
drwxr-xr-x   11 root root 4096 Sep  7 00:00 /var/www/cgi-bin/

Mine is owned by root but has read access by all.

Joel



[SLUG] Processing files with spaces

2003-08-08 Thread Joel Heenan
Hello Sluggers!

I am often moving or copying files with spaces in them. What I would really
like is to be able to just go

| xargs mv %1 /newdir

or something similar. At the moment I am stuck writing scripts like this

#!/bin/bash
# Set IFS to a newline only, so filenames containing spaces survive
# the word splitting done on the command substitution below.
IFS='
'
for f in `/bin/ls | grep -i "$1"`
do
    mv -v "$f" "$2"
done

What I would like is a general solution where I can say "execute this
command on every line in this file". Should I extend this script so it takes
the mv -v command on the command line and works off standard input, or is
there an easier/better way? Surely this is a simple problem but I can't see
the simple solution. :-(
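One general answer, for what it's worth: GNU find's -print0 and xargs -0 pass NUL-terminated names, so the shell never word-splits them. A self-contained sketch (directory names invented for the demo):

```shell
# Create a throwaway source dir with space-laden filenames.
mkdir -p srcdir destdir
touch 'srcdir/file one.txt' 'srcdir/file two.txt'

# -print0 emits NUL-terminated names; xargs -0 reads them back intact,
# and -I{} substitutes each name into the mv command one at a time.
find srcdir -type f -iname '*file*' -print0 | xargs -0 -I{} mv -v {} destdir/
```

The same pipeline replaces the IFS trick and also survives newlines in filenames, which an ls-based loop does not.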

Joel



[SLUG] apt-get on Redhat

2003-07-01 Thread Joel Heenan
Thanks to Luke's advice I have installed apt-get and synaptic, and after a
few hitches everything is running very smoothly. I love it. Just a quick
question: I'm running redhat 7.2; will upgrades only be available as long as
redhat keeps releasing them, or are they built by users?

Is there any pressing reason why I should upgrade considering I have a
slowish system (433 celeron) and I do not want any extra bloat.

Once again thanks Luke, loving it!

Joel



[SLUG] RE: IMAP and procmail (Regular Expression Question)

2003-06-05 Thread Joel Heenan
 * ^((List-Id|X-(Mailing-)?List):(.*[<]\/[^>]*))

My understanding of this regular expression. Indulge me:

The header must begin with either List-Id or X-(Mailing-)List, where the
Mailing- part is optional.
It must then be followed by a colon, then anything up to a <, then a \/,
then anything up to another >

So help me out a bit, I copied the line from Slug into a file and egrep did
not match slug's List-Id line!

List-Id: Linux and Free Software Discussion <slug.slug.org.au>

Why is the \/ in there? Why is the < in brackets; what purpose does this
serve?
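For reference, a hedged sketch of the recipe in context (folder name invented): in procmail, \/ is not part of the pattern at all but a capture marker, so everything the rest of the regex matches lands in $MATCH; egrep reads \/ as a plain escaped /, which would explain the failed match.

```
# Sketch of a .procmailrc recipe using the same style of header test.
# \/ starts the MATCH capture; [^>]* then grabs the list id inside <...>.
:0:
* ^List-Id:.*[<]\/[^>]*
slug-list
```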

:-/

Joel



Re: [SLUG] PHP MySQL help

2003-03-21 Thread Joel Heenan
I use CURRENT_TIMESTAMP with DATETIME as the column type.
Thus substituting CURRENT_TIMESTAMP (unquoted, as it is an SQL keyword) for
time() in the query should fix your woes.

Joel
http://cow.whyi.org


On Thu, 20 Mar 2003, Robert Maurency wrote:

 I've got a MySQL question for you.
 
 Is the php value <?php echo time(); ?> suitable for a MySQL timestamp field?
 
 The reason I ask is because I'm getting an odd result in my web content
 publishing site.
 
 I'm grabbing form data from a query string and inserting it into MySQL.
 (NewsDate is a MySQL timestamp field)
 
 $query = "INSERT INTO News(Headline, Story, Status, Author,
 NewsDate) VALUES('" . $_GET['Headline'] . "', '" . $_GET['Story'] . "', '" .
 $_GET['Status'] . "', '" . $_GET['Author'] . "', '" . time() . "')";
 
 This query works fine, but when I view the information on my news page the
 date returned ($row->NewsDate) is:
 
 00
 
 Which formatted with this function <?php echo date("h:i d M Y",
 $row->NewsDate); ?> returns:
 
 11:00 01 Jan 1970
 
 If I leave the time() function out of my insert statement then 
the returned
 value is:
 
 20030320084636 or (02:14 19 Jan 2038)
 
 Does anyone know what is going on here?
 Any help, much appreciated.
 Rob
 (I'm making the transition from ASP & Access to PHP and MySQL and am
 having a tough time with this GUI-less database.)

 
 


[SLUG] RE: slug digest, Vol 1 #2621 - 12 msgs

2003-03-17 Thread Joel Heenan
Ken Foskey wrote:
On Mon, 2003-03-17 at 18:02, Chris Samuel wrote:
 Your system is not specified in /etc/hosts.allow, by the look
 of things.
From memory using hosts.allow is a major security hole and it is
recommended that you don't use it at all.  My betters will confirm or
deny.

Why is using hosts.allow / hosts.deny a major security hole, and under what
circumstances? I ask because I use them quite a lot, especially to restrict
ssh connections. Is it just that relying on DNS lookups is flawed? Should I
write iptables rules instead to restrict access to SSH and mail etc.? It is
much more convenient to use hosts.allow.
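By way of illustration, the sort of tcp_wrappers setup I mean looks like this (the network address is a placeholder); note it only guards daemons linked against libwrap or run from tcpd:

```
# /etc/hosts.deny -- deny sshd to everyone by default
sshd: ALL

# /etc/hosts.allow -- then allow only the trusted network
sshd: 192.168.1.0/255.255.255.0
```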

Joel
http://cow.whyi.org
