[SLUG] Project Management in Open Source

2008-08-27 Thread Raphael Kraus

G'day all,

I'm currently researching how open source projects are managed. 
Texts, papers, and other resources that deal directly with this topic 
seem scant at best.


Can anyone provide any pointers? (Yes, this is for my Masters degree!) :)

All the best,

Raphael Kraus
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] DODO

2008-08-26 Thread Raphael Kraus
I'd steer clear of Dodo if you want adequate support. There's a
practical side to how they manage to keep bargain-basement prices.

Personally, I have to recommend Internode. They tend to deliver on their
promises. FWIW, no ISP (or other organisation) is perfect IMHO.

All the best,

Raphael

On Wed, August 27, 2008 12:14 pm, jam wrote:
> To my great angst, yet again my ISP has been acquired by iinet, and yet
> again
> I am going to move.
>
>They won't respond to support mail: 10 over the last year
>Drew Keating: "The TIO can't MAKE us change anything"
>iinet are blacklisted as their invoice mail is not RFC compliant
>( [EMAIL PROTECTED])
>
>
> Anybody: comments on DoDo?
> I will run my own mail, dns, www and ssh servers.
> dns is authoritative for tigger.ws
>
> Thanks
> James



-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Prefered video card for Linux dual-head?

2008-08-26 Thread Raphael Kraus

G'day...

I remember days when nVidia setup was not so easy. In fact it was often 
downright frustrating, with conflicting instructions. (From memory there 
was an nv as well as an nvidia driver, and sometimes neither worked!)


The stellar efforts of the Linux developers in figuring out nVidia cards, 
and of distributions like Ubuntu, have really taken the sting out of 
using the cards.


Dean Pittman makes a number of excellent points in this regard, and his 
advice should be well heeded.


Too bad if you need to use a kernel that can't run the pre-built driver. 
Also, just because the current models of nVidia work with current 
setups doesn't mean you won't be disappointed in the near future.


There are /very/ pragmatic (not just idealistic) reasons for buying 
hardware products with open source drivers for use on open source platforms.


All the best,

Raphael

jam wrote:

On Monday 25 August 2008 08:27:55 [EMAIL PROTECTED] wrote:
  

My boss bought second-hand extra monitors for all of us and I now need
to buy a graphics card which can support dual-head for Debian/Ubuntu
(and Windows XP and Vista).

  

[...]

I totally disagree, but I'm pragmatic not idealistic.

nvidia-settings lets you set up 2 monitors, specify the master monitor, and get 
it working in seconds, all in an easy GUI - and IMHO setting up dual monitors 
IS a GUI task.
  



--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Joomla! training courses

2008-08-10 Thread Raphael Kraus
G'day all,

Does anyone know of any Joomla! training courses being run locally?

TIA and all the best,

Raphael Kraus

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Removing ability to delete files from vsftpd (or other ftpd)

2007-09-06 Thread Raphael Kraus
Excellent! Thanks heaps Glen!

Wouldn't you know it, I missed the option in `man vsftpd.conf` (and
couldn't find it on Google).

Thanks again!

Raphael

On Thu, 2007-09-06 at 16:15 +0930, Glen Turner wrote:
> On Thu, 2007-09-06 at 09:32 +1000, Raphael Kraus wrote:
> 
> > What I would like to do is remove the ability of users to delete files,
> > even though they can upload them.
> > 
> > I'm suspecting that maybe this isn't possible using vsftpd
> 
> See vsftpd.conf and cmds_allowed.  Grab the supported commands from the
> vsftpd source code, you can find what they mean from the FTP RFCs.
> 
> Don't list RNFR, RNTO, DELE or RMD in cmds_allowed.
> 
> 
> 
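For anyone finding this in the archives, here's a minimal sketch of what Glen describes. The exact command list below is illustrative only, not authoritative - pull the supported set from the vsftpd source as he suggests:

```shell
# Sketch of an upload-only cmds_allowed line for vsftpd.conf.
# The command list is a placeholder; the point is that RNFR, RNTO,
# DELE and RMD (rename/delete) are deliberately absent, so uploads
# work but deletion does not.
cat > /tmp/vsftpd-upload-only.conf <<'EOF'
write_enable=YES
cmds_allowed=USER,PASS,QUIT,SYST,FEAT,TYPE,PASV,PORT,STOR,PWD,CWD,LIST
EOF
grep '^cmds_allowed=' /tmp/vsftpd-upload-only.conf
```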
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Removing ability to delete files from vsftpd (or other ftpd)

2007-09-05 Thread Raphael Kraus
G'day all,

I've got vsftpd setup so that only specific users can log-in. I have
permissions worked so that users can log-in, upload files but not read
or download them.

What I would like to do is remove the ability of users to delete files,
even though they can upload them.

I suspect that maybe this isn't possible using vsftpd, but maybe
I'm just getting it wrong! (vsftpd uses Unix file permissions, where
write permission on a directory means you can create or delete a file
regardless of the permissions on the file itself.)

TIA and all the best,

Raphael
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] MySQL Description Dumper

2007-09-05 Thread Raphael Kraus
G'day Dean and all,


> im looking for something that will spit out something human readable 
> from mysql, describing all the tables
> 
> something that will connect to mysql and spit out a html data
> dictionary


Personally I find mysqldump to be great. How human-readable the output is
depends on how well the database has been designed. (Having said that, if
the implementation is confusing, it doesn't matter how well you try to
present the raw data.) The proviso here is that it helps to understand at
least some SQL, as after all you want to look at table structure. (It's
really not that complicated, though, if you take a look at it.)

You can get a dump of the database from the command line, or via a web
interface if you are using phpmyadmin.
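As a sketch, schema-only output is usually the closest thing to a data dictionary. The database name and user below are placeholders for your own:

```shell
# --no-data dumps only the CREATE TABLE statements, i.e. the structure;
# drop the flag if you also want the row data as INSERTs.
mysqldump --no-data -u someuser -p mydatabase > schema.sql
```

(Not run here, since it needs a live MySQL server to connect to.)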

HTH - all the best,

Raphael




-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] KVM/QEMU behaviour under (Debian or) Ubuntu Desktop AMD64

2007-08-30 Thread Raphael Kraus

G'day all,

Working with the above, I'm finding some problems. (AM2 socket.)

First, I find excessive:

printk: 249 messages suppressed.
rtc: lost some interrupts at 1024Hz.

messages in /var/log/messages.

This happens under both QEMU and KVM. However, with KVM I can specify 
-no-rtc. (QEMU has the KQEMU enhancement enabled.)


The virtual machines also have periods of slow-down and 
unresponsiveness under KVM. I'm not sure if this is the case under QEMU 
(I'm trying it now).


I also find that I can install Ubuntu under QEMU, but I'm unable to do 
so under KVM. I can install Ubuntu 6 under QEMU and it will then run 
under KVM; however, if I upgrade to Feisty it falls over at bootup 
under KVM but not QEMU.


The hpet emulation arguments are set to yes (as recommended by the KVM 
webpage) by the Ubuntu defaults for KVM. I think the problem must lie 
with QEMU somewhere, and is carried over into KVM.


I want to keep the system fairly standard, so I may need to submit a 
bug report to Ubuntu/Debian.


Have others experienced this behaviour? I'm very keen to hear from you.

All the best,

Raphael
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] removing duplicate files

2007-07-04 Thread Raphael Kraus

G'day...




In that circumstance, there's a way to do it with just ls:

spindle:~/tmp polleyj$ ls -lR */somefile somefile
-rw-r--r--   1 polleyj  polleyj  0 Jul  4 21:20 hello/somefile
-rw-r--r--   1 polleyj  polleyj  0 Jul  4 21:20 somefile

it doesn't scale though; for the general case, find is more useful, as
you've said.



Whilst we're being pedantic...

$ find . -type d -name ""

Yes, you're right find is very useful... ;)

All the best!

Raphael
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] A file transfer problem

2007-06-22 Thread Raphael Kraus

G'day Leslie,

This is called IP aliasing - see the IP aliasing mini-HOWTO 
http://tldp.org/HOWTO/IP-Alias/index.html
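As a sketch of what the HOWTO covers (the interface name and netmask are placeholders for your own setup, and both commands need root):

```shell
# Classic ifconfig alias syntax, as used in the mini-HOWTO:
ifconfig eth0:0 192.168.0.2 netmask 255.255.255.0 up
# Equivalent with the newer iproute2 tools:
ip addr add 192.168.0.2/24 dev eth0
```

(Not run here, as it needs root and a real interface.)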


All the best!

Raphael

Leslie Katz wrote:
I'm trying to transfer a large file from computer A to computer B 
using a null modem cable. The file is a compressed file which, when 
transferred, will allow me to install on computer B the distribution 
called Delilinux. I have no other way of getting the file onto 
computer B than via the cable.


Preparatory to such a transfer, I've run a couple of installation 
floppy disks on computer B. They're supposed, among other things, to 
install pppd on computer B. In order to make it easy to transfer the 
file using pppd, you're prompted to do certain things after the 
installation on computer B of pppd.


One is obviously to run pppd on computer A. However, computer B tells 
you, in effect, that computer A must have the address 192.168.0.1. 
I've run on computer A the command I'm told to, but no connection 
between the two computers is created.


Computer A is behind a modem/router and already has the address 10.1.1.1.

Is that likely to be the reason why no connection is created?

If so, is there a way to give 10.1.1.1 the alias of 192.168.0.2?

If not, it seems that computer B has on it Telnet. Can I use that to 
transfer the file instead?


Thanks for reading this,

Leslie




--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Postfix question

2007-06-13 Thread Raphael Kraus

Of course. Note the use of "E.g." implying "for example" ;)

For an LDAP example see 
http://postfix.wiki.xs4all.nl/index.php?title=Relay_recipient_maps_using_LDAP_against_Active_Directory


Aah... the beauty of documentation... :)

All the best,

Raphael

Dave Kempe wrote:

Raphael Kraus wrote:

E.g. in main.cf:

relay_recipient_maps = hash:/etc/postfix/relay_recipients



And ensure that you put the list in relay_recipients and run postmap 
on it.


or you could make that an ldap lookup i think

dave

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Postfix question

2007-06-13 Thread Raphael Kraus

Howard Lowndes wrote:


That's what I am trying to achieve.  Ideally and ultimately, the 
Postfix machine will do a lookup into the Domino LDAP system to find 
valid users, but until I can get that working I am doing LDAP lookups 
into an OpenLDAP database where the user account names match those in 
the Domino LDAP database, and it's this OpenLDAP lookup that is not 
finding a match but at the same time is not rejecting the email.


The Postfix option you are after is relay_recipient_maps - see
http://www.postfix.org/postconf.5.html#relay_recipient_maps


E.g. in main.cf:

relay_recipient_maps = hash:/etc/postfix/relay_recipients



And ensure that you put the list in relay_recipients and run postmap on it.
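A sketch of that round trip, with placeholder addresses:

```shell
# /etc/postfix/relay_recipients lists every valid recipient, e.g.:
#   someuser@example.com    OK
postmap /etc/postfix/relay_recipients   # builds relay_recipients.db
postfix reload                          # pick up the change
```

(Not run here - it needs a Postfix install in place.)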

All the best,

Raphael
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Postfix question

2007-06-12 Thread Raphael Kraus

Howard,

I think you have the wrong idea about relayhost. The relayhost parameter 
in Postfix's main.cf is for specifying an external SMTP server to 
send through (aka a smarthost).


Don't specify an internal host for this (unless you insist on sending 
through that host). Usually the parameter would be set to your ISP's 
SMTP server, or the SMTP server specified by your SPF records.


What you want to do is set up the relay_domains and transport parameters:

Something like:

relay_domains = yourdomainname.com.au
transport_maps = hash:/etc/postfix/transport

in main.cf and put in /etc/postfix/transport

yourdomainname.com.au smtp:[192.168.0.143]

Again, remember to run postmap /etc/postfix/transport

Obviously you'll also have to adjust domain names and IP addresses as 
needed.


http://www.postfix.org/ has wonderful documentation available. There are 
also a lot of examples that you can learn from.


All the best.

Raphael

Howard Lowndes wrote:
I have a Linux/Postfix server that accepts email from the Internet, 
performs filtering checks on the email and then forwards acceptable 
emails onto a Linux/Domino server on the local intranet.


The Postfix checks are all being done by LDAP so I am able to see what 
is happening on the Linux/Postfix server.


Postfix has the relayhost parameter set in main.cf to point to the 
Linux/Domino server so that emails are correctly forwarded on.


I can see the Linux/Postfix server doing all the checks that I have 
specified in main.cf.  These include:

smtpd_client_restrictions
smtpd_helo_restrictions
smtpd_sender_restrictions
smtpd_recipient_restrictions

However, the smtpd_recipient_restrictions appear to be failing safe 
with a default DUNNO result rather than a default REJECT result.  The 
same checks, when not used in conjunction with a relayhost setting 
appear to default fail as REJECT rather than DUNNO.


Am I right in assuming that the use of the relayhost parameter is 
causing this change in default behaviour, and how is the best way to 
fix it?



--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Frustration Rant

2007-05-23 Thread Raphael Kraus

It appears to be up and working for me.

Rick Welykochy wrote:
I am sick and tired of people who are sick and tired. And I've had 
enough.


BTW: I'd love to post a rant about Linux Vserver, but I cannot even 
get to

their site to garner some information before whining. Can anyone access

http://www.linux-vserver.org/

I can ping it but I can't http to it. :(

-rick

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Hotmail blocking emails

2007-05-10 Thread Raphael Kraus

G'day Stu,


We just got 40 bounces (550s) from hotmail.com. Nothing's changed
here.  We've checked all the usual suspects (dns/mta) but mail is
going everywhere else OK. As usual the bounce-reply is non-specific
and points to a general, vague support page.

Q: If you start getting bounces from a particular mob, what MTA/spam
list checkers to you use to see if you've broken some rule?


Does your outgoing mail server have a reverse DNS PTR record set for its 
IP address?


(Note that it doesn't need to match your domain, just so long as it's a 
hostname that resolves back to the IP.)
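A quick way to check from the shell (the IP and hostname below are placeholders for your own):

```shell
# Look up the PTR record for the MTA's public address...
dig -x 203.0.113.10 +short
# ...and confirm the returned hostname resolves back to the same IP:
dig mail.example.com +short
```

(Not run here, since it needs DNS access to your actual records.)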



You can always check the IP address of your mail servers at 
http://www.dnsstuff.com/ - there's a broad-ranging spam database check 
there.


All the best!

Raphael
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


RE: [SLUG] Problems backing up a server to SMBFS

2006-12-03 Thread Raphael Kraus
Just wanted to thank Marty Richards, Tuxta, Penedo, Sonia Hamilton and
Michael Fox for assisting me on this one a short while ago.

It turns out that using cifs rather than smbfs resolves this.

I have to say I'm surprised at this, as it is Samba 3, and also because
cifs does seem somewhat unstable at times.

Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Raphael Kraus
Sent: Wednesday, 29 November 2006 12:00 PM
To: SLUG List
Subject: [SLUG] Problems backing up a server to SMBFS

G'day all,

Distro/kernel: Debian GNU/Linux 2.6.8-3-k7-smp

I'm writing backups to a directory mounted using smb, with tar:

tar clpsvzf myfilename.tar.gz --atime-preserve --same-owner /

It seems to stop after 2GB, and it isn't apparent what is causing the
problem.

Any suggestions on diagnosing this, or on what the problem may be?
 
TIA!

Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax

Wild Internet & Telecom Pty Ltd, ABN 98 091 470 692 Correspondence -
201/6 Lachlan Street, Waterloo NSW 2017 Telephone 1300-13-9453 |
Facsimile 1300-88-9453 http://www.wildit.net.au DISCLAIMER &
CONFIDENTIALITY NOTICE: The information contained in this email message
and any attachments may be confidential information and may also be the
subject of client legal - legal professional privilege. If you are not
the intended recipient, any use, interference with, disclosure or
copying of this material is unauthorised and prohibited. This email and
any attachments are also subject to copyright. No part of them may be
reproduced, adapted or transmitted without the written permission of the
copyright owner. If you have received this email in error, please
immediately advise the sender by return email and delete the message
from your system.


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Problems backing up a server to SMBFS

2006-11-28 Thread Raphael Kraus
G'day all,

Distro/kernel: Debian GNU/Linux 2.6.8-3-k7-smp

I'm writing backups to a directory mounted using smb, with tar:

tar clpsvzf myfilename.tar.gz --atime-preserve --same-owner /

It seems to stop after 2GB, and it isn't apparent what is causing the
problem.

Any suggestions on diagnosing this, or on what the problem may be?
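For the archives: smbfs historically capped individual files at 2GB, which matches the symptom (and as noted in the follow-up, mounting with cifs resolved it). One workaround sketch, if you're stuck on smbfs, is to pipe tar through split so no single file on the share exceeds the limit. Paths and the chunk size below are placeholders, demonstrated on a tiny tree:

```shell
# Demo with a tiny source tree; in practice the source would be / and
# the chunk size something like 1900m to stay under the 2GB cap.
mkdir -p /tmp/demo-src /tmp/demo-dst
rm -f /tmp/demo-dst/backup.tgz.*
echo hello > /tmp/demo-src/file.txt
tar czf - -C /tmp demo-src 2>/dev/null | split -b 1m - /tmp/demo-dst/backup.tgz.
# Restore later by reassembling the pieces (here we just list contents):
cat /tmp/demo-dst/backup.tgz.* | tar tzf -
```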
 
TIA!

Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax

Wild Internet & Telecom Pty Ltd, ABN 98 091 470 692
Correspondence - 201/6 Lachlan Street, Waterloo NSW 2017
Telephone 1300-13-9453 | Facsimile 1300-88-9453
http://www.wildit.net.au


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


RE: [SLUG] Google for trends

2006-11-27 Thread Raphael Kraus
See
http://www.google.com/trends?q=Ubuntu%2C+Apple+Mac&ctab=0&geo=all&date=all


Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Sam Lawrance
Sent: Sunday, 26 November 2006 10:28 AM
To: Lindsay Holmwood
Cc: SLUG List
Subject: Re: [SLUG] Google for trends


On 26/11/2006, at 10:16 AM, Lindsay Holmwood wrote:

> More telling though
>
> Ubuntu vs Mac:
> http://www.google.com/trends?q=ubuntu%2C+mac&ctab=0&geo=all&date=all

Is that a big mac or an apple mac?  Or maybe a MAC address?

> Ubuntu vs Apple:
> http://www.google.com/trends?q=ubuntu%2C+apple&ctab=0&geo=all&date=all

A pink lady apple, or some other variety?  I know that I like to google
my fruit.


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

Wild Internet & Telecom Pty Ltd, ABN 98 091 470 692
Correspondence - 201/6 Lachlan Street, Waterloo NSW 2017
Telephone 1300-13-9453 | Facsimile 1300-88-9453
http://www.wildit.net.au


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


RE: [SLUG] wget remembering previous downloads

2006-09-08 Thread Raphael Kraus
 
And that should have been

cat wget.log.?? >>wget.log

Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Raphael Kraus
Sent: Friday, 8 September 2006 5:21 PM
To: Craig Dibble
Cc: slug@slug.org.au
Subject: RE: [SLUG] wget remembering previous downloads

Can't find an exclude option that matches only files with  wget :( :( :(

I was going to use something like:

grep 'ftp://' wget.log | cut -f4 -d'/' | wget -m -nH -o wget.log.??
--exclude-??? - ftp://:@/
cat wget.log.?? >wget.log
rm wget.log.??

Damn... Darn... Et al...

Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Raphael Kraus
Sent: Friday, 8 September 2006 5:06 PM
To: Craig Dibble
Cc: slug@slug.org.au
Subject: RE: [SLUG] wget remembering previous downloads

Thanks Craig...

It sounds like the clue I've been after... :)

Don't worry - it'll all be scripted. :) 


Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax

-Original Message-
From: Craig Dibble [mailto:[EMAIL PROTECTED]
Sent: Friday, 8 September 2006 10:19 AM
To: Raphael Kraus
Cc: slug@slug.org.au
Subject: Re: [SLUG] wget remembering previous downloads

Raphael Kraus wrote:
> G'day all...
>  
> I'm doing some man'ning to no avail here...
>  
> Is there a way to have wget (downloading via ftp) to remember what it 
> has successfully downloaded, and not to download the same file again -

> even if the file is deleted from disk?
>  
> If not, has anyone else had to face this problem before?

wget -nc will stop it downloading an existing file in the same
directory, but to do what you're suggesting - if you then move or delete
said file you would probably need to do something clever like output to
a logfile (-o  initially, then -a  to append to the same
file), then pipe that file back in using an 'exclude' to ignore those
files. You'd probably need to wrap that in a script to get it to work
correctly though.

HTH,
Craig
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


RE: [SLUG] wget remembering previous downloads

2006-09-08 Thread Raphael Kraus
Can't find an exclude option that matches only files with  wget :( :( :(

I was going to use something like:

grep 'ftp://' wget.log | cut -f4 -d'/' | wget -m -nH -o wget.log.??
--exclude-??? - ftp://:@/
cat wget.log.?? >wget.log
rm wget.log.??

Damn... Darn... Et al...

Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Raphael Kraus
Sent: Friday, 8 September 2006 5:06 PM
To: Craig Dibble
Cc: slug@slug.org.au
Subject: RE: [SLUG] wget remembering previous downloads

Thanks Craig...

It sounds like the clue I've been after... :)

Don't worry - it'll all be scripted. :) 


Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax

-Original Message-
From: Craig Dibble [mailto:[EMAIL PROTECTED]
Sent: Friday, 8 September 2006 10:19 AM
To: Raphael Kraus
Cc: slug@slug.org.au
Subject: Re: [SLUG] wget remembering previous downloads

Raphael Kraus wrote:
> G'day all...
>  
> I'm doing some man'ning to no avail here...
>  
> Is there a way to have wget (downloading via ftp) to remember what it 
> has successfully downloaded, and not to download the same file again -

> even if the file is deleted from disk?
>  
> If not, has anyone else had to face this problem before?

wget -nc will stop it downloading an existing file in the same
directory, but to do what you're suggesting - if you then move or delete
said file you would probably need to do something clever like output to
a logfile (-o  initially, then -a  to append to the same
file), then pipe that file back in using an 'exclude' to ignore those
files. You'd probably need to wrap that in a script to get it to work
correctly though.

HTH,
Craig
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


RE: [SLUG] wget remembering previous downloads

2006-09-08 Thread Raphael Kraus
Thanks Craig...

It sounds like the clue I've been after... :)

Don't worry - it'll all be scripted. :) 


Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax

-Original Message-
From: Craig Dibble [mailto:[EMAIL PROTECTED] 
Sent: Friday, 8 September 2006 10:19 AM
To: Raphael Kraus
Cc: slug@slug.org.au
Subject: Re: [SLUG] wget remembering previous downloads

Raphael Kraus wrote:
> G'day all...
>  
> I'm doing some man'ning to no avail here...
>  
> Is there a way to have wget (downloading via ftp) to remember what it 
> has successfully downloaded, and not to download the same file again -

> even if the file is deleted from disk?
>  
> If not, has anyone else had to face this problem before?

wget -nc will stop it downloading an existing file in the same
directory, but to do what you're suggesting - if you then move or delete
said file you would probably need to do something clever like output to
a logfile (-o  initially, then -a  to append to the same
file), then pipe that file back in using an 'exclude' to ignore those
files. You'd probably need to wrap that in a script to get it to work
correctly though.

HTH,
Craig
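A sketch of the bookkeeping half of Craig's suggestion: keep an append-only log of files already fetched, and derive an exclusion list from it before the next run. The log format here is simplified to one URL per line (a real wget -o log would need more parsing), and the hosts/paths are placeholders:

```shell
# Append-only record of what has already been downloaded.
DONE=/tmp/wget-done.log
touch "$DONE"
printf 'ftp://host/pub/a.iso\nftp://host/pub/b.iso\n' >> "$DONE"
# Build a comma-separated reject list of basenames for wget's -R option.
REJECT=$(sed 's|.*/||' "$DONE" | sort -u | paste -sd, -)
echo "$REJECT"
# A later mirror run could then use (not executed here):
#   wget -m -nH -R "$REJECT" -a "$DONE" ftp://host/pub/
```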
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] wget remembering previous downloads

2006-09-07 Thread Raphael Kraus



G'day all...

I'm doing some man'ning to no avail here...

Is there a way to have wget (downloading via ftp) remember what it has
successfully downloaded, and not download the same file again - even if the
file is deleted from disk?

If not, has anyone else had to face this problem before?

Regards,

Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax
 
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html

[SLUG] qmail relay queue

2006-09-04 Thread Raphael Kraus
G'day,

Does anyone know how to view messages that are relayed via qmail?

/var/log/qmail/smtpd/current shows the relays as they occur, but not the
contents of the messages.

We want to check whether a mail server is relaying NDRs or spam through
us.
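One technique worth a look (a sketch only - adapt the paths and run script to your install) is to wrap qmail-smtpd with recordio from ucspi-tcp, which copies each SMTP session, message content included, to the log:

```shell
# In the qmail-smtpd run script, insert recordio before qmail-smtpd.
# recordio writes the full SMTP dialogue to stderr, which multilog
# then captures alongside the existing smtpd log.
tcpserver -v 0 smtp recordio /var/qmail/bin/qmail-smtpd
```

(Not run here; it needs a qmail/ucspi-tcp install. Expect the logs to grow quickly with this enabled.)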

Thanks! 
 
Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


RE: [SLUG] Poor Gb network performance

2006-08-29 Thread Raphael Kraus
> Also, I've seen problems with the Realteks in 2.6.9 kernels, that
seemed to go away with later 2.6 kernels (around 2.6.13, from memory).
> Given the debian box is 2.6.8, that might be an issue, so you might
want to try a later kernel. 

That may be a good point; however, as the machine is off-site, rather
critical, runs some compiled software, and eth0 is less important (it's
our internal network - I'm more worried about large backups not taking
forever to transfer), I'm not too keen to upgrade the kernel just at the
moment  :^/

I'll discuss this with my colleague and we'll see if we can organise to
give it a crack...

Thanks heaps for all your help!

Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


RE: [SLUG] Poor Gb network performance

2006-08-29 Thread Raphael Kraus
Hrmm... I have thought about the cables - may be worth a shot as the
cabinet is so tight, but it is a long shot...

Aah... An Nvidia "softnic" is on-board both of the machines, plus a PCI
Realtek. On Debian the Nvidia is eth0 and the Realtek eth1, and vice
versa on the Ubuntu box.

Not sure about any control utilities - it's been a while since I've
examined NIC drivers (or any other drivers for that matter). I'll check
it out. (References welcome :)

Thanks heaps for all your help!


Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Jeff Waugh
Sent: Tuesday, 29 August 2006 4:45 PM
To: Raphael Kraus
Cc: slug@slug.org.au
Subject: Re: [SLUG] Poor Gb network performance



> Speed: 1000Mb/s
> Duplex: Full

> [   22.910730] eth0: forcedeth.c: subsystem: 01043:8141 bound to

- 8< - snip - 8< -

> eth0: Identified chip type is 'RTL8169s/8110s'.
> eth0: RTL8169 at 0xf89ca000, 00:0f:b5:8d:59:ad, IRQ 201
> eth0: Auto-negotiation Enabled.
> eth0: 1000Mbps Full-duplex operation.

So you've got an Nvidia "softnic" (rather like a softmodem, so kind of
icky if you're looking for performance) and a Realtek, which are not
widely known for their performance. I'm not sure that explains the full
extent of your performance problems, but it's a start.

Are there any control utilities or module parameters for forcedeth?
Perhaps you could switch it between performance modes (to favour CPU or
throughput, pick your poison).

Long shot: Make sure you have a good cable.

- Jeff

-- 
linux.conf.au 2007: Sydney, Australia
http://lca2007.linux.org.au/
 
  "Suddenly there was a terrible roar all around us, and the sky was full
   of what looked like huge bats." - Raoul Duke, Fear and Loathing in Las
   Vegas
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


RE: [SLUG] Poor Gb network performance

2006-08-28 Thread Raphael Kraus

> And what are you using to test this, incidentally?

I was using ncftp to watch the transfer speed.
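(For a cleaner number, I could take FTP and the disks out of the picture
entirely - assuming iperf is installed on both ends, something like the
following; the address is just a placeholder for the receiving box:)

```shell
# On the receiving host, start iperf in server mode:
iperf -s

# On the sending host, run a TCP throughput test against it
# (192.168.1.10 stands in for the receiver's real address):
iperf -c 192.168.1.10
```

That would measure raw TCP throughput only, so if iperf reports near
wire speed the bottleneck is in FTP or the disks rather than the network.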

Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


RE: [SLUG] Poor Gb network performance

2006-08-28 Thread Raphael Kraus
On the Ubuntu box:
---CUT---
# ethtool eth0
Settings for eth0:
        Supported ports: [ MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 9
        Transceiver: external
        Auto-negotiation: on
        Supports Wake-on: g
        Wake-on: d
        Link detected: yes
=

..and...

---CUT---
# grep eth0 /var/log/dmesg
[   22.910730] eth0: forcedeth.c: subsystem: 01043:8141 bound to :00:0a.0
=


On the Debian box:
---CUT---
# ethtool eth0
Settings for eth0:
No data available
=

...and...

---CUT---
# grep eth0 /var/log/dmesg
eth0: Identified chip type is 'RTL8169s/8110s'.
eth0: RTL8169 at 0xf89ca000, 00:0f:b5:8d:59:ad, IRQ 201
eth0: Auto-negotiation Enabled.
eth0: 1000Mbps Full-duplex operation.
=

:(

Regards,
 
Raphael Kraus
Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax

-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On
Behalf Of Jeff Waugh
Sent: Tuesday, 29 August 2006 3:46 PM
To: slug@slug.org.au
Subject: Re: [SLUG] Poor Gb network performance



> Is anyone aware of any issues that would be causing this?

Run:

  ethtool <interface>

on both machines, and reply with the output.

- Jeff

-- 
linux.conf.au 2007: Sydney, Australia
http://lca2007.linux.org.au/
 
   "Debian is not as minor as many business end people think." - Alan Cox
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Poor Gb network performance

2006-08-28 Thread Raphael Kraus
G'day all,

We've got Ubuntu and Debian machines connected to our gigabit network.

Ubuntu kernel: 2.6.15-26-amd64-server
Debian kernel: 2.6.8.3-k7-smp

(Processors are AMD Athlon64 dual-cores.)

Neither machine seems to be achieving decent data throughput across the
network. We're getting 10 ~ 12 MB/s (usually ~11 MB/s).

By my calculations we should be getting around ten times this.  Netgear
switches are being used.
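(The back-of-envelope sums, for the record - and note the observed rate
is almost exactly what a link that has fallen back to 100 Mb/s, or a
100 Mb/s bottleneck somewhere in the path, would deliver:)

```shell
# Gigabit Ethernet: 1000 Mb/s at 8 bits per byte, ignoring protocol
# overhead, gives the theoretical ceiling in MB/s:
echo $((1000 / 8))   # 125

# For comparison, a link stuck at 100 Mb/s:
echo $((100 / 8))    # 12 -- suspiciously close to the ~11 MB/s we see
```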

Is anyone aware of any issues that would be causing this?

Regards,
 
Raphael Kraus
Systems Administrator and Software Developer
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] FTP directory synchronisation

2006-02-12 Thread Raphael Kraus

Thanks to everyone who gave assistance with this one.

Greatly appreciated!

Raphael



--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] FTP directory synchronisation

2006-02-12 Thread Raphael Kraus

G'day Glen and all,

Glen Turner wrote:

> See wget and its associated 'mirror' script.

Thanks heaps! Can you provide a reference for the mirror script?

All the best

Raphael


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] FTP directory synchronisation

2006-02-12 Thread Raphael Kraus

G'day all,

I want to perform FTP synchronisation (similar to rsync) - i.e. a local
and a remote directory are brought up to date on a set schedule.


Where two files of the same name exist on both remote and local hosts, 
the older one is overwritten with the newer. Subdirectories are searched 
recursively.


Are there any suggestions on how this can be / should be done?
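(To make the requirement concrete, here's roughly the shape of thing I'm
after, sketched with lftp's mirror command - the host, paths, and
credentials are placeholders, and FTP's coarse timestamps mean
--only-newer is only approximately "newer wins":)

```shell
# Pull anything newer from the remote, then push anything newer back;
# lftp's mirror runs one direction at a time, and recurses by default:
lftp -u user,password ftp.example.com \
     -e "mirror --only-newer /remote/dir /local/dir; \
         mirror --reverse --only-newer /local/dir /remote/dir; \
         quit"
```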

TIA!

Raphael


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Video conferencing

2005-12-14 Thread Raphael Kraus
G'day...

I'd like to video conference with my father (Townsville, QLD) and my
brother (Brooklyn, NY, USA). They've both got Macs (OS X) and surely
have iChat.

I run a Debian box, and a webcam (Logitech Orbit) that works.

Can anyone offer any hints, tips, or advice on what software I should use
to do this? Ideally, it'd be great if we each had two to three small
windows (one each for the other two parties, and an optional third of
ourselves for our own vanity)...

I don't know if this is possible, or whether gnome-meeting is capable of it.

All nudges in the right direction appreciated :)

Thanks

Raphael
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] [ot] Recursive web query to capture web database

2005-11-29 Thread Raphael Kraus
G'day again Richard and all,

> > Any suggestions 
> > 
> 
> WWW::Mechanize :)

Additionally, I'd recommend signing up to the Sydney Perl Mongers
mailing list and attending the monthly meetings.

See http://sydney.pm.org/

All the best...

Raphael
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] [ot] Recursive web query to capture web database

2005-11-28 Thread Raphael Kraus
G'day Richard,

> I will like to use perl to write a web-based query to capture data 
> from a 
> government departments website.  
> 
> The site has tablular information similar to tax tables but only a small 
> portion of the total data is shown  at any give query, I would like to 
> capture the total.
> 
> Any suggestions 
> 

WWW::Mechanize :)

Raphael

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: ocaml vs python/ruby/perl etc. was [SLUG] Why not C

2005-11-28 Thread Raphael Kraus
Ok, I know I shouldn't bite - but I just have to...

> Read the note on usage again.
> Here it is in terms geeks will grok:
> "less" is used for continuous quantities,
> "fewer" is used for discrete quantities
> 
> less water, fewer drops.
> less population, fewer people.
> less traffic, fewer cars.

less water != fewer drops

-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] MondoRescue via NFS

2005-11-15 Thread Raphael Kraus
G'day...
 
Has anyone had any experience using MondoRescue over NFS?
 
I'm getting very frustrated with the documentation, as it is sparse in
the spots where I need more information. (E.g. "Press enter a few times
and it'll work" - no it doesn't, and there's no further description -
argh!)
 
Anyway, I've performed a back-up to an NFS server. How do I restore from
NFS? I've got another box here on which I want to test a bare-metal
restore.
 
Thanks!
 
Regards,
 
Raphael Kraus
Software Developer
Wild Internet & Telecom
[EMAIL PROTECTED]
02 8306 0007 Direct Line
02 8306 0077 Sales | 02 8306 0099 Fax
02 8306 0088 Support
02 8306 0055 Administration
1300 13 WILD (9453) National | 1300 88 WILD (9453) Fax



Wild Internet & Telecom, ABN 98 091 470 692
Finance - Ground Floor, 265/8 Lachlan Street, Waterloo NSW 2017
Sales - Level 16 , 1604/6 Lachlan Street, Waterloo NSW 2017
Telephone 1300-13-9453 |  Facsimile 1300-88-9453
http://www.wildit.com.au
DISCLAIMER & CONFIDENTIALITY NOTICE:  The information contained in this email 
message and any attachments may be confidential information and may also be the 
subject of client legal - legal professional privilege. If you are not the 
intended recipient, any use, interference with, disclosure or copying of this 
material is unauthorised and prohibited.   This email and any attachments are 
also subject to copyright.  No part of them may be reproduced, adapted or 
transmitted without the written permission of the copyright owner.  If you have 
received this email in error, please immediately advise the sender by return 
email and delete the message from your system.


--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] ctrl-alt-left ctrl-alt-right under VNC (OT?)

2005-09-28 Thread Raphael Kraus

G'day,

(Warning: some readers may consider this mail off-topic for the list!)

I'm sure I used to be able to do this (although I'm wondering now).  
Using gnome desktop, you can ctrl-alt-left and right between desktops. 
However, viewing the desktop via VNC this isn't possible (or at least 
not conveniently possible). (VNC client is running under WinXP.)


Does anyone know a work-around?

Thanks

Raphael.

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Your top-ten linux desktop apps

2005-09-27 Thread Raphael Kraus
G'day Gottfried and all...

> can someone suggest a good and fast(!) image browser for Linux? sth
> like acdsee for windows?
>
> i cannot try f-spot because my debian box cannot resolve some dependencies.


gThumb image viewer under GNOME is the most ACDSee-like...

# apt-get install gthumb

should do it!

All the best.

Raphael
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Presentation Mind Control - 21st September 2005

2005-09-22 Thread Raphael Kraus

G'day...


> > The words themselves were said with a smile and tongue-in-cheek. :)
>
> You have a bile sense of humour.

Green and dripping... yeah probably...

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: LVM and software RAID

2005-09-22 Thread Raphael Kraus

Matt,


> it unused.  Make partitions on the rest of both HDDs and mark them as being
> used for RAID (can't remember the exact wording, but you get to it by
> selecting the option which, by default, says "Ext3 filesystem").  Return to

I think you'll be surprisingly disappointed if you try this in Debian
Sarge's installer. :(



Raphael.

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: LVM and software RAID

2005-09-22 Thread Raphael Kraus

Matt,

Raphael Kraus wrote:

> So... you've done this in Debian Sarge...? Hrmm... and pray tell exactly
> how?

Probably a bit over-reactive here. Apologies.

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Presentation Mind Control - 21st September 2005

2005-09-22 Thread Raphael Kraus

G'day Oscar,

>> Have you grown too technical and too old to have a sense of humour
>> anymore?
> You've nothing to say constructive, so don't say it here.


The words themselves were said with a smile and tongue-in-cheek. :)

I think I did have something constructive to say - having a sense of 
humour and being able to relate to a sense of humour is an important 
part of our profession that is often lacking. (It challenges me when I 
get the grumps with some upstart on a mailing list!) ;) :)


(As an exercise, try smiling and listening to your boss - hard as it may
be sometimes, doing so will often mean he'll listen to you that time when
you think you need another Linux server.)


Anyway the point is that Paul J Fenwick and Jacinta Richardson make 
/significant/ /unpaid/ and /voluntary/ contributions to the Open Source 
community. (Really, you should follow their movements if you don't 
believe me!)


Paul's presentation was, as per usual, excellent. AFAIR, it is also a 
shorter version of the talk he is giving at the Open Source conference 
later this year.


It's a pity you weren't there to see it.

Raphael.

--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] LVM and software RAID

2005-09-22 Thread Raphael Kraus
G'day,

We're trying to set up a host with software RAID (mirroring) and LVM at
work (for a backup server).

Just trying to install Debian with RAID partitions is proving painful.

Anyone done this before? Any recommendations, tips, suggested methods?

I'm thinking I should install to one drive putting a plain boot
partition and then a LVM on it, leaving the second alone. Once install
is complete create the RAID with the second drive marked as failed. (I
can remember doing this a while back, but I think RAID has changed on
Linux now.)
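(Roughly the sequence I have in mind, sketched with mdadm - the device
names are placeholders and this is from memory, so treat it as a starting
point rather than a recipe:)

```shell
# Create the RAID1 mirror in degraded mode, with the first slot marked
# "missing" so the already-installed disk is left untouched for now:
mdadm --create /dev/md0 --level=1 --raid-devices=2 missing /dev/hdc1

# ...copy the installed system onto /dev/md0 and make it bootable, then
# add the original disk's partition so the mirror rebuilds onto both:
mdadm --add /dev/md0 /dev/hda1
```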

Sorry if I'm answering my own questions here. I've struggled with
Debian's installer, and seen my colleague spend even longer on it while I
was head-down in programming. Things seem to have clicked a bit more
since relaxing.

Look forward to seeing you all again on Friday week.

Raphael
-- 
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html