Re: [SLUG] Bi-directional rsync?

2013-05-13 Thread Daniel Solsona
Are you forced to use rsync? If not you can use csync2 which I think would
do the trick for you.
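(If you end up staying with rsync: it has no single bidirectional mode, but
running it twice from one server with --update - copy a file only if the
sending side's copy is newer - gets close, as long as the same file isn't
modified on both servers between runs. Hostnames and paths below are
placeholders only:

  #!/bin/bash
  # push anything newer from server 1, then pull anything newer from server 2
  rsync -v -rlt --update /var/www/uploads/ web2:/var/www/uploads/
  rsync -v -rlt --update web2:/var/www/uploads/ /var/www/uploads/

Deletions won't propagate this way; csync2 or unison handle true two-way
sync, conflicts included.)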


On 14 May 2013 09:16, DaZZa dazzagi...@gmail.com wrote:

 Learned ones!

 To overcome some crappy web design (two servers, load balanced, no
 shared storage), I need to implement rsync to synchronise a directory
 between two servers.

 Trouble is, it's got to be a two way sync.

 I.E. File uploaded to web server 1 (don't ask - I told you it was crap
 web design) needs to be available to download from web server 2 if
 necessary for future sessions - I.E. I've got to copy the file from
 server 1 to server 2 in short order (sub 5 minutes).

 I've ALSO got to do the same from server 2 back to server 1 - so if
 the incoming upload goes to server 2, it's got to be copied back to
 server 1.

 Currently, I've got rsync running on both servers every 5 minutes and
 synchronising files from 1 to 2, and from 2 to 1. I'd like to just do
 this in one process - run rsync on server 1, have it connect to
 server 2 and pull/push all files until the directories match.

 I don't know if this is even possible with rsync - and if so, I don't
 know the options required to make it work.

 The current command being used is

 rsync -v -rlt directories

 Can anyone suggest a better option set which could make this a two way
 sync by just running on the one server?

 Thanks.

 DaZZa


Re: [SLUG] sorry, you must have a tty to run sudo

2012-06-21 Thread Daniel Solsona
On 22 June 2012 09:14, li...@sbt.net.au wrote:





 how do I modify so I only pass directory names, NOT file names in $j

 thanks for any pointers

 *1-
 #!/bin/bash
  case $# in
  1)
  DOMAIN=$1
  ;;
  *)
echo Usage: $0 domain.name 1>&2
echo this archives $1  Maildirs 
echo all mail over 45 days is gzipped
exit
;;
  esac

 if [ -s /var/mail/vhosts/$1 ] ; then

 cd /var/mail/vhosts/$1

 for j in *@*
 do
  sudo -u#5000 /usr/local/bin/archivemail -d45 $j
 done

 exit
 fi


This should work:

for j in /var/mail/vhosts/$1/*/; do
    do_things_with "$j"   # or use "${j%%/}" if you don't want the trailing '/'
done

If you need to check if something is a dir, you can also do

[[ -d /var/mail/vhosts/$1 ]] && echo "It's a dir!"
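Or let find pick out the directories for you - a sketch using the same path
as your script:

find /var/mail/vhosts/$1 -mindepth 1 -maxdepth 1 -type d | while read -r j
do
 sudo -u#5000 /usr/local/bin/archivemail -d45 "$j"
done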


 echo no such domain $i
 exit
 -




Re: [SLUG] Open source log analyser tools (or: Alternatives to Splunk)

2012-05-09 Thread Daniel Solsona
Hi,

It really depends on what your needs are.

I don't know much about the fancy things from Splunk, but you can do some
cool things with logstash.

There is also a nice UI for logstash, Kibana
(https://github.com/rashidkpc/Kibana).


On 9 May 2012 17:04, Mark Walkom markwal...@gmail.com wrote:

 We're looking at 2G a day, which is AUS$30K a year. And we're on the S end
 of SME, so it's a hell of a lot.

 The only way we could cut this amount down would be if we wrote a
 custom parser that read the application logs, cut out all the
 replicated crap (mostly environment variable stuff) and spat out the logs
 to a separate dir for splunk to read.
 But then we need to deal with extra storage requirements.

 Again, when you are a small operation with a small budget, money rules.


Re: [SLUG] Open source log analyser tools (or: Alternatives to Splunk)

2012-05-09 Thread Daniel Solsona
With logstash you have an agent in each server. There you can configure
inputs, filters and outputs.

Inputs: file, syslog, etc
Filters: grep, regexp. Here you can do magic.
Outputs: file, elasticsearch, redis, amqp, etc (lots of possibilities)

Check the docs to see if there is any filter/output that works for you.

I reckon you can do what you need with logstash. But you can probably do
the same with some scripts.
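(For the really simple end of "some scripts", something like the pipeline
below covers "follow one log, keep the interesting bits, ship them to a
central box" - the host and file names here are made up:

tail -F /var/log/app.log | grep --line-buffered ERROR | \
    ssh loghost 'cat >> /var/log/central/app-errors.log'

logstash earns its keep once you want parsing, multiple inputs and searchable
storage rather than one pipeline per log file.)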
On May 9, 2012 5:26 PM, Mark Walkom markwal...@gmail.com wrote:


Re: [SLUG] Wiki software

2011-11-21 Thread Daniel Solsona
gitit.net also looks very good.

I'm using mediawiki myself, but I'll probably try gitit.

On 21 November 2011 9:17, onlyjob only...@gmail.com wrote:

 My favourites are dokuwiki (PHP) and ikiwiki (Perl).
 Neither uses a DB backend.
 Ikiwiki relies on git for some of its functionality, like another
 interesting one - git-wiki (Ruby).

 I'm sure you know about the wikimatrix.org web site where you can find
 heaps of information about many wikis.
 However at the moment I can't open this useful site :(

 Good luck - please let us know about your choice when you make it.

 Regards,
 Dmitry.

 2011/11/21 Marghanita da Cruz marghan...@ramin.com.au:
  Any opinions on the good, bad, ugly Wiki software?
 
  I'm playing with PHPwiki (because it is tiny).
 
  Marghanita
  --
  Marghanita da Cruz
  http://ramin.com.au
  Tel: 0414-869202
 
 


Re: [SLUG] dyndns address to CNAME ?

2011-08-09 Thread Daniel Pittman
On Mon, Aug 8, 2011 at 19:03, Voytek Eymont li...@sbt.net.au wrote:

 dumb DNS Q:
 what's a way to have a dyndns name mapped to a host within own domain.tld ?

Either pay them for their custom domain support, or have your domain
expose a CNAME that points to their dynamic name.

 can I map a dyndns host to a CNAME as a hostname on mydomain.tld ?

CNAME is like a symbolic link, and it works the other way around: you
point to the dyndns host from the CNAME on your own domain.

 so, I have a machine at home with not-fixed ip, on dyndns, as
 'myhomepc.dyndns.com'

 if I set a bind record as:

 myhomepc     CNAME   myhomepc.dyndns.com.

 to have myhomepc.mydomain.tld

 that works, BUT, will that work once ip changes, will that propagate to my
 CNAME...?

So, again, symlink: the way a name is resolved is that when you
hit a CNAME you read its target, and then go ahead and start the
process from scratch with the new name.  So, the end user system
pretends that they typed in myhomepc.dyndns.com. instead of
myhomepc.your.domain. and all.

Make sense?  Any IP changes matter only for the target name, and the
CNAME is more or less unrelated.
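You can watch it happen with dig once the record is in place (using your
example name; the TTLs and address shown here are invented):

  $ dig +noall +answer myhomepc.mydomain.tld
  myhomepc.mydomain.tld.  3600  IN  CNAME  myhomepc.dyndns.com.
  myhomepc.dyndns.com.    60    IN  A      203.0.113.27

The A record always comes fresh from the dyndns servers, so your zone never
needs touching when the home IP changes.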

Daniel
-- 
♲ Made with 100 percent post-consumer electrons


Re: [SLUG] Log for timed out SSH connections

2011-03-20 Thread Daniel Pittman
On Sun, Mar 20, 2011 at 21:41, Simon Males s...@sime.net.au wrote:

 We have just switched Internet connections (from ADSL2 to [SH/B]DSL)
 and I'm finding that SSH connections to the Internet are timing out.

I bet whatever device is doing NAT or firewalling on the outside of
your network is dropping the idle connection; turning on TCP or
protocol level keep-alive messages in SSH will probably solve your
problem.
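For the protocol-level version, something like this in ~/.ssh/config on the
client is the usual fix (the interval is a guess - anything comfortably under
the firewall's idle timeout will do):

  Host *
      ServerAliveInterval 60
      ServerAliveCountMax 3

or as a one-off: ssh -o ServerAliveInterval=60 host.  The TCP-level knob is
TCPKeepAlive yes, but stateful middleboxes usually need the application-level
pings above, since TCP keepalives default to two hours.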

 Normally SSH sessions could be open all day. I've logged a fault with
 the provider and they are looking into it.

Odds are, if they control the firewall, they will be unwilling to fix
this on their end.  Not that I am bitter or anything.

 I'm wondering if I can get some timings from SSH to look for any
 patterns. Though a timed out session and a nicely exited one look all
 the same in auth.log

I would grab a tcpdump on both ends of a session that fails, and
determine what is being seen by both ends.  That should conclusively
prove or disprove my thesis.
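Something along these lines on each end, filtered down to the session in
question, is usually enough (interface and hostname are examples):

  tcpdump -i eth0 -w ssh-timeout.pcap host other.end.example.com and port 22

If the client keeps retransmitting packets the server never sees, whatever is
in the middle has dropped the connection state, and keep-alives are your fix.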

Daniel
-- 
⎋ Puppet Labs Developer – http://puppetlabs.com
✉ Daniel Pittman dan...@rimspace.net
✆ Contact me via gtalk, email, or phone: +1 (503) 893-2285
♲ Made with 100 percent post-consumer electrons


Re: [SLUG] Which Virtualisation, and why?

2011-01-12 Thread Daniel Pittman
On Wed, Jan 12, 2011 at 16:11, onlyjob only...@gmail.com wrote:

 Regarding virtual loopback, it seems that in standard builds of
 the VServer-enabled 2.6.32 kernels available from the Debian repositories this
 problem does not exist. I'm not too sure, but I don't remember
 experiencing it.

It was definitely there in the stable releases of Debian before Lenny,
when they presumably decided the experimental VServer patches were
sufficiently stable or whatever.  Anyway, nice they have solved it. :)

 Besides it is possible to change localhost address in /etc/hosts

...but not to remap 127.0.0.1, or easily create another interface
named lo, or bind to 0.0.0.0, all of which a surprisingly large number
of packages assumed would always be present and operational in Debian
(and, for which, Ubuntu is generally worse. :)

 Absence of network virtualization in VServer is deliberate and for a
 good reason.

I can't find much useful information on their website about
why, but Wikipedia claims that this is based on isolation rather than
virtualization to avoid overhead, which seems relatively bogus to me
given the in-kernel network namespace support is pretty much directly
isolation based.

Do you know of a good source of information on the VServer side?  I am
curious to know what the technical differences are (and if that is,
indeed, the correct argument from their side) so I can better
understand what the trade-off here might actually be.

[...]

 I don't see how kernel support is relevant to RHEL upstream - a year ago
 there was no OpenVZ support for 2.6.32 whatsoever. And frankly this
 was one of the reasons I've chosen VServer for a machine hosting
 around 20 VMs.

...er, OK.  So, the use of the RHEL kernel is relevant because RedHat
invest substantially in stabilizing the kernel and backporting newer
drivers to it.  This means that unlike the Linus 2.6.18 kernel (for
example) you can get support for modern Intel NICs and other
controllers, so it doesn't suffer nearly the trouble running on modern
hardware that the version number might suggest.

Given that, in many cases, security and driver support are the primary
motivators for using a newer kernel this can, indeed, help with
(though not universally solve) the issue that the OpenVZ kernel
version lags the upstream kernel version available in distributions.

 Obviously 2.6.32 has a number of important features, notably KSM, which
 makes a lot of sense for a virtual host, and also ext4.

Er, as of 2.6.36 the KSM feature still scans only memory specifically
registered to it by the application.  So far as I can see the VServer
patches don't do anything special to mark memory as possible to share
for code running in them, and pretty much nothing other than KVM/qemu
does out in user-space, so I wouldn't have thought that KSM made much
difference.

As to ext4 ... *shrug*  I think that is a religious debate that would
distract from the real discussion at hand; I regard anyone running
ext4 on a production system as vaguely strange since it is still so
young and experimental compared to more tested code but, obviously,
you don't share that reluctance to move. :)

[...]

 *more performant: I agree with you that the difference in network
 performance between VServer and OpenVZ is not terribly big.
 Perhaps it can be manifested with some sort of artificial testing.
 However here I was quoting Herbert Poetzl (the VServer developer).
 While the performance difference is not too big there is another thing
 which I believe is equally important - simplicity. If the same result
 can be achieved more easily, without even a little virtualization overhead, it
 is certainly better, more maintainable, probably has fewer bugs etc.
 Simplicity matters.

*nod*  Part of my view on the subject is that VServer made some bad
technical decisions that kept their kernel code simple in exchange for
adding a huge amount of complexity to every container; part of that
(lo virtualization) they have obviously decided can be corrected these
days.

So, I agree, but I think that whole system complexity is a much more
important metric than just kernel complexity.  (OTOH, I also think
that a bunch of the OpenVZ bits – like UBC – are disasters of
complexity and I am very glad they will not make it in to the mainline
kernel. :)

Anyway, thanks for sharing your experiences and discussing this.  I
like to talk through these issues so I can better understand where
things stand – and I already learned some useful stuff I didn't know
from you. :)

Regards,
Daniel
-- 
✉ Daniel Pittman dan...@rimspace.net
⌨ dan...@rimspace.net (XMPP)
☎ +1 503 893 2285
♻ made with 100 percent post-consumer electrons


Re: [SLUG] Which Virtualisation, and why?

2011-01-12 Thread Daniel Pittman
On Wed, Jan 12, 2011 at 20:10, dave b db.pub.m...@gmail.com wrote:

 Also has anyone looked at LXC :P ?

In my previous job, where we did the other bits, we did some testing
on developer systems; our conclusion was that LXC was at least a
couple of years from being useful in the real world based on a pretty
much endless collection of shortfalls and bugs in testing.

My very strong hope is that it will stabilise and one of the
implementations built on the kernel tools (because the libvirt LXC and
plain LXC projects are entirely different user-space code) will become
the standard for doing this inside Linux.

 Having run identical kvm guests, I found that KSM actually wasn't that
 much of a benefit as a 'cpu user' (while it doesn't seem to use much
 ... potentially the cpu could down clock to save power instead of
 running KSM).

We never saw much benefit, and were pretty happy that we got better
memory use overall from OpenVZ containers rather than KVM machines
even where they were identical.

Regards,
Daniel
-- 
✉ Daniel Pittman dan...@rimspace.net
⌨ dan...@rimspace.net (XMPP)
☎ +1 503 893 2285
♻ made with 100 percent post-consumer electrons


Re: [SLUG] Which Virtualisation, and why?

2011-01-11 Thread Daniel Pittman
On Tue, Jan 11, 2011 at 16:24, onlyjob only...@gmail.com wrote:

 No, no, please not OpenVZ. It is certainly not for beginners.
 Better use VServer instead.
 I used both, first OpenVZ (but was never really happy with it) and then 
 VServer.

Have VServer added network virtualization yet?  Last time I used it
they hadn't, so your containers didn't have, for example, the loopback
interface, or a 127.0.0.1 address they could use.

That made for a constant, ongoing pain in the neck compared to OpenVZ
which *did* look like that.  Every single distribution package that
assumed, for example, that it could talk to 'localhost' would do the
wrong thing.

Ah.  I see the experimental releases do add support for a virtualized
loopback adapter, along with IPv6, which is nice, and probably
addresses my biggest operational issue with VServer.

 There are number of benefits of VServer over OpenVZ:

 * GPL License

http://openvz.org/documentation/licenses
The OpenVZ software — the kernel and the user-level tools — is
licensed under GNU GPL version 2.

It is also notable that a bunch of the upstream, in-kernel code *is*
from OpenVZ, including a bunch of the namespace support that underpins
the LXC implementations and, these days, OpenVZ itself.

Can you tell me where you got the impression that OpenVZ was not GPL?

 * Better kernel support:
 OpenVZ kernel 2.6.32 became available only recently.
 VServer has supported 2.6.32 for a while - much, much longer. OpenVZ's
 adoption of new kernels is quite slow - perhaps just too slow...

FWIW, because their upstream kernel is based on the RHEL kernel
releases, we often found that they had sufficiently recent drivers
despite the older core version.  This is a genuine drawback, however,
and makes it hard to have upstream support if you are not using RHEL
as your base system (eg: Debian, Ubuntu.)

Er, also, am I looking at the right place?  I went to check out the
feature equivalent stuff because I am quite interested in keeping
up, and the linux-vserver site tells me that the latest stable release
is vs2.2.0.7 for 2.6.22.19 – they have an *experimental* patch for
2.6.32, but I presume there must be some other stable release for the
.32 series or something?

[...]

 * more performant:
  Linux-VServer has no measurable overhead for
  network isolation and allows the full performance
  (OpenVZ report 1-3% overhead, not verified)

Our measurements show pretty much identical performance cost for
either tool, FWIW, and we generally found that either of them was able
to exhaust the IOPS or memory capacity of a modern server well before
they could make a couple of percent of CPU overhead matter.  (KVM-like
tools were far worse for this, of course, because of their increased
IO and memory overheads.)

[...]

 * Easier.

I don't agree here: other than the more modern set of front ends (like
Proxmox) for OpenVZ, I never found there to be a detectable difference
in tools overheads between VServer and OpenVZ, and OpenVZ was actually
a bit easier to fiddle with outside the rules and all.

Regards,
Daniel
-- 
✉ Daniel Pittman dan...@rimspace.net
⌨ dan...@rimspace.net (XMPP)
☎ +1 503 893 2285
♻ made with 100 percent post-consumer electrons


Re: [SLUG] Upgrading OS RAID

2011-01-09 Thread Daniel Pittman
On Sun, Jan 9, 2011 at 14:05, Kyle k...@attitia.com wrote:

 it appears one of the disks in my s'ware RAID is failing. So I've come to
 SLUG for some consensus and confirmation.

 1. How do I go about rebuilding the RAID with ALL brand new disks (obviously
 no longer the same disks, but now newer-spec, larger disks) such that I not
 only don't lose the data but also don't have to rebuild the whole machine again?

Replace the disks one at a time, allowing the rebuild to fully
complete between each disk.  Once you have the entire array running on
the larger disks, mdadm can increase the size of the array, and you
can resize your file system inside it.  (...and check you *can* do
that before you go to the trouble.)
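Per disk, the shape of it is roughly this - array and device names are
examples only, so check yours in /proc/mdstat first:

  mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1   # retire the old disk
  # physically swap the disk and partition it, then:
  mdadm /dev/md0 --add /dev/sdb1                       # starts the rebuild
  watch cat /proc/mdstat                               # wait for it to finish

and only after every member has been replaced:

  mdadm --grow /dev/md0 --size=max
  resize2fs /dev/md0        # assuming ext3/ext4 directly on the array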

My advice is to read the mdadm manual page to find out how; if that
doesn't make sense, don't tackle this replacement yet, but keep
learning until it does make sense - because a slip here can cost you
all your data.

It is highly advisable to have good backups, though, while doing this.
 The most likely time to discover that you have a bad block on a
second disk is during a rebuild, and that could result in the array
going away.  This is not unrecoverable, but it is hard. :)

Obviously, 'create a new array and copy the data' works too, and isn't a bad
choice if you can manage it.  Practical issues like space in the case
might make it hard though.

 2. I'm better off sticking with Linux s'ware RAID rather than setting up a
 m'board BIOS supported RAID aren't I?

Absolutely.  Unless you have a real hardware RAID card, use Linux
software RAID.  Even then it isn't a bad choice. :)  The built-in
stuff on your motherboard isn't hardware RAID, but software RAID with
a bit of BIOS glue to support booting from it.

 3. It's been a while since I delved into h'ware etc. So SATA II disks will
 simply plug into, and function correctly, SATA plugs, yes or no? Or are we
 now at a stage where I also have to worry about whether or not the m'board
 will actually support the disks I want to put in?

Yeah, it should be just fine.

Regards,
Daniel
-- 
✉ Daniel Pittman dan...@rimspace.net
⌨ dan...@rimspace.net (XMPP)
☎ +1 503 893 2285
♻ made with 100 percent post-consumer electrons


Re: [SLUG] website questionnaire software?

2011-01-06 Thread Daniel Pittman
On Thu, Jan 6, 2011 at 19:40, Sonia Hamilton so...@snowfrog.net wrote:

 Can anyone recommend software for setting up website questionnaires?

Google Docs.  Seriously, their form stuff is pretty reasonable, and
gives you a spreadsheet - which is often how folks actually want that data
in the end.  Otherwise, SurveyMonkey and a few other services will do
an OK job of presenting this.

Regards,
Daniel
-- 
✉ Daniel Pittman dan...@rimspace.net
⌨ dan...@rimspace.net (XMPP)
☎ +1 503 893 2285
♻ made with 100 percent post-consumer electrons


Re: [SLUG] DD images and checksums

2010-12-22 Thread Daniel Pittman
On Thu, Dec 23, 2010 at 13:07, Robert W robertw...@gmail.com wrote:

 I've searched the Ubuntu forums for an answer and couldn't find one, so I
 figured I'd ask here instead.

 If I use DD to back up my USB drive to a file, should I expect the checksum
 of the image file created to be the same as the checksum produced by the
 device (for instance, /dev/sdb)?

 I've been getting different checksums and I wanted to know if it was because
 of an error writing the image file, or because of something else.

Assuming that you are not *using* the USB device at the time, you
should get the same values; I often use that to verify that a copy of
a disc is correct.
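For example (device name assumed, and nothing should mount or write to the
stick in between):

  dd if=/dev/sdb of=usb.img bs=4M
  md5sum /dev/sdb usb.img      # the two sums should match

If they differ, the usual suspects are the desktop auto-mounting the stick
between the copy and the checksum, or a size mismatch - compare the image
size against blockdev --getsize64 /dev/sdb.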

Regards,
Daniel
-- 
✣ Daniel Pittman            ✉ dan...@rimspace.net            ☎ +61 401 155 707
              ♽ made with 100 percent post-consumer electrons


Re: [SLUG] Converting a Hard Drive to a Virtual Machine

2010-11-18 Thread Daniel Pittman
david da...@kenpro.com.au writes:

 I have an Ubuntu box running which I would like to be able to clone into a
 virtualbox VM.  I don't want to shut the hardware down, or play with it too
 much because it's a live server.

 Is it possible to use MondoRescue or some other software to clone the server
 hard drive, preferably without shutting it down, and then create a virtual
 machine from the resulting image?

Not robustly, for cloning the raw disk, because you will end up with a really
messed up file system.  If you could stop file system access for a few hours
it would work, but that has ... probably the same issue. :)

What you *could* do is boot a LiveCD in the VM, and use rsync or something to
clone the files into the new machine.  Then, you can keep the block level
consistent and only risk file level oddities, which are much less nasty.
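A rough sketch of that approach, run from the LiveCD booted inside the VM
after you have partitioned, formatted and mounted the new virtual disk at
/mnt (the hostname and exclude list are illustrative):

  rsync -aAXH --numeric-ids \
      --exclude={/proc/*,/sys/*,/dev/*,/tmp/*,/mnt/*} \
      root@liveserver:/ /mnt/

Afterwards chroot into /mnt, reinstall the bootloader and adjust /etc/fstab
for the new disk before rebooting the VM.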

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


XecureBrowser - looks like snake oil to me. (was Re: [SLUG] Browsers for banking)

2010-11-10 Thread Daniel Pittman
Mada R Perdhana mrp@gmail.com writes:

How interesting.  It looks pretty much like snake-oil, a scam intended to
scare folks who don't know much about security, to me.


The problems start with their lack of presence: the main bits of presence
are a FaceBook page, a twitter account, and a Yahoo Group with barely
coherent writing about their content.

They do, though, do the scam-focused thing: waffle vaguely about security
issues, claim (but not prove) they are more secure, then tell you that you
are a bad person if you don't instantly convince your friends to use their
software.


They do have an email address, apparently attached to some Google Apps
hosting, and a website with links to their 2008 security forum, and a copy of
the same information about security (eg: none) as their FaceBook page
provides.


They start with the *technical* issues by claiming that techniques of
cracking the SSL implementation are widespread, but provide no evidence about
what those techniques are - or why they are, for example, not being widely
reported since that would be huge security news.

If we generously assume that they mean that attackers are running software on
your machine to intercept content *without* having to violate the
cryptographic security of the SSL/TLS protocol then they have a huge burden of
proof in the form of demonstrating their software actually does anything.

Which, of course, they don't deliver.


Meanwhile, if we look to their writing on the facebook page they have some
excellent advice for you: you can keep the software safe by keeping the
original zip file around, and if you ever have a doubt (sic) you can just
extract the executable again.

Because, y'know, an attacker would never, ever think of being able to attack a
bit of software every time it ran, or to fiddle with an executable inside a
zip file.  That would be, y'know, hard!



They also explain that in the next couple of versions they will be working to
fix security problems like hijacking of your laptop - so, y'know, if this
issue has not been addressed in this version then, hey, apparently our
generous assumption earlier was inaccurate.

They *can't* be claiming that they secure the system against local attacks,
leaving *only* that these hackers are breaking the SSL/TLS protocol.  Oh,
well...


Their public posts don't help, either.  The top hits contain claims like this:

As you know, break-ins money can through hypnosis, ATM card fraud, and
phishing. Phishing is a cunning technique to obtain sensitive information
while transacting through Internet Banking. They stole your information
such as the username, password, credit card numbers and so on-depending on
the form of phising

I know that one of my huge security concerns, which a secure web browser could
help with, is that I might be subject to hypnosis or ATM card fraud!  Those
damn hackers and their hypnotic virus powers!


So, MRP: this looks convincingly like something that is at best snake-oil, and
at worst outright fraud.  Care to respond?

Daniel


 Try XecureBrowser, it's a browser design for ibank transaction,
 protect from ssl injection or anything which relate with ibank crime
 type.

 regards,
 mrp

 On 11/10/10, Jeremy Visser jer...@visser.name wrote:
 Jim Donovan said:
 Commonwealth opens extra windows but only logs off in one of them;
 you have to close the others by hand. Not that they will work after
 logoff but it's lousy security.

 I don't know what browser you use, but in Chromium I just typed
 'netbank.com.au', logged in, and not a single browser window was opened.
 The NetBank interface just opened in the same browser window.




 -- 
 Linkedin : http://id.linkedin.com/in/mrpbpp
 PGP ID : 0xDC3A483A
 PGP Fingerprint : FCBE 697C 3C47 89D2 C28F  6C94 E607 7E99 DC3A 483A
 See http://www.keyserver.net or any PGP keyserver for public key

 Never Trust an Operating System You don't have the Source for...
 Closed Source for device Driver are ILLEGAL and not Ethical... act!
 Isn't it, MS Windows a real multitasking OS?, Why? 'Cause It can boot and
 crash simultaneously!

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] Re: XecureBrowser - looks like snake oil to me.

2010-11-10 Thread Daniel Pittman
Voytek Eymont li...@sbt.net.au writes:
 On Thu, November 11, 2010 11:38 am, Daniel Pittman wrote:

 It almost sounds like you are trying to disagree with me here, which seems
 strange, given that the issue in the article was that it turns out people
 with low levels of technical experience are vulnerable to social
 manipulation.

 I can't quite tell if you are disagreeing or not, though, which is
 strange. :)

 agree with you, and, appreciate your concise summary of the XecureBrowser,
 likewise, I followed up the XecureBrowser link.  yes, that's what I meant,
 here's another example

Ah.  Sorry for the confusion, then. :)

 sorry for not making it more obvious.

I don't know I would, personally, assume that it was your side that slipped
here.  I am happy to say that this is not my most ... focused of days. ;)

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


[SLUG] Re: XecureBrowser - looks like snake oil to me.

2010-11-10 Thread Daniel Pittman
Mada R Perdhana mrp@gmail.com writes:

 I think, it is too careless if this is just a scam, because the developers
 also threw a request to the public (the information security community) to
 perform tests on their application.

This is a pretty certain sign of ... well, not a scam, but a sign that this is
snake oil - something that doesn't really do what it claims.

The most important thing it is a sign of is an effort to prove
that something is secure by demanding other people take the trouble to prove
it is *not* secure.

Which fails disastrously: if no one in the security community actually
bothers to test it, is it secure, or just untested?

It also fails disastrously because, at best, it shows that one particular set
of people, if they actually bother, can't break it.  That proves *nothing*
about the ability of other folks to do so.


This /sounds/ like proof of security, but isn't (even if done as intended),
which is a classic sign of snake oil.

 from the existing web site (https://www.xecureit.com/xb/), we can also see that
 they have an affiliation with ISACA and CISSP certification, which in my
 personal opinion it is too reckless to drag these two big names into, since
 it would provoke a big reaction from the information security communities.

No, there wouldn't, for several reasons.  The most important one is that
offering CISSP training has nothing at all to do with the secure browser
they are offering, and they make absolutely no claim that it is connected.

The second, and pretty much equally important reason, is that the information
security community doesn't really give a damn about a fight between Cisco and
some tiny little company over the inappropriate use of a Cisco certification.


Now, you might make an argument that they were trying to conflate the presence
of those things with any sort of actual security of the product - which would
be supporting evidence that they were selling snake oil, not evidence against.


 Maybe some of the security experts in here could also do some tests with that
 thing, to prove whether XB is just a scam or it really works to secure ib
 transactions.

Why on earth would I spend my time trying to prove something like that, rather
than just recommend things that are known and understood to work?

You seem to be assuming that the burden of proof is on the Internet security
community to prove that this is a bad thing.  That isn't really how actual
security stuff works: there, the burden of proof is on the claimant.

If you want us to believe that XB is worth something, prove it.  Show the
proof that it actually, measurably improves user security.


 anyway, again.. everything returns to the user, to determine which is the
 most secure (or convenient?) way to conduct ib transactions.

Are you trying to argue that popularity is a good way to identify the security
of a product?  Didn't FireSheep show us that was ... hard to support?  (Not to
mention that we have decades of other proof that security is not a pressing
concern for users, but whatever. :)


...and, frankly, that claim only serves to convince me that the term you want
is 'sucker', not 'user', here: the audience are people who are convinced that
there is some secret security sauce in the product without actually
understanding anything about why it might make them more secure.

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] chroot - landing/ cd'ing into home dir?

2010-11-02 Thread Daniel Pittman
Zenaan Harkness z...@freedbms.net writes:

 This has stumped me for a few days now.  When I chroot into my chroot dir, I
 am always started at the / (root) directory.

Uh-huh.

 Just this morning I re-read the bash man page, and finally inserted cd
 $HOME at the start of /etc/profile.

That may surprise various (poorly written) scripts out in the wild which make
assumptions about where they start; putting it in the per-user script would
have likely been a better choice.

 Is there a more elegant way?

Not really, no.

 Eg, should I be trying to run /bin/login or something instead of bash?

It wouldn't hurt, I guess, but in essence the chroot thing assumes you know
what you mean to do, so unless the contained code DTRT you get this default
location.  There isn't any more blessed or correct way to do what you want
inside the chroot.
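If you do want the login-ish behaviour without touching /etc/profile, invoking
a login shell for a user that exists inside the chroot does it - the path and
user name here are just examples:

  chroot /srv/myjail /bin/su - someuser

su - runs the user's login scripts and lands in their home directory, the
same way a normal login would.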

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] Is Windows XP much faster under hardware virtualization?

2010-10-31 Thread Daniel Pittman
Jeremy Visser jer...@visser.name writes:
 Jon Jermey said:

 I'm currently running Windows XP for work in VirtualBox under Mint, and
 it is usable but sluggish. I'm in the market for a new PC and one of the
 options is to get one with hardware-assisted virtualization. Can anyone
 comment from personal experience on whether this will make a) a
 spectacular difference; b) a moderate difference; c) no difference at
 all to the speed of virtualized XP?

 From personal experience, I can say that I cannot notice any (perceptible)
 performance difference whatsoever in VirtualBox whether hardware
 virtualisation is turned on or off. None at all.

Almost no one will - the performance difference is real, but mostly shows up
in memory bandwidth, page-mapping performance, and context switching
performance.

That means you need to be doing something that pushes one of those to the
limit *without* hitting other limitations of the VM to actually see any
change.

[...]

 Are there different types or levels of hardware virtualization available
 off the shelf, or it is one-size-fits-all?

 Well there’s Intel VT-x and there’s AMD-V, which are the duopoly’s
 equivalents. Both are supported by VirtualBox, VMware, KVM, etc.

 Personally, I think that if you buy a new PC with hardware virtualisation,
 the performance benefit you will see will be coming from the faster hardware
 more than the VT-x/AMD-V support.

*nod*  Also, keep in mind that one of the biggest factors in VM performance is
going to be I/O for most users.

That means that the performance of your paravirtualized devices is the key for
getting better performance - and that usually just means picking a VM solution
with appropriate guest drivers and all.

(Unless you plan on mapping physical hardware into the VM, in which case VT-d
 or the AMD equivalent makes a difference.)

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] Is Windows XP much faster under hardware virtualization?

2010-10-31 Thread Daniel Pittman
justin randell justin.rand...@gmail.com writes:
 On 1 November 2010 09:16, Daniel Pittman dan...@rimspace.net wrote:

 Are there different types or levels of hardware virtualization available
 off the shelf, or it is one-size-fits-all?

 Well there’s Intel VT-x and there’s AMD-V, which are the duopoly’s
 equivalents. Both are supported by VirtualBox, VMware, KVM, etc.

 Personally, I think that if you buy a new PC with hardware virtualisation,
 the performance benefit you will see will be coming from the faster hardware
 more than the VT-x/AMD-V support.

 *nod*  Also, keep in mind that one of the biggest factors in VM performance 
 is
 going to be I/O for most users.

 That means that the performance of your paravirtualized devices is the key 
 for
 getting better performance - and that usually just means picking a VM 
 solution
 with appropriate guest drivers and all.

 (Unless you plan on mapping physical hardware into the VM, in which case VT-d
  or the AMD equivalent makes a difference.)

 yep, I/O is normally a killer. at work, all dev machines have (at least) two
 physical drives, so VMs can be given a disc separate from the host OS. we
 find that's the simplest, best bang-for-buck way to get good VM performance.

Only if you have PV disk drivers - performance with the emulated hardware,
even SAS and SCSI controllers, is generally way lower than the PV stuff
because of the overheads of emulation.[1]

FWIW, using KVM we reproduced the widely reported native performance using
direct I/O and an LVM slice results; the latter should work for any raw block
device, not just LVM.  (Which matches your comment, of course. :)
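For reference, the qemu/KVM incantation for that kind of setup looks roughly
like this - the VG/LV names are examples, and cache=none is what gives the
direct I/O behaviour mentioned above:

  qemu-kvm -m 2048 -smp 2 \
      -drive file=/dev/vg0/guest-disk,if=virtio,cache=none \
      -net nic,model=virtio -net user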

Daniel

Footnotes: 
[1]  Really good non-PV drivers and good hardware can reduce it to the
 equivalent of only a few transitions into kernel mode, but they can't be
 as efficient as the one or less that PV drivers manage.

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] An apt-get question.

2010-10-28 Thread Daniel Pittman
wbenn...@turing.une.edu.au writes:

 I don't understand this.  I have the original download file from Freshmeat
 on my desktop.  The file is CptCrosswords-1.2.Linix.x86.tgz

[...]

 [I know I know. I can probably use this, but I'd still like to know which
 memory hole the .tgz file fell in.]

At a guess your web browser and/or their web server colluded to notice that it
was compressed, and to automagically decompress the gzip bit for you, with
only the tar part left behind.

(My next guess is that your web browser hates you deeply, and removed the gz
 extension without decompressing it. :)

 I intend to use
 ./configure && make && make install

 Will the fact that its a tarball make any difference, or will I have to
 de-tar it first?

Well, you need to extract the files from the tar archive, yes.

Perhaps this will be made clearer with a little explanation, and please
forgive me if this is obvious to you:

A tar file is a single stream of bytes that holds a whole lot of files and
their metadata (permissions, directory, etc), one after another.

Unlike the zip format and other things popular on non-Unix platforms this is
just the files - no compression or anything.

To compress these for transmission most Unix folks use a compression filter,
which takes one stream of bytes (the tar file) and outputs another (the
compressed tar file).  gzip and bzip2 are popular here.

So, a 'tgz', or 'tar.gz', file means:

  A set of files ...
...wrapped into a single stream using tar...
  ...fed through the 'gzip' compression filter...
...and handed to you.

So, if something removed the gz part and left only the tar part you still need
to unpack the tar bits.

(In 'tar xfz foo.tar.gz' the 'x' means extract, the 'f' means from a file, and
 the 'z' means decompress the gzip part.  Omit that, for only 'xf', and you
 will unpack a tar file that is not compressed.)
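In practice, for the file sitting on your desktop, that means something like
the following (the last line assumes the tarball unpacks into its own
directory and really does ship an autoconf build, as your plan suggests):

  file CptCrosswords-1.2.Linix.x86.tgz    # tells you whether it is still gzipped
  tar xf CptCrosswords-1.2.Linix.x86.tgz  # recent GNU tar detects compression itself
  cd CptCrosswords*/ && ./configure && make && sudo make install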

Regards,
Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] debmirror: gpgv not finding debian key

2010-10-15 Thread Daniel Pittman
Zenaan Harkness z...@freedbms.net writes:

 I had this working 3 weeks or so ago, but can't figure out what's gone
 wrong.

At a guess the key changed; I know they were rotating the Debian archive keys,
though I thought it was going to be longer...

[...]

 # cd $GNUPGHOME  ls -l
 total 76
 -rw--- 1 justa justa  5735 2010-10-15 16:47 pubring.gpg
 -rw--- 1 justa justa  5735 2010-10-15 16:47 pubring.gpg~
 -rw--- 1 justa justa 0 2009-10-19 18:46 secring.gpg
 -rw--- 1 justa justa  1200 2010-10-15 16:47 trustdb.gpg
 -rw--- 1 justa justa 23690 2009-10-19 18:52 trustedkeys.gpg
 -rw--- 1 justa justa 23690 2009-10-19 18:52 trustedkeys.gpg~

apt and friends use a private keyring; use apt-key to maintain it.

If you import that key into the apt-key store your problem should be resolved.
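For example, with the new archive key saved to a file (the filename is a
placeholder):

  apt-key add new-debian-archive-key.asc
  apt-key list                # confirm it is now in apt's keyring

If it is debmirror's gpgv check that is complaining, note that gpgv reads
$GNUPGHOME/trustedkeys.gpg by default, so importing the key there directly
also works:

  gpg --no-default-keyring --keyring trustedkeys.gpg --import new-debian-archive-key.asc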

Regards,
Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] On buying a laptop.

2010-10-14 Thread Daniel Pittman
Tony \H.G\ Candito blindra...@gmail.com writes:
 On Thu, Oct 14, 2010 at 2:44 PM, Luke Yelavich them...@themuso.com wrote:
 On Thu, Oct 14, 2010 at 02:41:39PM EST, Luke Yelavich wrote:

 Coming from a point of view of audio stack maintainer in a distro, I'd say
 stay away from Toshiba. There have been many issues surrounding hda audio
 and Linux in the past few years that myself and colleagues have tried to
 help sort out. I also believe the kernel maintainers for Ubuntu also
 recommend users stay away from Toshiba. I can't give any more information
 than that, but I personally trust the kernel maintenance team of Ubuntu
 when they suggest that people look at other manufacturers.

 I should add, that such audio issues were hard to diagnose, due to some
 ACPI oddities. I could dig for the bug refs if people are really
 interested.

 OK, so now that we've learnt what isn't safe, what is?

ThinkPad, mostly.  Their very bottom end stuff is really IdeaPad inside,
which is not the same quality as the ThinkPad hardware.  See
http://www.thinkwiki.org/ for details on what does or doesn't suck.

Apple, after six months.  Not because of OS-X, mind you: because there are
about two dozen types of mac hardware *total*, compared to however many
thousands of combinations of non-mac laptop hardware, you can be pretty
confident that it will all be fairly rapidly supported, and solid once it is.

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] On buying a laptop.

2010-10-13 Thread Daniel Pittman
wbenn...@turing.une.edu.au writes:

 So, I wish to buy a laptop.  Choice suggests the Toshiba Satellite LP500.

 I recall some time ago a discussion on whether the AMD processors were
 better at Linux than those of Intel. As I also recall, majority opinion was
 that there was 0 in it. Is this still the case?

They are pretty much equivalent, other than the basic performance stuff that
any benchmark highlighting CPU performance will show you.[1]

 Has anyone had experience (+ or -} with the Toshiba that's worth airing?

Their warranty support was a PITA when I had to deal with them.  If that
matters to you, another vendor might be a better choice ... but the only
vendors I have dealt with who didn't suck were IBM (ThinkPad, and Lenovo have
been much more ... mixed), and Apple.

Other folks I know say Dell service was good, though my experience wasn't.

Daniel

Footnotes: 
[1]  That is: don't just compare the speed, compare the performance.

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] Greylisting Sendmail

2010-10-12 Thread Daniel Pittman
Nigel Allen d...@edrs.com.au writes:

 Customer of mine currently has their own in-house mail server. Machine runs
 CentOS 4.8 with Sendmail, spamassassin and mimedefang. RBL's are covered in
 sendmail.mc with spamhaus and SORBS.

 Can anyone think of any reason why this combination would not play nice if
 they also implement greylisting?

Nope.

 Any gotchas anyone is aware of?

Some providers, typically big ones (*cough* Telstra, Optus) tend to play
poorly with greylisting: they have a separate pool of senders if the first
attempt is not accepted, and can delay up to 24 hours before sending.[1]

Some greylisting tools have a set of exceptions for dealing with this, others
don't.  The latter ... hurt more. :)

Daniel

Footnotes: 
[1]  Well, could, when last I managed a system with greylisting, about nine
 months ago.

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: Subject: [SLUG] change to linux and google chrome

2010-10-10 Thread Daniel Pittman
Martin Visser martinvisse...@gmail.com writes:
 On Sat, Oct 9, 2010 at 6:30 PM, Brett Mahar brett.ma...@gmail.com wrote:

 You might like to check out SW Iron instead of Chrome, it is the same
 thing but without the many Google tracking devices built-in:

 http://www.srware.net/en/software_srware_iron_chrome_vs_iron.php

 You just gotta love that SRware are obviously down on Google tracking yet
 are quite willing to take their money by offering Google Ads on both the
 side and bottom of their page.

Yeah.  FWIW, once you start to poke at that the whole thing ... aw, heck.
Let me just add to this an excerpt from the Debian mailing lists when this
same thing came up:

http://forums.debian.net/viewtopic.php?f=20&t=50358

At the time SRWare also used a service provider that used pop-under
advertising to make money.

So, yeah.  As far as I can see this is, in essence, an implementation of the
principle that if you pander to paranoia then you can make money from ... the
folks who care enough to worry about privacy, but not enough to actually do
the research themselves.

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] Listing local wifi access points?

2010-10-04 Thread Daniel Pittman
Erik de Castro Lopo mle+s...@mega-nerd.com writes:
 darrin hodges wrote:

 The aircrack-ng http://www.aircrack-ng.org/ suite contains a utility that
 will list APs.

 Sorry, any program whose getting started tutorial [0] has a first
 step of Determine the chipset in your wireless card is going in
 the wrong direction.

 I have absolutely no interest in cracking other people's wifi. What
 I'd like to do is figure out which channel in my local area is the
 least congested so I can park my AP on that channel.

Cool.  Don't forget that if you are using 2.4GHz there are *three* channels,
no matter what the card says: only 1, 6, and 11 are non-overlapping, so you
will get a performance hit from anything that crosses over between them.
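If the card is already up, a plain scan shows who is camped where (interface
name assumed):

  iwlist wlan0 scan | grep -E 'ESSID|Channel|Quality'

That lists every AP in range with its channel and signal level, which is all
you need to pick the emptiest of 1, 6 and 11.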

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] Hosting e-mail?

2010-09-26 Thread Daniel Pittman
Amos Shapira amos.shap...@gmail.com writes:

[...]

 Could someone suggest a better mail service we can rent separately from
 web24 and point the MX records for that domain?

Fastmail.fm are good, but...

 just something which will reliably forward e-mail to our various gmail
 mailboxes.

...would the free Google Apps for Domains meet your needs?  It is limited to
50 mailboxes, and is separate from your current gmail account, but should give
you equal reliability and capacity.  (Plus you are obviously happy with their
overall service and all...)

Regards,
Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] Hosting e-mail?

2010-09-26 Thread Daniel Pittman
Amos Shapira amos.shap...@gmail.com writes:
 On 27 September 2010 15:03, Daniel Pittman dan...@rimspace.net wrote:
 Amos Shapira amos.shap...@gmail.com writes:

 ...would the free Google Apps for Domains meet your needs?  It is limited to
 50 mailboxes, and is separate from your current gmail account, but should 
 give
 you equal reliability and capacity.  (Plus you are obviously happy with their
 overall service and all...)

 Thanks. Will propose it to her.

No worries.  My next choice would definitely be fastmail.fm; we recommend and
resell them here at work, and I know some of the folks who work for them.
They are solid, if all you want is great IMAP mail support. :)

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] copying binaries from one Centos to another?

2010-09-18 Thread Daniel Pittman
Voytek Eymont li...@sbt.net.au writes:
 On Sat, September 18, 2010 3:37 pm, Daniel Pittman wrote:
 Voytek Eymont li...@sbt.net.au writes:

 That will not help unless you rebuild Cacti to use the older version of
 RRDTool.  (Also, are you /sure/ it is RRDTool and not something else in the
 Cacti stack that causes the performance problems?  RRDTool is
 crazy-efficient, so I wouldn't pick it as the first point to suspect.
 OTOH, if you measured it and found it was the cause then you measured it
 and all. :)

 Cacti has a setup screen where the path to binaries like rrdtool can be entered;

That should be sufficient, then, to use your custom version.

[...]

 I thought the issue was again lack of a math co-pro, and, will 'go away' on
 the P4, initially

Yeah, and naturally it didn't because there isn't anything Intel make that
doesn't have a nicely fast math coprocessor these days (and, in fact, you have
to go back to the days of the i486 to get something that didn't ship one
on-die and all... :)

 Cacti was using RRDTool 1.2x, then after a yum update, it got 1.4, I 'fixed'
 the path in setup, but, a screen draw is painfully slow now, based on what I
 found on the Celeron system, it seems it's similar issue

A screen draw?  Are you running X locally on the system, or is this on your
client?  Neither one necessarily points to RRD 1.2 vs 1.4, although it could
be involved in either.

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] copying binaries from one Centos to another?

2010-09-18 Thread Daniel Pittman
Voytek Eymont li...@sbt.net.au writes:
 On Sat, September 18, 2010 7:17 pm, Daniel Pittman wrote:
 Voytek Eymont li...@sbt.net.au writes:

 A screen draw?  Are you running X locally on the system, or is this on your
 client?  Neither one necessarily points to RRD 1.2 vs 1.4, although it
 could be involved in either.

 sorry, I meant lag to generate /populate graph images in the browser, to
 load the cacti graph page in a browser on a client over ethernet

OK.  So, just to check: can you reproduce this performance problem with a page
that contains only one single graph?

One of the big performance ... well, not problems any longer, but issues with
our use of munin in-house is that the default view for a host contains
something like 60 or 80 graphs[1].

Using the CGI or FastCGI drawing model this, by default, means that we fired
off something like 120 processes on the machine when we visit that page.  That
caused some performance delay in getting results, because they all took about
the same length of time to draw, and were all CPU hogs...


So, it might be worth checking if your problem isn't RRD, but rather the fact
that you have lots of instances running concurrently on an old and slow
machine[2] that just take ages to start delivering results.

Daniel

We fixed it by limiting concurrency, so you get less overall delay, and faster
initial results, out of the system.

Footnotes: 
[1]  We love us some metrics, and how!

[2]  Heck, with a P4 or Celeron it wasn't a fast machine even back in the day,
 thanks to Intel's foolish micro-architecture decisions and all.

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] copying binaries from one Centos to another?

2010-09-17 Thread Daniel Pittman
Voytek Eymont li...@sbt.net.au writes:

 I have a Centos 5x system with RRDTool 1.2x as in

 # ls
 bin  include  lib  share
 # pwd
 /usr/local/rrdtool-1.2.27

 I want to use this on another Centos 5x, do I need to copy anything else
 beside recursive /usr/local/rrdtool-1.2.27?

Maybe.  That setup /looks/ like a complete install of rrdtool under that path,
but unless you actually know that there is no assurance it is true.

 or is it a 'bad idea' to copy from one system?

Generally, maybe.  Technically, no, it should work fine by and large.  You
might run into issues having to get older versions of libraries that the
binary depends on, and if the glibc versions are far enough apart there could
be trouble, but it *should* work.
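A quick way to see what else the binary expects from the system (path taken
from your listing):

  ldd /usr/local/rrdtool-1.2.27/bin/rrdtool

Anything it lists that isn't already on the second box - typically libpng,
libart and freetype for that vintage of RRDTool - needs to be present at a
compatible version, or the copied binary won't run.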

However, outside the strictly technical you are probably inviting even *more*
effort and suffering later using a hand-managed version of that package.

 (I've setup a Centos 5 to run Cacti on Celeron 1.7GHz, but, it seems to
 really struggle, maxing out at 100% (guess it wants an arithmetic processor that
 Celeron doesn't have ?);

It sure does.  Whatever the problem was, it wasn't that.  (Probably just that
Cacti needs more CPU than it could get, possibly just in sudden bursts. :)

 so, I found a P4, and, setup Cacti, that was better, till yum update updated
 RRDTool from 1.2 to 1.4, now, cacti web page load really lags, hence I
 thought I'll backgrade to RRDTool 1.2)

That will not help unless you rebuild Cacti to use the older version of
RRDTool.  (Also, are you /sure/ it is RRDTool and not something else in the
Cacti stack that causes the performance problems?  RRDTool is crazy-efficient,
so I wouldn't pick it as the first point to suspect.  OTOH, if you measured it
and found it was the cause then you measured it and all. :)

Regards,
Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons


Re: [SLUG] monitor IP address

2010-09-10 Thread Daniel Pittman
Amos Shapira amos.shap...@gmail.com writes:
 On 11 September 2010 14:44, james j...@tigger.ws wrote:

 I've changed ISP to iinet and I now have a dynamic IP.  I use a router/adsl
 modem.  Just brainstorming in case I've missed the obvious. I need to
 monitor my IP given to the router.

 What do you need it for? Maybe the context will let us give more useful
 answers.

*nod* FWIW, there are two reasons this usually comes up in my experience: the
first is general interest, for which I would also recommend whatismyip, or
just logging in to the modem and letting it tell you.

The second is running services, for which I highly recommend using any one of
the dynamic DNS update services that pretty much every ADSL modem has a
built-in client for these days.  That way the modem can take care of all the
hard work for you, and you just sit back and use the DNS name.

(Don't run an SMTP server on a dynamic IP, though.  It is a really bad and
 painful way to discover how much lost and misdirected email sucks, since
 I pretty much promise that the one time you are *depending* on that email
 getting through your IP changes and someone else with an SMTP server gets the
 address, then sends back a 5XX to your correspondent.)


 My first idea is to have the router log remotely, monitor the log, and
 surely there will be some IP and IP changed info.

 How about whatismyip.org? (as opposed to .com, which gives a larger and more
 complex page):

I would just script getting it out of the modem, if the dynamic DNS option
wasn't satisfactory.  Much lower overhead, and zero cost to both you and the
International transit links. :)
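
The modem-scraping details vary per model, so I can't give a recipe, but the
general shape of such a watcher, shown here against whatismyip.org purely for
illustration (the file names are made up), is just:

    #!/bin/sh
    # log a line only when the public address changes
    new=$(curl -s http://whatismyip.org)
    old=$(cat /var/tmp/last-ip 2>/dev/null)
    if [ "$new" != "$old" ]; then
        echo "$(date): address changed from ${old:-unknown} to $new" >> /var/log/my-ip.log
        echo "$new" > /var/tmp/last-ip
    fi

...dropped into cron every few minutes.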

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: fedora 13 qemu guest (Peter Miller) (Daniel Pittman)

2010-09-09 Thread Daniel Pittman
Brett Mahar brett.ma...@gmail.com writes:

 That sounds suspiciously like you are either running fully emulated (eg: no
 KVM support) on Slackware, or that you don't have virtio on that platform.

 It would be interesting, to me, to know if either of those guesses were
 true, because I have a current need to improve my skills diagnosing KVM
 performance issues and feedback helps with that. :)

 I only thought it was that because I could not think what else it could
 be. In the next few weeks I hope to have some time to investigate further,
 and will let you know.

Well, cool.  FWIW, checking for virtio should be as simple as lspci inside the
guest, and checking if you have a virtio device or not.
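
That is, inside the guest:

    # virtio disk and network devices show up on the virtual PCI bus
    lspci | grep -i virtio
    # and the matching drivers, if they are loaded
    lsmod | grep virtio

No output from either is a fairly strong hint that you are running on fully
emulated devices.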

 Is the backing store for the virtual machine also on the USB disk?  If so
 it would point a finger toward guest I/O performance being the pain point, and
 the introduction of USB would just be making a painful service slower.

 In all cases the virtual hard drive was on an (real) external usb drive (not
 the same one as the host OS was sometimes on). So I don't think the guest
 I/O is the culprit.

Yeah, that would eliminate that by ensuring it was identical.

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] fedora 13 qemu guest (Peter Miller)

2010-09-08 Thread Daniel Pittman
Brett Mahar brett.ma...@gmail.com writes:

 I'm trying to use qemu to host a Fedora 13 guest.

 On that note, be aware there are vast differences in the speed of
 applications running on qemu virtual machines, depending on the host
 system. Eg, BOINC on Debian or Suse running inside a virtual machine, with
 an Ubuntu 10.04 host is about 10 times faster than on Slackware 13 host
 (MIPS measured by BOINC software).

That sounds suspiciously like you are either running fully emulated (eg: no
KVM support) on Slackware, or that you don't have virtio on that platform.

 I am guessing this is due to Ubuntu kernel being built with kernel same page
 merging, and Slackware without, but am not sure.

That seems unlikely: other than the cost to scan for pages to merge (which
hurts Ubuntu) that would only make a difference if you were pushing the limits
of your host memory capacity *and* KSM shared enough to avoid I/O in the
guest.
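
You can at least check whether KSM is doing anything at all on the host,
assuming the kernel is new enough to expose it in sysfs:

    # 1 means the scanner is running; the page counters show how much is merged
    cat /sys/kernel/mm/ksm/run
    cat /sys/kernel/mm/ksm/pages_shared /sys/kernel/mm/ksm/pages_sharing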

It would be interesting, to me, to know if either of those guesses were true,
because I have a current need to improve my skills diagnosing KVM performance
issues and feedback helps with that. :)

 Also inexplicably, having the host OS boot from an external usb hard drive
 causes major drop in the speed of the virtual machine, far beyond what would
 be caused by disk access times. I am not sure why this happens.

Well, it would slow down the host OS some, because USB is a fairly slow
and high latency protocol for disk access, but otherwise it shouldn't
fundamentally alter the running kernel.

Is the backing store for the virtual machine also on the USB disk?  If so it
would point a finger toward guest I/O performance being the pain point, and the
introduction of USB would just be making a painful service slower.

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Travelling Overeseas - USB Modems

2010-09-05 Thread Daniel Pittman
Rick Phillips r...@greyheads.net writes:

 I presume some people on this list travel to the UK and Europe.

 We will be travelling in Europe on a guided tour where the use of free
 internet in coffee lounges is not an option and my experience of a
 couple of years ago was that buying Internet time in the hotels where we
 stayed was horrendously expensive - made worse by the exchange rate.

 Are USB 3G modems a reasonable alternative?  If so, what would members
 recommend?

Buying a service in the country you are wanting to use it in, since no one
anywhere has reasonable International data roaming rates.  Most countries
offer an inexpensive pre-paid SIM that you can slot into an existing 3G / GSM
modem and use on their network.

If you are travelling to Japan things are harder, but pretty much all of
Europe is fine for picking up a pre-paid data SIM without trouble.

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] [slightly OT] video upload site with API?

2010-09-04 Thread Daniel Pittman
Sonia Hamilton so...@snowfrog.net writes:

 [slightly OT]

 Can anyone recommend a video upload site (like Youtube, DailyMotion)
 that has an easily scriptable API, for uploading a lot of videos, that
 also allows passwording of the videos?

YouTube?  [1]

OTOH you explicitly asked for this mentioning YouTube so I assume that API
can't meet your needs somehow — in which case, how doesn't it work for you?

Daniel

Footnotes: 
[1]  
http://code.google.com/apis/youtube/2.0/developers_guide_protocol_direct_uploading.html#Direct_uploading

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] High I/O file systems

2010-08-31 Thread Daniel Pittman
Max Wright maxwrigh...@gmail.com writes:

 Has anyone implemented any of the high I/O filesystems which have been added
 to the kernel?

Well, we use XFS widely because the performance is enormously better than
ext3, and they have worked hard to redress the few metadata heavy workloads
that they used to perform poorly at.

 We have some busy databases which put ext3 under stress, I am wondering
 about Oracle's fs for instance.

That would be a universe more experimental than /I/ would be comfortable with
in production at this point in time.  (Assuming you mean btrfs rather than
luster or one of their other cluster file systems. ;)

Regards,
Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] flashing motherboard with no floppy drive

2010-08-29 Thread Daniel Pittman
Martin Visser martinvisse...@gmail.com writes:

 It looks like according to this all you need to make sure is that a
 barebones DOS session is running. According to this,
 http://www.pcstats.com/articleview.cfm?articleid=1605page=4 , if you boot a
 DRDOS CD that should work.

These days FreeDOS is what many vendors ship, and it should work with more or
less anything for flashing firmware.

 Maybe be even using DOSEMU in Linux might work?

Generally not, although I never really understood why.
Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Virtualisation and DRBD

2010-08-25 Thread Daniel Pittman
Nigel Allen d...@edrs.com.au writes:

 We're investigating both virtualisation of servers and High Availability at
 the same time.  Currently looking at Linux-HA and DRBD (amongst others).

Keep in mind that both things — HA and virtualization — are actually pretty
hard to get working smoothly.  Just ask me how it sucks when things don't work
right, after another ten hour day fighting a data corruption bug. ;)

 The idea of DRBD appeals to both me and the client as it means (or seems to
 at least) that we could add a third (off-site) machine into the equation for
 real DR.

DRBD is two-machine in pretty much any reasonable setup, and the performance
for an off-site spare is going to be ... interesting.  If you don't have a
very low latency, high bandwidth link between the locations then you can
expect significant pain and suffering.

(The safe DRBD protocol means suffering in performance, the unsafe ones
 insulate you from pain until you depend on that warm-spare and find out if it
 actually got all corrupted or not before the failure.)

 What happens when we then introduce Virtualisation into the equation
 (currently have 4 x servers running Centos  Windoze Server - looking at
 virtualising them onto one single box running Centos-5).

Keep in mind that performance absolutely requires that you have
paravirtualized drivers for those kernels.  That means picking something where
you have good disk and network virtual drivers ... and that probably means
not KVM, which sucks.

 I suppose the (first) question is: If we run 4 virtualised servers (B,
 C, D, and E) on our working server A (complete with it's own storage),
 can we also use DRBD to sync the entire box and dice onto server A1
 (containing servers B1, C1, D1, and E1) or do we have to sync them all
 separately?

Yes.  Specifically: you can do either, or both, depending on how you set it
up, and on the capabilities of whatever management software you layer over the
basic virtualization tools.

 Will this idea even float? Can we achieve seamless failover with this.

Maybe.  You have to be very, very clear on which two of the three attributes,
consistency, availability and partition tolerance you need, and make
absolutely, without question certain that you deliver on that.

To be clear: that means you must absolutely deliver availability (of your HA
solution) and non-partitioned connectivity, because you can't live with
inconsistency of data between the machines.

This is much harder than it sounds: you can easily work out that you have a
network cable pulled and have the entire ball of wax fall apart if you are not
very, very careful.

 If not, how would you do it.

We just moved from this to delivering iSCSI storage in the back-end, with
execute nodes that are going to start shedding local disk.  This uses KVM as
the virtualization technology, but anything that can talk to raw partitions
should be fine on top of this.

This gives us two advantages: first, we can scale as broad as we want (and
have great performance) by virtue of deploying cheap storage nodes with 6TB of
usable disk, tolerance of any two disk failures, and 8.5GB of disk cache
between the nodes and the spinning rust.

Adding another of those and moving some of the load to another system is
relatively inexpensive, and can fairly easily grow with our needs; we can also
scale up storage node performance by throwing a more expensive SAS array or
server into the mix.  (...or just more cache memory, at $100 a GB or so.)


The second is that we can deliver reliability without complicating the storage
nodes: a virtual machine can use Linux software RAID to mirror, stripe, or
otherwise combine multiple iSCSI LUNs inside the virtual machine.
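
Inside the guest that is just ordinary mdadm; a sketch, with device names that
are obviously illustrative:

    # mirror two iSCSI-backed virtual disks from different storage nodes
    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/vdb /dev/vdc
    mkfs.xfs /dev/md0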

This gives us similar performance and redundancy benefits to using local
storage to back those devices, including the ability to lose a storage node
and have the machine continue to work.

This does mean that, unlike a design where replication is directly between
storage nodes, we send writes out N times for N targets from the processing
node — but that isn't any more bytes over the network overall, and we are not
short on network capacity.  (Plus, dedicated storage links on a bunch of the
busier machines make this work without interference with public services.)

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Backups

2010-08-22 Thread Daniel Pittman
DaZZa dagi...@gmail.com writes:
 On Sun, Aug 22, 2010 at 4:54 PM, james j...@tigger.ws wrote:

 I've spent many hours with pencil and paper, I'm certain, but am asking in
 case someone older-n-wiser can offer sage words:

[...]

 Yup, that's the biggest failings with most commercially acceptable backup
 regimes.

 It's an offset of cost (in tapes) against reliability. If you want to be 99%
 guaranteed[1] to be able to recover any file which was saved, your only
 option is to take a daily full backup. Any grandfather/father/son schema
 will eventually lead to the possibility for files going missing.

*nod*  For what it is worth, this is why my usual backup deployment is
wonderful in the common case (backup server is alive), and very, very slow in
the worst case (backup server isn't).

Specifically, I deploy BackupPC or some similar online, disk-based,
deduplicating backup system to provide the vast majority of operations.

This gives me good responsiveness to most failures: lost file or machine?
No problem, since the data is there on disk, and you are not very far from
recovering it to whatever target you need.


For disasters tape is still good: take regular full dumps of the backup
filesystem (or with incrementals, if the cost/benefit works for you) to allow
you to recover the backup server if you lose it.
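
The mechanics of that don't need to be clever; the paths and tape device here
are illustrative, but a weekly cron job on the order of this is plenty:

    # full dump of the BackupPC pool straight to tape
    tar -cf /dev/st0 -C /var/lib backuppc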

That makes a full recovery harder: you have to rebuild the backup server, then
build whatever servers are critical.  OTOH, it also makes for faster
performance in that second phase of recovery because you are no longer limited
by tape restore speed after that one machine. :)

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: GRUB2 + RAID-1 on debian?

2010-08-11 Thread Daniel Pittman
Mr A Tomlinson an...@allsaints.nsw.edu.au writes:

 I have no experience with Debian, RAID-1 (software) and Grub, but I use
 it on numerous low end server Fedora installations.

[...]

 This site/article shows how to install grub to the 2nd hdd:
 http://lists.us.dell.com/pipermail/linux-poweredge/2003-July/008898.html

 While this one shows the hdd replacement method to rebuild your arrays:
 http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array

Those are both grub 1 documents; grub 2 is a ground-up reimplementation that
substantially changes things.

One of the changes, pleasantly, is that it finally understands things like
software RAID internally to grub, so that sort of issue is much reduced.

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] A distro which recognises Wi-fi on Asus eee 1005p? -- thanks for suggestions

2010-08-11 Thread Daniel Pittman
Jon Jermey jonjer...@gmail.com writes:

G'day Jon.  It would be *really* nice if you could follow the same quoting
style as the rest of the thread; it makes it much easier to follow.

 *nod* The real problem is not that Linux is hard to install: Windows is
 just as damn hard, and just as painful, for non-technical users.

 The problem is that when you buy your machine it comes with Windows
 installed for you, by someone else, and you don't have to worry about it.

 To me the issue goes somewhat deeper: from my limited perspective as a
 hardware novice, I just can't see why different drivers are even necessary.

Let me educate you: because the hardware is different, which is because the
vendors are lazy, and because ...

 As far as I can tell one wi-fi card does exactly the same things as any
 other wi-fi card:

... there are some pretty big variations in what the cards can do, to date,
and networking is one of those areas where performance differences are still
found, and where the activities of the hardware are complex.

 is is too much to ask that the hardware should be constructed so that it
 works with old software which is already out there and functioning perfectly
 well?

Obviously, yes.  Otherwise there would be a standard interface for wifi
drivers, like we have in USB with the UHCI, OHCI, EHCI, and other
standards. :)


 I accept that when there are technological breakthroughs the drivers need to
 change; but what we seem to have at the moment is a rerun of the old
 Apple/Apricot/Microbee/Commodore/Amiga hardware wars with each (wi-fi card,
 scanner, printer...) manufacturer refusing to run software -- i.e. drivers
 -- just because it was made for a different piece of hardware.

Neither your statement about the old ... hardware wars, nor the one about
drivers, is accurate, so there isn't much response possible to this.

 Surely making a new piece of hardware that will fit into an existing system
 should be the responsibility of the hardware manufacturer, not the user OR
 the operating system manufacturer.

Why?  Seriously: why should the hardware vendor be responsible for paying out
the significant amount of money required to support your boutique little OS?

If you want to run whatever software you please, why shouldn't you pay
directly for the cost of driver development?[1]


 Isn't it a bit like the printer manufacturers trying to stop you using
 generic ink by sneaking microchips into their cartridges?

In some ways, but not very many of them, and not very similar.

A much better analogy would be that a vendor who refuses to distribute
hardware programming documentation is like a microchipping ink vendor:

In that case both vendors are actively trying to stop you using their system
the way you want, rather than just giving you all the information about the
hardware and a hearty "you want a driver, you know where to find GCC".

Daniel

Footnotes: 
[1]  ...and you do, even in the FOSS universe: there, you pay by waiting
 longer to get drivers, rather than by paying more money to get them.[2]

[2]  ...or waiting longer *and* paying more money, as many commercial OS
 implementations show is quite possible. :)

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] A distro which recognises Wi-fi on Asus eee 1005p? -- thanks for suggestions

2010-08-10 Thread Daniel Pittman
Ken Foskey kfos...@tpg.com.au writes:
 On Wed, 2010-08-11 at 06:08 +1000, Martin Visser wrote:

 Jon's experience probably really demonstrates why Linux isn't going to go
 mainstream anytime soon. While I would say 90% of people are going to have
 hardware that just works with the most current release of most distros, it
 is the 10% that have issues that really stings.

[...]

 Installed windows on non-mainstream machines lately.  You have to find
 drivers,  have conflicts of dlls and other things.

*nod*  The real problem is not that Linux is hard to install: Windows is just
as damn hard, and just as painful, for non-technical users.[1]

The problem is that when you buy your machine it comes with Windows installed
for you, by someone else, and you don't have to worry about it.

Daniel

Footnotes: 
[1]  Heck, it sucks for technical folks too, a lot of the time.

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] making ip adress stick rather than DHCP

2010-07-30 Thread Daniel Pittman
Voytek Eymont li...@sbt.net.au writes:
 On Fri, July 30, 2010 2:24 pm, Ben Donohue wrote:

[...]

 I guess I need to disable dhclient, is that what caused the problem ?

Well, if it is running it will keep renewing a lease, so yeah.  Kill it if you
have the static configuration ready and all.
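
Something like this, once the static address is actually in the config files
(interface name per your box, naturally):

    # stop the client renewing the lease, then bring eth0 back up statically
    pkill dhclient
    ifdown eth0 && ifup eth0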

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Multiple server roles on one box

2010-07-29 Thread Daniel Pittman
dave b db.pub.m...@gmail.com writes:
 On 29 July 2010 14:40, Daniel Pittman dan...@rimspace.net wrote:
 dave b db.pub.m...@gmail.com writes:
 On 28 July 2010 12:23, Matthew Hannigan m...@zip.com.au wrote:
 On Tue, Jul 27, 2010 at 04:04:05PM +1000, Ben Donohue wrote:
 [  ]
 How about a DNS, squid and web server with multiple name based
 virtual domains on the same box?

 Is doing the above really dangerous on a fully patched and up to
 date system?
 Also depends on the webapp.
 I'd be more comfortable with java (especially with security
 manager on) which is after all another form of vm.
 Java is like php, there are also language flaws coming out to bite you real
 soon. /me mutters something about OH MY THEY ESCAPED FROM THE JVM.

 Do you have a reference for that?

 Here is a recent example :)
 http://blog.cr0.org/2009/05/write-once-own-everyone.html
 You can finder older examples as well :)

Thanks.  That saves me searching around to try and find the same information
myself. :)

 ...but why?  What actual security value does that add, compared to the vanilla
 kernels which do, oh, everything listed in their bullet point feature list,
 and out of the box covers over eighty percent of them?

 Good :) - but not chroot break out prevention, further aslr improvements etc.

Mmmm.  Most of the non-merged features are the ones that are fairly heavily
disputed WRT security value, though, are they not?

Anyhow, actually telling people why you recommended this was my core point,
not to argue about the actual value of the individual items, so if you don't
feel super-enthused about responding neither do I. :)

 Pro tip: asserting that an RBAC system will increase security is silly without
 actually understanding how it will be used; people can do things just as badly
 with RBAC as without.

 Sure, but grsecurity also has some other features :)

That was a subset list of the issues, but a fair response to what I wrote. ;)

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] making ip adress stick rather than DHCP

2010-07-29 Thread Daniel Pittman
Voytek Eymont li...@sbt.net.au writes:

 I have am old clunker with Centos 5.x, it appears it developed NIC probs,
 couldn't reach it, when screen plugged in, screen blank

 on power up, it gave some eth0 messages;

 I removed the 3COM PCI NIC, stared at it for a while whilst nodding my
 head, then, put it back in.  machine booted OK, BUT, is now on DHCP address,
 not what it was, fixed IP

Strange; I would almost guess that it automatically reconfigured it or
something on boot, which is rather strange...

 I've 'ifconfig eth0 192.168.1.5' ; 'route add -net...'
 that returned it desired IP OK, but, on reboot, it reverts to DHCP
 what do I need to make it stick ?

Edit the network settings in /etc/sysconfig to set it with a fixed address
rather than DHCP.  Possibly use the RH GUI tool rather than the command line,
if you prefer, to do it for you. :)
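
On CentOS that usually means something like this in
/etc/sysconfig/network-scripts/ifcfg-eth0 (the address is the one from your
mail; the gateway is a guess, so adjust to suit):

    DEVICE=eth0
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=192.168.1.5
    NETMASK=255.255.255.0
    GATEWAY=192.168.1.1

...followed by 'service network restart', or a reboot.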

 (tried editing some /etc/sysconfig files, but, it was telling something
 about hardlinks, so I didn't)

I got nothing. :)

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Multiple server roles on one box

2010-07-28 Thread Daniel Pittman
dave b db.pub.m...@gmail.com writes:
 On 28 July 2010 12:23, Matthew Hannigan m...@zip.com.au wrote:
 On Tue, Jul 27, 2010 at 04:04:05PM +1000, Ben Donohue wrote:
 [  ]
 How about a DNS, squid and web server with multiple name based
 virtual domains on the same box?

 Is doing the above really dangerous on a fully patched and up to
 date system?
 Also depends on the webapp.
 I'd be more comfortable with java (especially with security
 manager on) which is after all another form of vm.

 With php (wordpress, drupal, moodle, or home grown) definitely not
 comfortable.

 While I also dislike php, what did the last pm's website use ? drupal.  What
 the president of the united states use - drupal. I can keep on going :)

Which government website had a big security hole from their CMS?  *grin*

Not that you are wrong: using tools like WordPress, Drupal, Moodle, and Joomla
is reasonably safe, regardless of the language they are written in; their
security is primarily dependent on the quality of the authors.


One of the PHP problems is that there is so much home grown written in it
that, today, it lives in the same place that Perl did years back: as the go-to
language for people who don't know enough to write a secure application, so
build insecure things.[1]


 Java is like php, there are also language flaws coming out to bite you real
 soon. /me mutters something about OH MY THEY ESCAPED FROM THE JVM.

Do you have a reference for that?


 Also, if you really care about the security of the system - install
 grsecurity[1]. You shouldn't be taking any chances :P

...but why?  What actual security value does that add, compared to the vanilla
kernels which do, oh, everything listed in their bullet point feature list,
and out of the box covers over eighty percent of them?


Pro tip: asserting that an RBAC system will increase security is silly without
actually understanding how it will be used; people can do things just as badly
with RBAC as without.

Daniel

Footnotes: 
[1]  Yes, IMO, the PHP language design encourages extremely poor practice,
 and the almost religious avoidance of database abstraction makes SQL
 injection issues more common, but fundamentally this isn't the cause of
 the problems — since Perl with DBI wasn't that much better, frankly, on
 the SQL injection front.

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Multiple server roles on one box

2010-07-27 Thread Daniel Pittman
Ben Donohue donoh...@icafe.com.au writes:

 just a question...  I'm looking at the possibility of consolidating some
 servers for a client.

 If an internal box with DNS and squid were combined, would this be a security
 risk... as in risky way above normal?

No.

 How about a DNS, squid and web server with multiple name based virtual domains
 on the same box?

No.

 Is doing the above really dangerous on a fully patched and up to date
 system?

Yes, because you have world-facing services, which is really dangerous.

However, it is kind of unavoidable; in this case your biggest risk is going to
be the web server, which if it hosts any sort of dynamic content is going to
run the risk of compromise.

The DNS and Squid servers are basically irrelevant here, and putting them on
the same systems is unlikely to increase your practical risk.


 Any advice please?

Do a proper risk assessment, and then work out if putting those on the same
system meets your acceptable risk/value trade-off or not, as a business.

Regards,
Daniel

No, seriously, even if it was more risky running them on the same machine, it
might be worth $4,000 a month to the business in rack space and power to
consolidate them, and so worth doing anyway.

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Task bar not working

2010-07-24 Thread Daniel Pittman
Jon Jermey jonjer...@gmail.com writes:
 On 22/07/10 21:17, Chris Allen wrote:

 I am using Ubuntu 10.4
 Since yesterday my task bar (@ bottom of Gnome screen) has been unworkable.
 It is still there but is always black on black.  When I minimise a window, I
 see it shrink and disappear into the task bar but after that I see no trace
 of it.  It almost impossible to recall it again from the task bar.

 Is there any explanation / fix for this?

 http://ubuntuforums.org/showthread.php?t=140473

Ouch!  Delete all your settings, just to restore the one application?  That
would really hurt.

You would probably be much better off firing up gconf-editor and looking
through the Windows Registry ^W^W gconf database for the menu settings, or
hunting for the GNOME panel settings under ~/.gnome and deleting only those.


Assuming that is the problem the OP faces, and not rather that some bug
somewhere has broken his system without his actually doing anything.

Daniel

I would consider turning off compositing in your window manager and see if
that helps.
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Samsung Galaxy S i9000

2010-07-18 Thread Daniel Pittman
Simon Rumble si...@rumble.net writes:
 On 18 July 2010 21:01, Daryl Thompson daryl.francis.thomp...@gmail.comwrote:

 i am looking at Android powered mobiles and was wondering has any one
 synchronised the Samsung Galaxy S i9000 with Linux ubuntu as yet and if
 there was any problems

 What do you synchronize with a desktop?

Anything you want, if you install an appropriate sync provider in Android 2.1
or newer.  There is at least one commercial implementation of this, although
not focused on Linux at all.[1]

So, you could either write some FOSS code to implement a sync provider for the
desktop, find one that someone else wrote, or ...

 Android is really designed to have all your stuff in the cloud.  Google
 Calendars, Gmail etc.  Anything else is going to be working against the
 grain.  It might work but it'll be hard work.

... just sync with something in the cloud, like Google, Facebook, MySpace[2],
and perhaps soon Twitter.

Daniel

Footnotes: 
[1]  http://www.markspace.com/products/android/missing-sync-android.html

[2]  IIRC, they have a contact provider or something, but I didn't really
 check since I don't use this service myself.

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] today's scary thought

2010-07-16 Thread Daniel Pittman
Jake Anderson ya...@vapourforge.com writes:
 On 15/07/10 16:14, Daniel Pittman wrote:

[...]

 We cant be the first people to come across this branch office scenario.

Nope.  Lots of people have, and wished there was a good solution, but it is a
*really* hard problem.  The difficulty curve in fixing it looks like a
backward L, basically: the most trivial bit is trivial, then the problem more
or less instantly gets insanely hard.

 My goal is to have the branch office get a copy of all the files (think MS
 office) without hitting performance at either end.  something like this
 rsync thing, with a distributed lock manager would be the solution to 99% of
 the problem

...only then you pay the cross-WAN latency cost for every file open, at least,
plus have to deal with the problem of disconnected operation, so still need
conflict resolution, plus...

 The only problem I can see is if person A in Newcastle wants person B in
 Sydney to look at their file, they press save and then person B opens it
 before the lazy copy has moved it over, Perhaps maintain a write lock on the
 file until its synched? with user definable behaviour in the case of failure.

 At the moment the branch office is going to be working over a VPN back to
 the main office, with all the files etc sitting inside VM's, the images of
 the VM's will get rsynced nightly.  Which all in all is a fairly craptacular
 solution to be honest.

Mmmm.  For what it is worth, the least-worst solutions that I have found are:

1. Fire up a WebDAV server in each office to store their files, and make
   sure that it can be accessed through a fully public DNS name.[1]

   Some document management solutions offer WebDAV as part of their feature
   set, and might be a good addition to this.  IIRC, SharePoint, in an MS
   environment, is one of 'em.

2. Go buy a copy of http://www.netdrive.net/ for every user that you have.
   (...or just use a Mac, since they do WebDAV OK too. :)

3. Use it to mount the WebDAV share for your users, because unlike the native
   Win32 WebDAV support, it doesn't suck.[2]  Specifically, it works even if
   you are using a program that *isn't* Microsoft Office.

That gives reasonable performance, akin to HTTP, for reading the remote file,
plus some local caching, and it works right no matter where on the Internet
your users are because they access a public URL, not a private CIFS share.
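
For what it is worth, the server side of step 1 can be as small as this Apache
fragment (the paths, share name and auth file are made up for illustration;
add SSL per the footnote):

    DavLockDB /var/lock/apache2/DavLock
    Alias /files /srv/office-files
    <Directory /srv/office-files>
        Dav On
        AuthType Basic
        AuthName "Office files"
        AuthUserFile /etc/apache2/dav.passwd
        Require valid-user
    </Directory>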


However, not perfect, especially the server options, and not exactly
replicated between sites.  I don't know what the latest NetDrive offers in
terms of offline operation, either.

Daniel

Footnotes: 
[1]  Add SSL, authentication, etc to taste, of course.

[2]  Disclosure: this might not be true in Windows 7, or the latest Vista
 service packs, but because I have never used them I can't actually say.

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] today's scary thought

2010-07-15 Thread Daniel Pittman
Jake Anderson ya...@vapourforge.com writes:
 On 15/07/10 14:10, Matthew Hannigan wrote:
 On Wed, Jul 14, 2010 at 04:06:17PM +1000, Peter Chubb wrote:

 You could do this with inotify, with `just a few' scripts around it.

 Related: http://code.google.com/p/lsyncd/ drives rsyncing with inotify.

 Actually that looks like a fairly handy tool, I have been trying to work out
 the best way of keeping files in two offices in sync and drbd seemed like
 overkill

Keep in mind that using rsync like that has absolutely *zero* conflict
resolution support, so you are inviting the data-loss fairy to visit when
there are concurrent modifications.

DRBD, meanwhile, is useless without a cluster file-system on top of it, since
you otherwise can't mount the data at both sites at the same time.


Sadly, I can't right now advise a better solution than these, however, since
it is the main problem I face in trying to bridge two data-centers and provide
coherent and sensible file access.

The best I can offer, right now, is xtreemfs[1] which will give you fair
performance but no local caching, so no disconnected operation.

Regards,
Daniel

Footnotes: 
[1]  http://www.xtreemfs.org/

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] today's scary thought

2010-07-15 Thread Daniel Pittman
Morgan Storey m...@morganstorey.com writes:

I bet the manual part of that synchronization doesn't win any points with
the users. :)

Daniel

 I know it is a bit un-maintained but what about Unison

 --
 Regards
 Morgan Storey

 On Thu, Jul 15, 2010 at 4:14 PM, Daniel Pittman dan...@rimspace.net wrote:

 Jake Anderson ya...@vapourforge.com writes:
  On 15/07/10 14:10, Matthew Hannigan wrote:
  On Wed, Jul 14, 2010 at 04:06:17PM +1000, Peter Chubb wrote:
 
  You could do this with inotify, with `just a few' scripts around it.
 
  Related: http://code.google.com/p/lsyncd/ drives rsyncing with inotify.
 
  Actually that looks like a fairly handy tool, I have been trying to work
 out
  the best way of keeping files in two offices in sync and drbd seemed like
  overkill

 Keep in mind that using rsync like that has absolutely *zero* conflict
 resolution support, so you are inviting the data-loss fairy to visit when
 there are concurrent modifications.

 DRBD, meanwhile, is useless without a cluster file-system on top of it,
 since
 you otherwise can't mount the data at both sites at the same time.


 Sadly, I can't right now advise a better solution than these, however,
 since
 it is the main problem I face in trying to bridge two data-centers and
 provide
 coherent and sensible file access.

 The best I can offer, right now, is xtreemfs[1] which will give you fair
 performance but no local caching, so no disconnected operation.

 Regards,
Daniel

 Footnotes:
 [1]  http://www.xtreemfs.org/

 --
 ✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155
 707
   ♽ made with 100 percent post-consumer electrons
 --
 SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
 Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] today's scary thought

2010-07-15 Thread Daniel Pittman
Nick Andrew n...@nick-andrew.net writes:
 On Thu, Jul 15, 2010 at 04:14:38PM +1000, Daniel Pittman wrote:

 Sadly, I can't right now advise a better solution than these, however,
 since it is the main problem I face in trying to bridge two data-centers
 and provide coherent and sensible file access.

 Try GlusterFS with mirroring and preference for the local filesystem.

It turns out this has a major problem, in the current iteration, for WAN use:

GlusterFS currently requires synchronous communication with all replicas for a
non-trivial number of operations, so you will pay a cross-WAN latency cost for
a whole bunch of (read-only) operations regardless — *and* have to wait for
the client to write over the WAN to the remote replica anyway.

(Also, with AFR you can cause the server to consume unbounded memory and die by
 writing local data faster than the WAN can flush it, if you don't require it
 to be synchronous in writing to remote replicas.)


RedHat are funding development of solid, WAN capable asynchronous replication
support, expected to land some time later this year, around the same time that
XtreemFS expect to have read/write mirroring of objects implemented.

Regards,
   Daniel

As it happens, I have just been investing non-trivial time investigating the
options available here. :)

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] today's scary thought

2010-07-15 Thread Daniel Pittman
Morgan Storey m...@morganstorey.com writes:

 What do you mean? it is a manual initial sync to get the files in sync (just
 copy the different files either way and work out which ones you need to
 trash or merge) Then startup your unison scripts and let the servers build
 there indexes then sync.

Ah.  Yes, that is fair, you can do that.  What conflict resolution strategy
does it use in that mode?  I presume some variant on preserve the lot, which
is what I would use.

Um, and yeah: you are dead right it can do that effectively.
Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] today's scary thought

2010-07-15 Thread Daniel Pittman
Jamie Wilkinson j...@spacepants.org writes:
 On 14 July 2010 23:14, Daniel Pittman dan...@rimspace.net wrote:

 Sadly, I can't right now advise a better solution than these, however,
 since it is the main problem I face in trying to bridge two data-centers
 and provide coherent and sensible file access.

 I think you're going to be out of luck without some fat short pipes to
 satisfy fast atomic commits to both sides.  The cloud way is to have your
 applications understand there's a replication delay and know how to deal
 with conflict resolutions, drop the atomicity and integrity constraints to
 gain some speed.

*nod*  Factors I am well aware of, but thank you for being explicit about
them.

 I suspect from your mention of file access then you're not dealing with
 *an* application, but *all of them* and your storage layer API is just
 POSIX, in which case I wish you well in your pipe procurement endeavours.

Our needs vary wildly; in some cases we do want POSIX style file access,
with a remote mirror for read-mostly speed-of-access purposes, or where we
have a write-mostly application inside the one data center, with fail-over
to the other.

In others we are quite happy with an eventual-consistency.  I mostly focused
on files here because suggesting that the OP investigate Riak or Cassandra
probably wouldn't fly when they wanted Microsoft Office to access it. ;)


 Random tangential brainstorm: if your application knew that your POSIX
 filesystem was being slowly replicated between two DCs, and knew to look in
 *both* for the same data, and was robust enough to handle the loss of one
 DC, then it ought to be able to pick up where it left off in the other DC
 modulo some journalling.  Again I suspect this isn't going to help you in
 the slightest, not knowing anything about your app :)

Actually, that is pretty much the model I expect we will use for the most
legacy application in our stack, in which we are probably stuck for the next
year or so with nothing but POSIX.

For the more agile applications my hope is that we can avoid that by, indeed,
having the applications aware of the replication issues and all, and using a
simple eventual-consistency or last-update-wins vector-clock approach.

Daniel

Also, lots of different apps, so I might well end up with multiple
solutions.  A good distributed POSIX FS with replication, eventual
consistency, some sensible conflict resolution model, and data center
awareness would have been easy enough to use though.

If I could have my pony. ;)

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] today's scary thought

2010-07-15 Thread Daniel Pittman
Nick Andrew n...@nick-andrew.net writes:
 On Fri, Jul 16, 2010 at 12:31:42PM +1000, Daniel Pittman wrote:

 Also, lots of different apps, so I might well end up with multiple
 solutions.

 This seems likely. Databases have different consistency requirements to
 people.

One of the attractions of Cassandra is that it allows the client to specify
the consistency level required, from none, through to every node ever, or
quorum, or whatever.

I need to look further at Riak to work out how well their model expresses the
same, although as they don't do cross-WAN out of the box it has a lesser
problem to contend with.


 A good distributed POSIX FS with replication, eventual consistency, some
 sensible conflict resolution model, and data center awareness would have
 been easy enough to use though.

 Conflict resolution is the problem. The less of that you want, the more
 synchronous your filesystem has to become - or expose more non-POSIX
 filesystem behaviour to applications.

*nod*  Very true.  I think, for most people, the Dropbox model of conflict
resolution would be great to have in a file system:

Find a conflict, generate two documents, one with each version.  Voila, you
just punted the hard problem up to a human.

Less good for machines, naturally, although a similar process can help.

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] today's scary thought

2010-07-14 Thread Daniel Pittman
Jeff Waugh j...@perkypants.org writes:
 quote who=Del

 Someone asked me today, as they often ask me about things Linux, if I had a
 Linux replacement for their favourite journal app that they run on their
 (windows) PC.  I asked what that journal app did, and was told:

 You can set it to track when you open files of various types [in other
 applications] and how long they are open for..  Further quizzing revealed
 that you can set it to record when those files were opened, saved, closed,
 and when and where any saved and backup copies were stored.

Wow.  What a useful tool for tracking what you do!

 I mentioned the security impacts of such an application, or even the fact
 that such an application was possible, and left it at that.

 Look around for Zeitgeist. :-)

...or snapshot 'ls -l /proc/[0-9]*/fd/' on a regular basis, or better still
use one of the task notification hooks that I understand are floating
around[1] to capture task creation and exit automatically.
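
A zero-effort version of the snapshot approach, if anyone actually wants it,
is just (log location up to you):

    # append a timestamped list of every open file, once a minute
    while sleep 60; do
        date
        ls -l /proc/[0-9]*/fd/ 2>/dev/null
    done >> "$HOME/open-files.log"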

Also, what on earth security implications did you see, Del, in being able to
see what files you have opened yourself?  It isn't like your applications
couldn't record this anyhow...

Daniel

Heck, my Emacs does keep a long history of the files I have opened, since I
like to be able to do backward-isearch in an LRU list to get at things I
worked on in the last few days...

Footnotes: 
[1]  ...in that I have seen occasional discussion of 'em on the kernel list,
 so presume they have floated out to have a user-space interface by this
 point, but know nothing beyond that.

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] linked folders show different files

2010-07-04 Thread Daniel Pittman
Troy Rollo t...@parrycarroll.com.au writes:
 On Thursday 01 July 2010 19:00:34 Daniel Pittman wrote:

 On which file system, without causing it serious errors? I can't say
 categorically that nothing allows it, but it is considered to be a serious
 violation of the semantics of the system by ... well, most things.

 You certainly used to be able to do it on the historical UNIX filesystems.

Do you mean you could, but it broke things, or you could, and it worked
fine, though?

I agree you could do it, just like you could unlink a directory.  The results
were not healthy, though, on any (historic) platform I am aware of.

[...]

 ...while you used to be able to break things as root, on some platforms, it
 wasn't ever considered a healthy thing to do — at least, to the best of my
 knowledge on the topic.

 It was always considered a bad idea, although in some limited circumstances
 useful.  Apparently some backup tools on OSX (or whatever it's called this
 week) do it.

TimeMachine, and apparently it does.  Rumor varies about it being possible to
do this without being TM under Snow Leopard...  Still, a modern system that
allows a cyclic graph in the file system.  Their VFS cache implementers must
love them for putting that into the system. :)

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] linked folders show different files

2010-07-01 Thread Daniel Pittman
Ben Donohue donoh...@icafe.com.au writes:

 I really don't understand what is going on here...

 I have a folder structure that is replicated several times by linking the
 base folder

A hard-link, or a soft-link?  I am guessing the latter, since you can't
hard-link a directory, but it isn't entirely clear.

If you mean cloned by hard-linking all the files then the answer is rather
different to what I am about to give you.[1]

 Some folders down within the master folder I have two files called 1.jpg (a
 picture of a computer mouse), and 2.jpg (a picture of a computer)

 When I look at the linked folders via konqueror and with thumbnails on, I see
 the files correctly.

 When I delete 1.jpg and then rename 2.jpg to 1.jpg, some folders show a
 picture of the mouse, others show the picture of the computer.

 What's going on here?

It could be that you are seeing a cache effect, where the system has a copy of
the file in either an HTTP cache (if you are accessing this via the web) or a
local icon cache.

 Everything should be a picture of a computer.  Why are the linked
 folders/files not updating with the changes of the file name?

Assuming they are soft-linked directories, so the change actually happens to
the one single real directory, cache effects.

 (browsing to the linux box via a windows computer show the correct computer
 image in all the linked structure)  I've turned off caching in konqueror.

Maybe it needs more of a kick to do the right thing?

 The reason I'm asking is that I'm importing the images into a joomla site
 and the images are now incorrect. I've traced it so far to the linking I
 think...

Given your description, it sounds suspiciously like HTTP caching, but it is
hard to be sure.  Does the access log for Joomla show that the image is being
downloaded by the client, not just a not modified response?

Does Joomla maintain an internal cache of, say, rescaled images that might
need to be flushed?

Daniel

Footnotes: 
[1]  Specifically, when you delete a file with hard links, you only remove
 that one name associated with it.  The other links are unaffected.

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] linked folders show different files

2010-07-01 Thread Daniel Pittman
Ben Donohue donoh...@icafe.com.au writes:

 I'm embarassed to say that I don't know whether the files/folders are hard
 or soft linked! How to tell?

OK.  I am going to start from the very basics here, so please don't be annoyed
if I am telling you what you already know — I figure better cover those
quickly in case you don't than make you ask again. ;)

In Unix there are three parts to a file: one is the raw data itself, which is
uninteresting.

The second is the inode, which contains things like the permissions, and a
list of where the data is found, and all.  The inode does *NOT* have a name.

The third is a directory entry, which is a record in a directory somewhere
that points to the inode, and which has a name.

A file actually has zero or more directory entries pointing to the inode,
which are used to keep track of it.  When there are zero it can still hang
around while the file is open, though, because the file handle references the
*inode*, not the directory entry.
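
You can see that last point for yourself in a shell, if you like:

] echo hello > demo.txt
] exec 3< demo.txt    # hold the file open on descriptor 3
] rm demo.txt         # zero names left, but the inode is still live
] cat <&3             # still prints hello
] exec 3<&-           # close it; only now is the space really freed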


Anyway, a hard-link is when you have more than one name pointing to the same
inode.  You can see this in the link count field, which is the second field
in 'ls -l', or in 'stat filename', or whatever.

A soft-link, or symlink, is more like a pointer: it is an inode that
contains a path string.  When you try to open it the kernel will read that
path string, then refer to that object instead.


So, the easy way to tell if your image is hard-linked or soft-linked is to
look at it with ls:

] ls -l example[12].jpg
-rw-rw-r-- 1 daniel daniel 0 Dec  7  2007 example1.jpg
-rw-rw-r-- 2 daniel daniel 0 Dec  7  2007 example2.jpg

In this case example1 has only one link, but example2 has two. :)


If you look at a normal file (eg: added by the software, not manually deleted
or changed) you should find out quickly if it has another hard-link or not. :)

[...]

 I'm trying to simplify what I'm doing to try to find out why this is
 happening. But it seems that Joomla is picking up the cached copy when an
 image is uploaded rather than the changed file.

FWIW, my guess about hard-links would be that you had something that looked
like this:

  /foo/bar/baz.jpg         => inode 12
  /host/whatever/baz.jpg   => inode 12

If you then replace the /foo/bar/baz.jpg *name* with another file, by deleting
it and renaming something to the same name then /host/whatever/baz.jpg is not
at all changed by this.

Which it doesn't really sound like is happening.

 If I start in a different area with a new file it works correctly. It's only
 when an existing image is overwritten with the same name that the old image
 shows up in Joomla but ALSO in Konqueror. Which makes me think it's not
 Joomla but something in Linux underneath somewhere. Really bizarre!

*nod*  That is very odd.  If you were actually overwriting it you *should* see
that content change, at least in the current directory. :/

Does it reflect changed content if you open it in some image display tool
/other/ than Konqueror there?  How about if you get the md5sum of the image
before and after?
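
That is, in the directory you are actually editing:

] md5sum 1.jpg 2.jpg
# ...delete 1.jpg, rename 2.jpg to 1.jpg, then:
] md5sum 1.jpg

If the new 1.jpg now carries the old 2.jpg checksum, the rename really did
happen on disk and whatever still shows the mouse is serving a cached copy
from somewhere.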

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] linked folders show different files

2010-07-01 Thread Daniel Pittman
Troy Rollo t...@parrycarroll.com.au writes:

 You can hard link a directory (historically done as root using /etc/link or
 /sbin/link rather than /usr/bin/ln or /bin/ln).

On which file system, without causing it serious errors?  I can't say
categorically that nothing allows it, but it is considered to be a serious
violation of the semantics of the system by ... well, most things.

Certainly all the common file system drivers in Linux, as well as by the core
VFS; see http://lxr.linux.no/linux+v2.6.34/fs/namei.c#L2423 for vfs_link:

2445        if (S_ISDIR(inode->i_mode))
2446                return -EPERM;

So, you could, but only if you go back far enough that unlink('/') worked,
along with those other nasty ways to break the file system and all. :)

 Doing so traditionally requires (1) root; and (2) a direct call to the
 link system call (which is what the old /etc/link and /sbin/link did).

...while you used to be able to break things as root, on some platforms, it
wasn't ever considered a healthy thing to do — at least, to the best of my
knowledge on the topic.

 However if this is indeed what has been done, then I can speculate as to
 what might have happened. I seem to recall that the Linux kernel caches the
 file system hierarchy

Nope.  That isn't even close to accurate; the way data is cached varies wildly
between different systems, but it isn't especially hierarchical; file data is
usually in the page cache indexed by inode.

 in which case there is might be a chance that the kernel cached hierarchy
 does not reflect what is on disk.

Well, if the file system was fundamentally broken it would certainly break
caching, but usually in vastly more nasty ways than this.

 Hard linking directories is a sufficiently unusual thing to do that it is
 possible such a cache was never tested against it.

Actually, it is impossible in anything resembling a recent Linux kernel as
noted, so this categorically *cannot* be the cause of the issue.

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-16 Thread Daniel Pittman
Adrian Chadd adr...@creative.net.au writes:

 You won't be able to. If you configured them up as a stripe (ie, no
 mirroring) with interleaving every x megabytes on each disk, you'll
 basically end up with a virtual hard disk with holes evenly spread out
 across 1/3rd of the image. I don't know of any (easy) tools to recover from
 that.

LVM, by default, is a boring old linear mapping, so he probably has two disks
worth of data ... starting a third (or whatever) of the way through the file
system.  So, no superblock on whatever.

 I can think of what I'd write to try and recover -something- but it'd
 involve writing a whole lot of rather hairy looking filesystem-scraping
 code. I'm sure there are tools to do this kind of partial data recovery
 but they're bound to be -very- expensive.

The 'testdisk' package available in Debian, and fully OSS, can do quite a lot
of data recovery without a file system.  It just block-scrapes the device.
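
For anyone who wants to try it, the block-scraper in that package is photorec;
a minimal sketch, with the device name as an example only:

] photorec /dev/sdb

It is interactive from there: pick the partition and the file types, and point
the recovered files at a directory on a *different* disk.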

[...]

 On Wed, Jun 16, 2010, Gerald C.Catling wrote:

 Many thanks to all that responded to try to solve this LVM problem.
 I could not recover any data from the crashed system. I could not find any
 method of mounting drive 2 or 3 as individual drives and the system would
 not create a volume group without the now non-existant first drive.  Once
 again, many thanks.  I will have to try RAID 1.

I strongly suggest that using some sort of RAID is absolutely worth the money.

I would also encourage you to try the testdisk tools to recover some of the
content of your device: you will probably get back more than nothing, and
that might be worth the hassle.

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-16 Thread Daniel Pittman
Amos Shapira amos.shap...@gmail.com writes:
 On 16 June 2010 16:26, Daniel Pittman dan...@rimspace.net wrote:

 LVM, by default, is a boring old linear mapping, so he probably has two disks
 worth of data ... starting a third (or whatever) of the way through the file
 system.  So, no superblock on whatever.

 Why no superblock? The ext3 filesystem (and I guess the most usual suspects)
 write multiple copies of the superblock across the entire data partition
 just for such cases.

Actually, they do.  My mistake.

[...]

 I can think of what I'd write to try and recover -something- but it'd
 involve writing a whole lot of rather hairy looking filesystem-scraping
 code. I'm sure there are tools to do this kind of partial data recovery
 but they're bound to be -very- expensive.

 The 'testdisk' package available in Debian, and fully OSS, can do quite a lot
 of data recovery without a file system.  It just block-scrapes the device.

 Right. I used it a couple of times and can tell that without file names it's
 a chore to wade through all the data and try to find which is important and
 which isn't. file is handy to do that (as in, e.g., run find ...| xargs
 file | grep ... and move all files of each type to their own directories).

No question there.  Thankfully, my only use for it has been recovering photos,
and those have nice internal meta-data to help with the process. :)
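
The sorting step itself is only a few lines of shell; a rough sketch, assuming
the recovered files all sit in the current directory:

for f in *; do
    [ -f "$f" ] || continue
    type=$(file -b --mime-type "$f" | tr / _)   # e.g. image_jpeg
    mkdir -p "by-type/$type"
    mv -- "$f" "by-type/$type/"
done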

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-14 Thread Daniel Pittman
Gerald C.Catling gcsgcatl...@bigpond.com writes:

 I am a PCLinuxos user and I have seen references to LVM here ( at SLUG)
 I have 3 drives LVM'd to give me 1.3TB of storage space on my server.
 The first drive of this set has died.

I am guessing that by LVM'd you mean concatenated together, no redundancy,
right?  So, basically, you lost one disk and you have lost (more or less) a
third of the data under the file system, etc.

I further assume that "first" means the one that contains the superblock, as
in the linearly first space on the disk.

 I was wondering if any of you Guru's could suggest a method of getting any
 remaing data from the LVM drives, that is drive 2 and 3, that are left.

I can identify three approaches:

One: Get the dead drive working long enough to actually recover the content
from the file system with all the data around.

That should work provided "died" means "has a bunch of bad sectors" rather
than "will not respond to SATA commands".


Two: Use something that scans the disk and looks for file content, then
extracts it.  This is unlikely to bring much joy, but might be better than
nothing.

I have used some of the tools packaged in Debian before, especially
'testdisk', with reasonable success, on *simple* cases like recover JPEG/RAW
images from CF cards.  For a complex case like a 1.3TB file system, I
wouldn't hold much hope for getting a *lot* of content back.


Three: talk to the upstream file system developers, and see if they can help
identify a mechanism that might recover data without the first chunk.


I suspect those are in decreasing order of what you get back, and other than
the first that will be very little.


Er, and there is another option: pay a data recovery company to do this.  It
shouldn't cost more than a few thousand dollars for a fairly simple case, and
might have a better recovery rate than the alternatives if, say, disk one
*isn't* responding, but they can get it back talking for a bit without too
much trouble.

 I have tried rebuilding the set, wg0, but the system want to reformat the
 drive wg0, just created. Is this formatting going to format the real drives
 and rather that just the LVM component?

All LVM does, in this case, is rewrite the write command so that it talks to
the appropriate bit of the underlying physical device.  So, yes, because there
is no difference between the two.



Anyway, for the future: if you concatenate drives, which is all that LVM does,
you *increase* the chance of total data loss in your system by the number of
devices; in your case — three disks, triple the chances you lose.

So, the take-away lesson is that if you intend to do this take one of these
three approaches:

1. Format each device as a separate file system, rather than concatenating
   them, so that loss of one device only takes away one set of data, not all
   three.  Penalty: you now have a PITA job using all that space.

2. Keep good backups, so that when (and it is when, not if) you lose a
   device you recover much more gracefully.

3. Use some sort of redundancy: software RAID is a pretty decent choice, and
   is pretty inexpensive these days.  Certainly, I bet that the extra few
   hundred dollars for a second set of disks is less than the cost of trying
   to recover all that data.
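
As a rough sketch of that third option (device names are examples only, and
these commands destroy whatever is currently on them):

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
    pvcreate /dev/md0            # put LVM on the mirror, not on the raw disks
    vgcreate vg0 /dev/md0
    lvcreate -L 400G -n data vg0
    mkfs.ext3 /dev/vg0/data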


Um, and sorry: it sucks that you are now probably going to lose all that data.

Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] LVM

2010-06-14 Thread Daniel Pittman
james j...@tigger.ws writes:
 On Tuesday 15 June 2010 10:00:03 slug-requ...@slug.org.au wrote:

  I am a PCLinuxos user and I have seen references to LVM here ( at SLUG) I
  have 3 drives LVM'd to give me 1.3TB of storage space on my server.  The
  first drive of this set has died.

 I am guessing that by LVM'd you mean concatenated together, no
 redundancy, right?  So, basically, you lost one disk and you have lost
 (more or less) a third of the data under the file system, etc.

 The stuff below is interesting and a reference, but this highlights my
 favourite rant: Seagate's 'ATA more than an interface' says multiple disks
 in a machine *will* result in a higher failure rate, maybe much higher.

Without needing reference to a vendor study, you can work this out yourself
with some very basic probability math, and the MTBF values from the hard
disks.
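
(For illustration, with made-up numbers: if each disk independently has a 3%
chance of dying in a given year, the chance that at least one of three dies is
1 - 0.97^3, or about 8.7%, so very nearly triple the single-disk figure.)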

Er, and watch out that things like bad batches of disks can result in failures
that are *not* independent events in probability terms.

 So raid is a less worse option than LVM.

They serve entirely different purposes, and have some cross-over; you would
see the same problem with a RAID0, or RAID-concatenated-space system as with
an LVM concatenated space system — or the same redundancy if you used the
LVM/DM RAID1 target as if you used the MD version of the same.

So, it isn't as simple as saying RAID is better than LVM without talking
about the additional details.

(...and, FWIW, my preference is *both* RAID and LVM. :)

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: LD_PRELOAD

2010-06-12 Thread Daniel Pittman
james j...@tigger.ws writes:
 On Sunday 13 June 2010 10:00:04 slug-requ...@slug.org.au wrote:

  I can't find any info on this, so it's prodly so obvious that only
  dumbkorfs ask :-)
  
  I have a program using a web cam. All is perfect except that I need to
  export LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so
  before running it.
  
  How can I compile the links into the code, rather than a PRELOAD env
  setting?

[...]

 I do have a wrapper, that does work, so thanks
 What I was trying to learn is how skype did do exactly that.
 2.0 had to be wrapped, but 2.1 (beta) uses the cam without preload.

Oh.  The answer is probably pretty simple: I bet that Skype built their new
version with support for the V4L 2 API rather than the long-deprecated version
one API.

Alternately, they could have implemented the same work-around internally, or
even linked in the compatibility code[1], or written dynamic detection for the
two API versions, or...

So, lots of ways.  This being a closed application you can't really know.
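
For reference, the wrapper approach mentioned above is only a couple of lines;
the library path is the one from the original post, and the program name is
just a placeholder:

#!/bin/sh
# preload the v4l1 compatibility shim, then hand over to the real program
export LD_PRELOAD=/usr/lib/libv4l/v4l1compat.so
exec /usr/bin/your-webcam-program "$@"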

Regards,
Daniel

Footnotes: 
[1]  ...because that is, y'know, open source.  Except they can't actually link
 that without breaking the license, so the situation is less clear.
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Postfix and PCRE filtering

2010-06-06 Thread Daniel Pittman
Erik de Castro Lopo mle+s...@mega-nerd.com writes:

 I'm messing postfix-pcre version 2.6.5-3 from Debian testing. In the
 /etc/postfix/pcre_table I have the following rule:

[...]

 However, even with all the above, I'm still getting emails with 'casino' in
 the subject line.  Anybody have any idea why PCRE is only working with
 postfix some of the time?

IIRC, header_checks see only raw headers, so anything encoded with RFC 2047
syntax might look nothing like that.  For example, 2047 base-64 encoded
headers seem popular in spam, and those could contain 'casino' without,
y'know, containing 'casino'.
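
For example, a base64-encoded subject arrives looking roughly like this:

  Subject: =?UTF-8?B?Q2FzaW5vIG9mZmVyIQ==?=

which a mail client decodes to Casino offer!, but which never shows the
literal string 'casino' to a raw header_checks regex.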

Daniel

I suggest you use something like amavisd-new which pushes the email through
all the relevant normalization before it applies rules like that.

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Postfix and PCRE filtering

2010-06-06 Thread Daniel Pittman
Jeremy Visser jer...@visser.name writes:
 On 06/06/10 20:10, Erik de Castro Lopo wrote:
 If I send email from a gmail account with the word 'casino' in the
 subject line, the email is rejected and the gmail account gets a
 'Delivery Status Notification' message.

 However, even with all the above, I'm still getting emails with
 'casino' in the subject line.

 Anybody have any idea why PCRE is only working with postfix some
 of the time?

 I have never set up Postfix filtering before, so this is probably a
 stupid question, but is it possible the rules you added are only
 matching when sent with your e-mail address in the “To:” header (rather
 than the “Delivered-To:” header)?

They are not in either: the {header,body}_checks in Postfix are very blunt
tools, inside the MTA.

[...]

 Sorry in advance if I’m sending you on a wild goose chase.

It might have helped if you had a better understanding of the tool you were
giving advice about...

 Other things you could try is piping the mail through a hex editor to see if
 there are any funny U+ characters lurking in there.

...and possibly about the data transfer formats, too, since this is extremely
unlikely: having a NUL byte embedded in the word would, y'know, show up (and
the U+0000 notation means the Unicode code point with the 16-bit value 0.)

I suspect you actually meant "check if the data is encoded in UCS2/UTF16", in
which you would see ASCII characters as a series of regular characters
separated by NUL bytes, because it is a 16-bit[1] encoding of Unicode.

...which you wouldn't try and transit through an email system even if it /was/
8-bit clean, which most of them are not.

Daniel

Footnotes: 
[1]  ...more or less, leaving aside the complications of the variable width
 encoding used.

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Oh I hate Shredder

2010-05-31 Thread Daniel Pittman
Peter Rundle pe...@aerodonetix.com.au writes:

 I did the right thing and updated my Linux Mint box with all the latest
 patches (yes I'm a security conscious goody two shoes). But unfortunately in
 my haste (orright laziness) I selected all software Thunderbird had been
 updated to the latest abomination called Shredder which I detest. How can I
 punt this latest version off my box and go back to one which doesn't think
 that it's a good idea to cover most of my screen in some non-shrinkable box
 full of message headers that I don't want to see.

Usually, by downloading the older package and installing it by hand.  Most
distributions make this reasonably practical, more or less.

Depending on the architecture you may also have access to older versions
through whatever command line package management tool Mint uses; many of them
can specify a specific version, and some have access to older versions.
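
On an apt-based system like Mint that looks roughly like this, assuming the
older build is still visible to apt (it often will not be):

  apt-cache policy thunderbird          # list the versions apt can still fetch
  apt-get install thunderbird=VERSION   # VERSION being whichever older build it listed
  echo thunderbird hold | dpkg --set-selections   # stop it being upgraded again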


That said, I suggest you learn how to resolve the problem: most distributions
don't support anything but (close to) the most recent releases of Mozilla
products, because upstream are not very helpful if you don't do that, and
because they are so incredibly complex and painful that doing it inside the
distribution is a losing game.

Daniel

In other words: even if you can temporarily work around it, at some point you
will have to bite the bullet and either move to a less awful mailer, or accept
the latest release.
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Error in Time() command

2010-05-23 Thread Daniel Pittman
Glen Turner g...@gdt.id.au writes:
 On Mon, 2010-05-24 at 09:02 +1000, Peter Chubb wrote:

 Actually it doesn't give the whole answer.

 Wow, thanks heaps Peter.

 tenzero: so there are 1000 (CONFIG_HZ) samples per second.

250, on my Debian/sid system running 2.6.32-5-openvz-amd64.  FWIW.
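
Assuming your distribution ships the kernel config under /boot, which Debian
does, you can check your own with:

] grep '^CONFIG_HZ' /boot/config-$(uname -r)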

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Virtualization - Whither goes thou?

2010-05-12 Thread Daniel Pittman
Nigel Allen prosm...@jamesons.com.au writes:

 I need to formulate a DRP for a customer and thought that I would ask the
 slug for it's collective wisdom.

[...]

 First thought was to max out the memory on two of the servers, one for normal
 running and one as a hot or warm standby, and the virtualize all of the
 servers onto the two machines.

If you can, having two machines in an active/active setup is much better: it
means that you have to spend less money on idle hardware, and it also means
that you don't face a *sudden* load on the passive machine.

The reason that last point is valuable is that the time you discover, say, a
disk is having trouble is when you start putting load on it, not when it sits
idle.  Guess when you really don't want to find out you have disk issues on
your second machine?

 An external consultant has already suggested doing this with VMware,
 installing the ESXi hypervisor on the two main servers and installing a NAS
 shared between the two systems (hot and cold) so that if the hot server
 fails, we can simply switch over to the cold server using the images from
 the NAS.

This would let you load-balance also, which is quite nice.



 Couple of things concern me about this approach. The first is using VMWare
 rather than a GPL solution.

*shrug*  You say you plan to run Win32 under this; you are going to need
binary PV drivers for the disk and network to get acceptable performance
anyway, so you are already looking down the barrel of non-GPL software.

 The second is where we would install the NAS. Physically, the office space
 is all under one roof but half the building has concrete floors and half has
 wooden. (The hot server is in the wooden main office, while the cold
 server was to go in the concrete floor area. There is also a firewall (a
 real one) in between the two areas).

In your server room, connected by at least one Gigabit link to the servers.

Your replicated NAS, of course, lives in your DR location, wherever that is,
since you don't want a DR solution that works as long as the server room never
catches fire[1].


 Questions:

 1) Can anyone offer any gotcha's, regardless of how obvious they may seem to
you?

ESXi hardware support is exciting, make sure you have capable hardware.

Pay for commercial support on whatever solution you end up with.  At the end
of year one, think about dropping it, but keep it until then.

Test.  If you don't test this stuff routinely it will never, ever work when
you need it to.

You need PV disk and network drivers to get the performance you expect.

You don't need a PV kernel under Linux, though it probably doesn't hurt:
almost all the cost comes from the disk and network, and almost everything has
PV drivers for those.


Make sure you understand what happens if you pull the network from the (or an)
active machine without otherwise turning it off.

Make sure you don't spend millions on the best servers, the best NAS, then
connect them together through a single network cable that gets cut, bringing
the entire thing to a grinding halt.


 2) Is there a GPL solution that fit's this scenario? Even if it's not a bare
metal hypervisor and needs an O/S. Remember it has to virtuaize both Server
2003 and CentOS

KVM can do what you want, but I don't believe there are PV disk drivers
available that are open source.  You need those.

 3) What's the minimum connection we would need between the NAS and the two
servers sharing it?

A 9600bps GSM modem, provided your users have very low expectations. ;)

More seriously: assuming your disk I/O use is low enough you could get away
with 100Mbit, but you really want Gigabit — and, given that you want to live
through failure, you want several Gigabit links between the NAS and the
servers so that a single *cable* failure doesn't take down your entire system.


 4) What kind of speed/bandwidth should we be looking at for the off-site
replication.

That depends entirely on how much data you write during normal operation, and
how big your acceptable window for data loss is.  Talk to the vendor of your
NAS for details.

Generally, though, you want something with very low *latency* more than you
want something with very high bandwidth: having a *safe* write means that both
the local and DR NAS have acked the write as being on disk.

If your latency is 10ms you have at least 10ms delay for every safe write.
If your latency is 20ms you double that, and cut your write performance in
half...
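
(To put rough, illustrative numbers on that: a single stream of fully
synchronous replicated writes at 10ms round-trip tops out around 100 safe
writes a second, and at 20ms around 50; throughput only comes back if you can
keep many writes in flight at once.)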

 I'll happily take anything else anyone would like to throw at this -
 suggestions, reading matter etc - it's not an area of great expertise for us
 having only paddled around the edges with Virtualbox.

This is *hard*.  Harder than it sounds.  Imagine it being as hard as you
think, and then it will likely be harder than that.

Daniel

No, seriously, still harder.  Don't forget to test it, and expect to find
things go pear shaped and die *anyway* during normal running

Re: [SLUG] new ubuntu 10.4

2010-04-30 Thread Daniel Pittman
Jim Donovan j...@aptnsw.org.au writes:

 I'm still trying to fit 700MB of it onto a 699MB blank CD-ROM.

https://help.ubuntu.com/community/Installation/FromUSBStick

Assuming you have a 700MB+ USB stick, of course, but those seem to be fairly
common these days.

HTH, HAND, etc.
Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] RAID5 Woes

2010-04-17 Thread Daniel Pittman
Kevin Fitzgerald k...@tcgtech.com.au writes:

 I need advice. In a nutshell I have a customer with a 5 Disk Software RAID 5
 on an old Fedora Box (The Archive Box). Recently, after a power failure, Two
 of the drives did not re-join the Array when the machine came back up. DMESG
 tells me that they are Not Fresh.

Take an image of the disks, because if you mess this up you lose the data.
The cost of three new disks and copying them won't hurt that much.

[...]

 Assuming the disks are there and OK, How do i Freshen them up and re-build
 the software RAID array?

From mdadm(8) in the 'Assemble' section:

  -f, --force
 Assemble  the  array even if the metadata on some devices appears to
 be out-of-date.  If mdadm cannot  find  enough  working  devices  to
 start the array, but can find some devices that are recorded as hav‐
 ing failed, then it will mark those devices as working so  that  the
 array can be started.  An array which requires --force to be started
 may contain data corruption.  Use it carefully.

 I need to be careful as I dont want to do more damage while trying to
 re-build.

Image the disks first.  There is no guarantee of safety in this.
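
A rough sketch of a sane order of operations, with device names as examples
only and the images going to separate storage with enough room:

  # image every member disk before touching the array
  ddrescue -d /dev/sdb /backup/sdb.img /backup/sdb.log   # or: dd conv=noerror,sync
  # ...repeat for each member disk...

  # then try the forced assemble with the not fresh members included
  mdadm --assemble --force /dev/md0 /dev/sd[bcdef]1
  fsck -n /dev/md0    # read-only check first, if the array holds a filesystem directly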

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: [chat] yet another reason to advocate Linux

2010-04-15 Thread Daniel Pittman
David Lloyd lloy0...@adam.com.au writes:

 Don't let this become a Linux only thing - whether one thinks Free as in
 FSF, open source as in the OSI or supports proprietary Microsoft...child
 labour is abhorrent regardless and has nothing to do with the freedom of
 software.

 It is, plain and simple, wrong.

...and don't mistake that this actually has much to do with those vendors: the
hardware you run Linux on is probably made in similar conditions, as are many
of the other facets.

(Also, don't mistake that just closing sweat-shops is the best answer, since a
 terrible job might be better than no job, and sometimes that is the result.
 Which really sucks, because those conditions *should* not exist, IMO.)

Daniel

Complexity, ain't it grand. :(

 -Original Message-
 From: meryl gnu...@aromagardens.com.au
 To: slug@slug.org.au
 Subject: [SLUG] Re: [chat] yet another reason to advocate Linux
 Date: Thu, 15 Apr 2010 16:40:54 +1000

 ... the point of the article was that MS  Apple are using exploitative
 child labour in sweatshop conditions. 

 It's an interesting article, well worth the read. 

 Meryl

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] slow webmail

2010-04-15 Thread Daniel Pittman
Jobst Schmalenbach jo...@barrett.com.au writes:

[...]

 this is not a flamebait
 I know that people like long uptimes, but rebooting **DOES** solve some
 problems  cause you have 888 days, have you considered???
 /this is not a flamebait

It would probably help this suggestion if you provided some supporting
evidence for your position.  My first question, as you might imagine, is
which problems does this solve?

My usual follow-up to that is: are you actually solving the problem, or just
the current pain — taking a pain killer rather than fixing the bad back — and
probably destroying your chance to find the real cause of the problem in the
process?


Anyway, without your providing something to back up your statement it is as
meaningless as someone asserting the opposite: that rebooting is never the
right answer. :)

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


[SLUG] Re: Time Pedantry

2010-04-06 Thread Daniel Pittman
Jamie Wilkinson j...@spacepants.org writes:
 On 1 April 2010 16:56, Daniel Pittman dan...@rimspace.net wrote:
 Nick Andrew n...@nick-andrew.net writes:
 On Thu, Apr 01, 2010 at 03:39:00PM +1100, Daniel Pittman wrote:

 If it was my call, I would probably do the same thing.  Way too many
 developers get simple things like this day has no 2:30AM or this day has
 two 2:00AMs wrong.

 That's why Daylight Savings is fundamentally evil. Too much time data is
 stored in non-canonical formats.

 ...but the real question is if we love or hate the GMT/UTC difference, and
 23:59:61?

 *cough* :60 *cough*

Well, I am glad someone was on the ball enough to notice that. ;)

IIRC, :61 is actually a possible but extremely unlikely time value, to account
for two leap-second adjustments required in a year, but a quick look around
suggests that memory was wrong.  So, :60 it is.

[...]

 (And, finally, for anyone who really wants to despair at the whole thing,
  I give you The Long, Painful History of Time, which is the best write-up
  I know of about the engineering difficulties of the topic:
  http://naggum.no/lugm-time.html
  )

 I for one am glad such pages exist.  I wish the inventors of time_t had read
 it.

I wish that an awful lot of people had spent an awful lot more time looking at
what other systems around them were doing, so that I didn't have this terrible
feeling that we are finally dragging our system up to the 1980s for the second
or third time.

Ah, well.
Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: Time Pedantry

2010-04-06 Thread Daniel Pittman
Robert Collins robe...@robertcollins.net writes:
 On Wed, 2010-04-07 at 12:27 +1000, Daniel Pittman wrote:

  ...but the real question is if we love or hate the GMT/UTC difference, and
  23:59:61?
 
  *cough* :60 *cough*
 
 Well, I am glad someone was on the ball enough to notice that. ;)
 
 IIRC, :61 is actually a possible but extremely unlikely time value, to 
 account
 for two leap-second adjustments required in a year, but a quick look around
 suggests that memory was wrong.  So, :60 it is.

 http://www.opengroup.org/onlinepubs/009695399/basedefs/time.h.html

 The range [0,60] seconds allows for positive or negative leap seconds.
 The formal definition of UTC does not permit double leap seconds, so all
 mention of double leap seconds has been removed, and the range shortened
 from the former [0,61] seconds seen in previous versions of POSIX.
 

Oh, /that/ was where it came from.  Thanks. :)
Daniel
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Best API/abstraction?

2010-04-06 Thread Daniel Pittman
Jeff Waugh j...@perkypants.org writes:
 quote who=Jamie Wilkinson

 I for one am glad such pages exist.  I wish the inventors of time_t had
 read it.

 So which language / library has a great abstraction for time and date stuff,
 helping you deal with the intricacies of this craziness?

None of them.  Even the good languages have nasty side-bits like a don't be
broken switch, and even a perfect language would still have the pain of
dealing with political, not technical, issues like timezone-associated dates.

Oh, and the fact that date math is *not* simple, since you can't convert
between various durations; a question such as "how many seconds in a week" can
only be answered "it varies"...


FWIW, the link I posted earlier was about time handling in Common Lisp, which
gets this less wrong than most platforms ... but that document was written
because the standard was imperfect.

Perl, meanwhile, has good support in various non-core library modules, but
many of those have things like a $Class::Date::DST_ADJUST value to determine
which behaviour you want for math involving DST and/or leap-seconds.


As an example of a well-documented set of complications and how they are
handled, the DateTime module does well:

http://search.cpan.org/~drolsky/DateTime-0.55/lib/DateTime.pm#How_Datetime_Math_is_Done

After you read through the 250 lines of warnings, complications, caveats, and
examples of how you can have two completely correct, valid, sensible results
that are absolutely in contradiction with each other...

Heh.

Time.  Boy, does it suck. :)
Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] OT: windows xp mpeg-2 decoder

2010-04-05 Thread Daniel Pittman
Peter Miller pmil...@opensource.org.au writes:

 Sorry about the off-topic post, but a member of my household has
 Stockholm Syndrome and chooses to use M$ products.  It transpires that
 XP doesn't include an MPEG-2 decoder as standard.  Yes, I know, broken
 by design.

 Can anyone tell me a safe and malware-free URL to download a free
 decoder?

http://wiki.multimedia.cx/index.php?title=Ffdshow

Not the direct link, because there is ... history in that project, but it
should do more or less what you want.  As a bonus: all the Windows video stuff
should now be much closer to being on par with Linux for ease of use.

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: SLUG Membership decline

2010-04-03 Thread Daniel Pittman
elliott-brennan elliottbren...@gmail.com writes:
 On Friday 2 April 2010 at 23:59:32, Daniel Pittman wrote a lot.

 I would just like to say Yeah. What he said.  Thanks Daniel.
 Saved me a lot of work. Do you mind if I put this in my magazine?

Happy to help.  Anyone can print my "why a Linux magazine is hard" rant
anywhere they like, so long as they send me a copy.  The irony of that would
be ... so worth it. ;)

Daniel

Yes, I know it was a joke.
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] SLUG Membership decline

2010-04-02 Thread Daniel Pittman
James Polley ja...@polley.org writes:
 On Fri, Apr 2, 2010 at 9:38 AM, Heracles herac...@iprimus.com.au wrote:

 Also, SLUG should consider producing a magazine for members filled with
 articles from members including tutorials, reviews of open source software
 and code snippets, updates on what members are working on and some
 basics. It could become a benefit of membership. It could be emailed as a
 pdf to financial members at their slug.org.au email address.

 Interesting idea.

 I see one major problem and one major ideological issue though.

 The problem is the same one we have with talks: you can't have content
 without someone taking the time to produce the content. We have enough
 trouble now just trying to round up two people to give a talk every
 month; I can't imagine getting written content would be any easier.

 On the other hand, a short article might be easier to produce than a
 45-minute talk - and there's no public speaking required, which no
 doubt would make it easier for some people to participate.

For what it is worth[1], my experience is that getting people to produce a
*good* written article is much, much harder than getting a good talk out of
them.  (A good talk is hard to get, incidentally.)

When you present you have feedback, and can see if your audience is falling
asleep because you are telling them what they already know, or if they are
confused because they don't understand a single word you say.[2]


In writing you start with tighter constraints on presentation, since written
language is much less forgiving and much more formal than spoken language.

Add to that the need to use a suitable typesetting system, a vocabulary that
benefits from hypertext and demands complex presentation[3], and you are
setting a pretty high bar to get a good result.


So, yeah, it makes it easier for some people to participate — but I don't
think that you get much benefit from that.  Writing complex and explanatory
documents is hard work for people who do it regularly, let alone those of us
who mostly write in less formal settings.


 If we could get the content I like this idea - except for the Emailed as a
 PDF bit. I think there are much better ways we could present this: for
 instance, an area of the SLUG website only accessible by financial members;
 or even a simple private mailing list.

Doing the later seems, to me, to be duplicating the strategy of LWN.  They do
OK with it, apparently, which is a positive.  The negative, of course, is that
their potential paying audience is about an Internet-million[4] times
bigger...


If I was pitching this sort of idea I would probably approach Linux Australia
with the idea of running it as an Australia-wide publication under their
banner.

Then you could approach all the different LUGs around the country, and perhaps
also NZ, to expand both your contributor-base and your audience.


Of course, if I was pitching it I would also go spend a lot of hours talking
to people who had published tech magazines over the years.  Incidentally,
these days none of them are doing it any more.  Every single tech magazine they worked for went
bust in the end...


Finally, keep in mind that having the magazine is going to demand an editorial
staff to produce it.  That is a *huge* amount of hard work, even for something
that only sees electronic distribution.

[...]

 There are compromises of course; I believe the SAGE-AU mailing list archives
 used to be members-only for 6 months and then released to the public
 (although now it seems the archives are completely members-only). We could
 perhaps investigate something similar.

SAGE-AU have been facing something of the same crisis y'all did here, although
more serious since they are aiming at higher goals of business-relevance.

Their exec are nice people, though, and will almost certainly tell you about
their experience with the changes, and why they made the decisions they did.

Daniel

Incidentally, before someone comes to the clever idea: if y'all do decide to
do a magazine, and you do decide to reuse content from the mailing list, make
sure you get appropriate licensing.

Which, in turn, means you get a real copyright lawyer to OK your strategy.

Nothing sucks worse than republishing, as an example[5], my content, then
finding out that I deliver a demand for significant licensing fees from you,
backed up with my own lawyer and legal threats.[6]

Footnotes: 
[1]  ...especially as I am nothing but a freeloader on the mailing
 list, and only there because I find the variety of discussions in
 multiple LUGs more likely to produce something interesting.

[2]  Even *bad* presenters have a vague awareness of this, although they might
 not recover well or change approach to reflect it.

[3]  Doing *good* layout of either GUI or command-line examples is hard.
 Good layout for things like configuration ... ouch.

[4]  In other words: much bigger, but I don't know how much.

[5]  ...using me as the subject

[SLUG] Re: Time Pedantry

2010-04-01 Thread Daniel Pittman
Peter Hardy pe...@hardy.dropbear.id.au writes:
 On Thu, 2010-04-01 at 16:56 +1100, Daniel Pittman wrote:
 ...but the real question is if we love or hate the GMT/UTC difference, and
 23:59:61?

 Also, do we hate the earthquake that changed the length of the day for 
 messing
 with our time-keeping?
 http://www.sciencedaily.com/releases/2010/03/100302084522.htm

[...]

 None of this would be a problem if we'd just switch to decimal time in a
 single timezone and call it a day.

Actually, we would still have to deal with the changing length of the day, and
with the increasing difference between our stellar and earth-based times. :)

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Re: Time Pedantry

2010-04-01 Thread Daniel Pittman
Rick Welykochy r...@praxis.com.au writes:
 Nick Andrew wrote:

 Indeed. The Earth's rotational period does vary slightly (effect of
 earthquakes notwithstanding). One reason time is hard to deal with
 sensibly is our insistence on synchronising it to the mean solar day.

 // off topic Easter Time time ramblings
 Isaac Asimov figured it out years ago. From memory ...

 Create a new calendar with 52 weeks of 7 days = 364 days.  Add one extra
 day, called World Day, at the end - 365 days.  World Day does not have a day
 of the week. In this way, every date falls on the same day of the week in
 every year.

Note that this doesn't address the GMT/UT[C01] issue, which is all about the
relationship between local time and time-as-seen-by-astronomers, or perhaps
more clearly, time as it relates to the actions of things other than our own
planet rotating.

 For leaps years, add an extra Leap day after World Day. It too has no day of
 the week. To make things precise, every 100 years, there is no Leap Day, but
 every 400 years there is.

 That pretty well matches up the solar year to the earth's rotation.

I don't find this convincing, FWIW, since it doesn't address issues like
"every five days" in a terribly meaningful way.  All it does is translate
those into one of two problems:

Either you have "every five days, except once a year when it is six or seven
days between instances", or "every five days, but which day changes every
year".

Unfortunately, we can't just stop the world for world day, which means that we
still have unpredictable day/date matching.

[...]

 I don't recall Asimov dealing with the tetchy problem of daylight time.

IIRC he thought it was a silly idea, as were the politically motivated time
zones.  Both views are ... arguably true. :)

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Which bank doesn't use Linux servers?

2010-03-31 Thread Daniel Pittman
Jake Anderson ya...@vapourforge.com writes:
 Jim Donovan wrote:
 I noticed the following on the Commonwealth netbank site this morning:

 NetBank, Mobile Banking and Telephone Banking will be unavailable between
 2am and 5am EST on Sunday 4 April 2010 to allow for the changeover from
 Australian Eastern Daylight Savings time to Australian Eastern Standard
 time. Please take this timeframe into consideration when completing your
 banking. For updates during this change, please visit:
 www.commbank.com.au/update. Please press NEXT to access NetBank.

 Assuming it wasn't an April Fool joke, perhaps it means their databases use
 local time and the logic won't permit transactions to be entered out of
 order such as might appear to be if one happened just before the changeover
 time and another less than an hour later.

 How quaint! I remember hearing once that Commonwealth Bank servers were
 always rebooted on Sundays so they'd be less likely to go down during the
 week.

 Odds are its more to do with their internal applications which are probably
 written on cobalt running on CP/M machines or something equally modern.
 Your probably lucky they even know time zones exist ;-.

My money would be on the very boring option, paranoia:

If you shut down as many of these systems as possible during the change over,
then those systems *can't* go wrong — because they are doing nothing.

If you leave them running then, hey, maybe something breaks.

So, if you want to look at the cost/benefit analysis the cost of a few hours
outage overnight is pretty low, especially if you can schedule it well in
advance, and even more so if you can do some other maintenance work at the
same time.

Meanwhile, no risk of things going wrong during the change-over, which is
always a huge PR fiasco even if nothing really bad happens.

Daniel

If it was my call, I would probably do the same thing.  Way too many
developers get simple things like this day has no 2:30AM or this day has
two 2:00AMs wrong.
-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Time Pedantry (was Re: [SLUG] Which bank doesn't use Linux servers?)

2010-03-31 Thread Daniel Pittman
Nick Andrew n...@nick-andrew.net writes:
 On Thu, Apr 01, 2010 at 03:39:00PM +1100, Daniel Pittman wrote:

 If it was my call, I would probably do the same thing.  Way too many
 developers get simple things like this day has no 2:30AM or this day has
 two 2:00AMs wrong.

 That's why Daylight Savings is fundamentally evil. Too much time data is
 stored in non-canonical formats.

...but the real question is if we love or hate the GMT/UTC difference, and
23:59:61?

Daniel

Also, do we hate the earthquake that changed the length of the day for messing
with our time-keeping?
http://www.sciencedaily.com/releases/2010/03/100302084522.htm

(And, finally, for anyone who really wants to despair at the whole thing,
 I give you The Long, Painful History of Time, which is the best write-up
 I know of about the engineering difficulties of the topic:
 http://naggum.no/lugm-time.html
 )

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] ubuntu network manager dns

2010-03-27 Thread Daniel Pittman
Ashley Maher ashley.ma...@didymodesigns.com.au writes:

 I am used to putting in the IP of a dns server into resolv.conf and all
 good. It was very frustrating to find that most of the listed tesltra
 dns servers failed.

 host allowed me to check what servers are working quickly.

FWIW, I got very sick of this years and years ago, so I took to installing the
distribution packaged version of bind on my machine and configuring the system
to use localhost for name resolution.

That gives a pretty solid resolver that just works(tm) anywhere, and while
there is a tiny increase in first-access latency to sites I can't say it has
any significant practical drawbacks.
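
On a Debian-ish box that is about two steps; note that NetworkManager or
dhclient may rewrite resolv.conf behind your back unless you tell them not to:

  apt-get install bind9                          # Debian/Ubuntu package name
  echo 'nameserver 127.0.0.1' > /etc/resolv.conf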

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] ubuntu network manager dns

2010-03-27 Thread Daniel Pittman
Nick Andrew n...@nick-andrew.net writes:
 On Sun, Mar 28, 2010 at 11:23:35AM +1100, Daniel Pittman wrote:

 FWIW, I got very sick of this years and years ago, so I took to installing
 the distribution packaged version of bind on my machine and configuring the
 system to use localhost for name resolution.

 dnscache works well for that too. See http://cr.yp.to/ or http://tinydns.org/
 or http://github.com/nickandrew/djbdns

Oh, cool.  Someone talked DJB into relicensing dnscache so that it can be
distributed patched, rather than requiring the original.  That must make life
better for folks like you who want to use it.


Anyway, I have a bit of an aversion to the software: aside from the (formerly)
awful license conditions, I had a couple of clients who ended up with really
nasty problems from dnscache.

(Executive summary: probably not a problem any more. ;)


Way back when a 256/64 kbit ADSL connection was fast and expensive, a couple
of places I supported used software that incorporated dnscache, and had the
ability to use DNS RBLs for inbound email.

So, it turns out that dnscache had a fixed ten second timeout for a response
from the upstream DNS server.  If it receives a reply outside that window it
will reject the reply; it also resends queries if they time out.


Apparently, though, if you manage to list enough RBLs you can get in a
situation where dnscache is sending requests, which all time out because the
link RTT is more than ten seconds — just from the load of sending
retransmitted queries.

Which meant that two or three emails inbound at once could result in a
situation where their network link got saturated, and stayed that way until
dnscache gave up on all the queries.

Gosh, was that fun.


Anyway, these days that is unlikely to be a problem: either the code will be
patched to play nice, or the increase in bandwidth makes the odds of breaking
pretty slim.

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] ubuntu network manager dns

2010-03-27 Thread Daniel Pittman
Martin Visser martinvisse...@gmail.com writes:

 Also be aware that there some ISPs tend to a few things with DNS

 1. Some do block access to their DNS from outside of their network.

You mean *recursive* DNS, not authoritative DNS, right?

[...]

 2. DNS can be used to help direct the subsequent application traffic to
particular managed servers. For instance, Internode provide unmetered
content to their customers, one of which is the ABC iView
service. However at least in some part, probably because of the use of
content-delivery networks, Internode ask to make you use their DNS.

Really?  That bites.  Most CDN networks have arrangements so that this works
regardless of which DNS server you use.

Can you point to an official document stating that?  I checked, and I found
some unofficial comments on Whirlpool, but nothing I consider authoritative
from Internode. :(

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] A little script help

2010-03-16 Thread Daniel Pittman
Amos Shapira amos.shap...@gmail.com writes:
 On 16 March 2010 18:18, Josh Smith joshua.smi...@optusnet.com.au wrote:

[...]

 I plan to use this for work for the (DRN) I have run out of things to
 put as my password and I am not aloud to have the same password twice

 I use pwgen to generate my passwords and a password vault to keep them
 as a backup.

I advise 'apg', which is packaged for most distributions.  While it has some
theoretical weaknesses[1], it is sufficiently strong that it should beat any
non-targeted attack.

The advantage of it, though, is that it generates passwords that follow the
same basic rules of structure that English words do, so they are remarkably
easy to memorise and use in practice.[2]
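
A typical invocation looks something like this (six pronounceable candidates,
ten to twelve characters each):

] apg -a 0 -m 10 -x 12 -n 6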

[...]

 2. Split the loop to three - one which picks a first letter from the
uppercase-only string, one which picks 5 characters from LORD and one
which picks 2 characters from the digits-only string.

Ideally, you would want to rewrite this as a function which takes a candidate
set of characters, picks a random character from it, then returns that.  Then
you could make it easy to template your passwords to whatever length and
complexity you need.
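
A minimal bash sketch of that idea; note that $RANDOM is convenient but not
cryptographically strong, and the modulo introduces a slight bias of its own:

#!/bin/bash

pick_from() {
    # print one character chosen at random from the string passed in
    local set=$1
    printf '%s' "${set:$(( RANDOM % ${#set} )):1}"
}

# e.g. one uppercase letter, five lowercase letters, two digits
pw=$(pick_from ABCDEFGHIJKLMNOPQRSTUVWXYZ)
for i in 1 2 3 4 5; do pw="$pw$(pick_from abcdefghijklmnopqrstuvwxyz)"; done
pw="$pw$(pick_from 0123456789)$(pick_from 0123456789)"
echo "$pw"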

Regards,
Daniel

Footnotes: 
[1]  It uses a generation model that doesn't produce perfectly random results,
 by selecting from the pool of valid results.  This introduces imbalances
 in the generated parts-of-speech in the passwords, allowing you to
 substantially reduce the search space over a fully random password
 generator.

 If you are concerned, and care more about probably-theoretical security
 then there is a Perl implementation of the same idea which uses a
 fully-random password generator, and then tests the generated password
 for compliance to the parts-of-speech model.

 That removes the bias, and is as theoretically secure as a fully random
 password when the same number of bits of entropy are included.
 Obviously, the use of filtering means less bits-per-character. :)

[2]  Every time I tell someone this, they don't believe me, because it doesn't
 look easy to memorise.  In practice, they never fail to be amazed how
 well they did committing it to memory after the fact. 

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] System admin graphing tools

2010-03-12 Thread Daniel Pittman
Aleksey Tsalolikhin atsaloli.t...@gmail.com writes:
 Ken Foskey wrote:

 We all know we should do it.  Provide a monitoring system to see how our
 system loads are going.

 I know you got lots of replies, Ken.  What did you end up using?

 You might want to also check out Splunk - it's pretty nifty and quite powerful
 and will show you how your system loads are going and a whole lot of other
 stuff too.

 Personally, I use Nagios for monitoring/alerting and Zenoss to get the pretty
 graphs of system loads, etc.

 Best,
 -at

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] HD Crash - recover?

2010-03-12 Thread Daniel Pittman
j blrown gonz...@fastmail.fm writes:

 Just had my 5 mth old 1 terrabyte Samsung HDD start clicking like crazy when
 I turned on PC. First and last Samsung HDD - I'll stick to Seagate from now
 on - never had a problem with them

...just for reference, every hard disk vendor out there has had a bad batch at
some time or another — and even without that, there will always be some
hardware that dies in the first six months of use.  Personally, one failure
wouldn't turn me away from a vendor forever, because it simply isn't enough
information for any reasonable conclusion.


 Is there a utility that might recover some of my data? Disk only partially
 used, divided into 2 x 500mb partitions of ext 4 (I think).

Can you actually read the device at all, or is it failing to respond with data
at all?  If the later, you might try putting it in a water-proof container, in
the freezer for a few hours, and then giving it another shot.

That may unstick it long enough to get the data off, or maybe just some of it,
without substantial risk.

Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Sticky bit on /var/tmp

2010-03-09 Thread Daniel Pittman
Craig Dibble cr...@rootdev.com writes:

 Does anyone have any thoughts on removing the sticky bit on the  /var/tmp
 directory and setting it to 777?

Why would you want to allow unprivileged users to delete temporary files
created by other unprivileged users?

 Something about it doesn't sit quite right with me but I can't so far find
 any negative impact of doing so.

Other than the marginally increased, and probably mostly theoretical in these
days of one-user-per-machine, security risk there isn't much.
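
(For anyone following along, the bit in question is the trailing 't' in the
directory's mode:

] ls -ld /var/tmp       # drwxrwxrwt, i.e. mode 1777
] chmod 0777 /var/tmp   # drops the sticky bit: anyone may delete anyone's files
] chmod 1777 /var/tmp   # puts it back

With it set, only a file's owner, the directory's owner, or root may delete or
rename entries, even though the directory itself is world-writable.)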

 The reason for this is that we have a large amount of data moving through
 that folder, in the order of more than 100GB. We have cleanup scripts which
 need to be able to remove files and folders to reclaim space every time a
 job finishes but the files are created by the user who launched the job, and
 the control process, and hence the cleanup, runs as a different user. And
 there we have a problem as the sticky bit prevents the cleanup from running
 and we have boxes falling over because their disks fill up.

 I'm fairly sure the first response to this will be Use sudo, as that  was
 our first response too, or store the data somewhere else. Both  of these are
 possible but difficult, the latter exceedingly so. We've  tried to think of
 every sensible alternative but the simplest fix  would be to just change the
 permissions and hope there isn't something  which is going to bite us as a
 result.

...er, is there any strong reason to run the cleanup script as some user other
than root?
Daniel

-- 
✣ Daniel Pittman✉ dan...@rimspace.net☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] Sticky bit on /var/tmp

2010-03-09 Thread Daniel Pittman
Craig Dibble cr...@rootdev.com writes:
 Quoting Peter Miller pmil...@opensource.org.au:
 On Wed, 2010-03-10 at 10:07 +1100, Craig Dibble wrote:
 Does anyone have any thoughts on removing the sticky bit on the
 /var/tmp directory and setting it to 777?

[...]

 If you're curious, this is a large render farm controlled by a homegrown job
 scheduler, the users submit jobs and the scheduler takes over - hence our
 current problem.

See, this is why I like tools like Condor, PBS, or the Sun Grid Engine.  You
get to let other people pay the big money to build features like cleaning up
for you, so you don't have to.  You get to play more golf.[1]

Might it be easier to provide a compatible API to users, and replace the
inside with one of these tools?

 We have pre and post hooks available though, so maybe doing a chmod or chown
 on the directory at the start and end of every job would suffice to keep
 everyone happy?

If you can do that, can't you just remove the files as root?  Unless you are
running on some strange platform that allows giving away file ownership to
unprivileged users or something...
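
If the hooks already run with enough privilege, a post-job chown is about as
small a change as it gets; a sketch, with the path layout and account names
being assumptions:

  # post-job hook: hand the job's files to the control user so the existing
  # (non-root) cleanup can delete them despite the sticky bit on /var/tmp
  chown -R control:control "/var/tmp/job-$JOBID"

or have the same hook simply rm -rf the job directory itself and skip the
separate cleanup pass entirely.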

Daniel

Footnotes: 
[1]  ...for some value of golf attractive to you, of course. ;)

-- 
✣ Daniel Pittman    ✉ dan...@rimspace.net    ☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] FAX server or gateway or some other solution

2010-03-08 Thread Daniel Pittman
david da...@kenpro.com.au writes:
 Ben Donohue wrote:

 I have a requirement to have a FAX at home.

 I have vmware ESXi with CentOS virtual machines and would like to create a
 FAX machine hooked into a modem to the phone line. However solutions like
 Hylafax require a serial port but VMware does not support serial
 ports. (yet)

 "September 3, 2007. innotek today released VirtualBox 1.5.0 for Windows and
 Linux. [snip] Version 1.5.0 also adds support for serial ports"

 That suggests that Virtualbox will do serial ports although it doesn't prove
 that it actually does - I haven't tried.  I do know from experience that you
 can switch an existing vmware virtual machine over to virtualbox and that it
 makes life a lot easier after you do.

I almost suggested something that made a similar assumption to yours: that
there was a host Linux or Windows OS which could, for example, run additional
software.

I noted the ESXi designation, which is the minimal bare-metal hypervisor, and
doesn't run on top of another platform — so, moving would be a major, not
minor, shift.

[...]

 I'm just looking for a solution mostly to SEND faxes... it's less of a must
 to be able to RECEIVE faxes. But I suppose it should work both ways (in and
 out) if possible.

[...]

 I'm still searching but open to ideas. Perhaps someone has found a perfect
 FAX solution already?

Last time I needed personal fax I used these folks: http://www.utbox.net/

Honestly, after dealing with faxing, and Hylafax, for work, paying someone else
to care still sounds like a *really* good deal to me.

Daniel
-- 
✣ Daniel Pittman    ✉ dan...@rimspace.net    ☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] mount LVM from Ubuntu live CD

2010-02-20 Thread Daniel Pittman
Michael Chesterton che...@chesterton.id.au writes:
 On 19/02/2010, at 1:41 PM, Daniel Pittman wrote:

 Try booting the kernel with 'init=/bin/bash' on the command line, and then:
 
 ] mount / -o remount,rw
 ] passwd root  # ...and give it a good password
 ] mount / -o remount,or
 ] sync; sync; sync
 # wait thirty seconds, because paranoia never hurts
 ] sync; sync; sync; reboot
 
 That should get you past the problem, at least as far as the next issue.

 i guess that's mount / -o remount,ro

Yup.

 I'm curious about the order of the read-only command, and the syncs. I did
 assume there would be nothing to sync on a read-only file system, but I take
 it sync works below the file system level?

sync instructs the kernel to flush out dirty blocks now; indeed, a read-only
file system generates no dirty blocks, but while you had it mounted read-write
you would have generated them.

Remounting read-only doesn't necessarily flush all the dirty data, so you need
to trigger that manually.  In theory, one sync should do it; in practice,
this has varied over the years, so the ultra-paranoid version certainly
doesn't hurt. :)
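
If you want something more convincing than counting to thirty, the kernel
exposes the outstanding dirty data directly; a quick check, assuming /proc is
mounted:

  # both figures should fall to (near) zero once the flush has finished
  grep -E '^(Dirty|Writeback):' /proc/meminfo

Once they sit at zero the page cache, at least, has nothing left to write.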

Daniel
-- 
✣ Daniel Pittman    ✉ dan...@rimspace.net    ☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] mount LVM from Ubuntu live CD

2010-02-20 Thread Daniel Pittman
Jeremy Visser jer...@visser.name writes:
 On 19/02/10 13:41, Daniel Pittman wrote:

 ] mount / -o remount,rw
 ] passwd root  # ...and give it a good password
 ] mount / -o remount,ro
 ] sync; sync; sync
 # wait thirty seconds, because paranoia never hurts
 ] sync; sync; sync; reboot

 Just be aware that you don't get a lot of nice things like, oh, some
 of the flush on shutdown behaviour that you do in a normal boot.

 Shutting down from the GUI, or typing 'halt' isn't magic. It doesn't
 magically do anything that sync doesn't. How else do you think that the
 logic works when you shut down?

...perhaps my wording wasn't clear, as you seem to be restating my point?

Daniel

-- 
✣ Daniel Pittman    ✉ dan...@rimspace.net    ☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] mount LVM from Ubuntu live CD

2010-02-20 Thread Daniel Pittman
Martin Visser martinvisse...@gmail.com writes:

 I don't get how sync will write anything to a ro filesystem.
 That seems to be to break a fundamental kernel and filesystem principle.

...ah.  Um, it doesn't write anything to the file system, as such.  It
triggers the dirty pages in the kernel cache to write out to their block
devices.

The read-only switch in the file system, and to some degree the file system
itself, sit at a higher level, so they are not especially involved here.

(...and once the file system is read only it *should* not generate any more
 dirty blocks, although occasionally bugs could trigger this.)

 I would have thought the remount would either force a flush of dirty blocks
 before it switches to ro, or alternatively that those blocks still dirty at
 the time of the remount would end up in the bit bucket.

Nope.  Not least because you *can't* make that assurance: the blocks written
may be in flight on the HBA, or even the network, between the file system and
the magnetic storage.

 Also I have seen this 3 sync incantation before.

It dates back a *long* time, well before Linux.  These days it is seldom, if
ever, necessary, but old habits die hard.

 It seems to me that all you are doing is playing snap with processes that
 might have stuff to write that hasn't been flushed yet.

If there are active processes writing, sure.  It used to be, once upon a time,
that sync wasn't guaranteed to write every dirty block on the system.

(Theoretically, with enough block layers between the magnetic storage and the
 page cache you could perhaps still race to the same conclusion or so, but
 probably not.)

 After any sync and before the final shutdown I presume any running process
 is at liberty to create new dirty blocks that may or may not make it to disk
 in time.

Correct.  If you mount the file system read-only, though, you can be confident
that no *new* dirty blocks are being generated by that file system, absent
bugs.
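
If you want to verify that, the current mount options for the root file system
are visible even from a bare init=/bin/bash shell, provided /proc is mounted
(mount -t proc proc /proc if it isn't); a small sketch:

  # prints the options for /, e.g. "ro,relatime"
  awk '$2 == "/" { print $4 }' /proc/mounts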

Daniel

-- 
✣ Daniel Pittman    ✉ dan...@rimspace.net    ☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


Re: [SLUG] RAID and LVM

2010-02-19 Thread Daniel Pittman
Tony Sceats tony.sce...@gmail.com writes:

 Slower, though ... is a bit of a strange claim.  Not because it is false,
 but because the answer is complex: you can, for example, double read speed
 and halve write speed, using a two disk RAID 1 array ... in the ideal case.

 I must say I'm curious about this, because I have always assumed that for a
 RAID 1 the write speed would be roughly the same as a single disk, not
 halved..

Sorry, you are quite right.  It should write at approximately the speed of a
single disk, and read at twice the speed.  More or less.  I shouldn't post
while I have a cold, because it makes my thinking bits not work.

[...]

 If this is true, I guess the reason would be that the same data travels over
 the same bus twice before the operation can be said to be completed,
 therefore halving your write speed.

Bus contention is the only point at which mirroring actually cuts write speed
below that of a single disk, and hitting it takes more than two disks even on
some of the less capable modern systems.
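
The two-times read figure is the ideal case; on Linux md RAID 1 a single
sequential reader mostly stays on one member, and the win shows up once there
are concurrent readers that md can spread across the disks.  A rough way to
see it, with the device name and sizes as assumptions, run as root on an
otherwise idle box:

  # drop caches so we measure the disks rather than RAM
  echo 3 > /proc/sys/vm/drop_caches
  # two readers at different offsets; md can serve them from different members
  dd if=/dev/md0 of=/dev/null bs=1M count=2048 &
  dd if=/dev/md0 of=/dev/null bs=1M count=2048 skip=8192 &
  wait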

Sorry.
  Daniel
-- 
✣ Daniel Pittman    ✉ dan...@rimspace.net    ☎ +61 401 155 707
   ♽ made with 100 percent post-consumer electrons
--
SLUG - Sydney Linux User's Group Mailing List - http://slug.org.au/
Subscription info and FAQs: http://slug.org.au/faq/mailinglists.html


  1   2   3   4   5   6   7   >