Re: [gentoo-user] syslog-ng: how to read the log files

2015-05-04 Thread lee
Canek Peláez Valdés can...@gmail.com writes:

 On Sun, Feb 22, 2015 at 6:41 PM, lee l...@yagibdah.de wrote:

 Neil Bothwick n...@digimed.co.uk writes:

  On Wed, 18 Feb 2015 21:49:54 +0100, lee wrote:
 
   I wonder if the OP is using systemd and trying to read the journal
   files?
 
  Nooo, I hate systemd ...
 
  What good are log files you can't read?
 
  You can't read syslog-ng log files without some reading software,
 usually
  a combination of cat, grep and less. systemd does it all with
 journalctl.
 
  There are good reasons to not use systemd, this isn't one of them.

 To me it is one of the good reasons, and an important one.  Plain text
 can almost always be read without further ado, be it from rescue
 systems you booted or with software available on different operating
 systems.  It can also be processed with scripts and sent as email.
 You can probably even read it on your cell phone.  You can still read
 log files that were created 20 years ago when they are plain text.

 Can you do all that with the binary files created by systemd?

 Yes, you can.

You can predict the next 20 years?

 I can't even read them on a working system.

 If that's true (which I highly doubt, more probably you don't know how to
 read them), then it's a bug and should be reported and fixed.

I read log files with less.  The bug is that systemd uses some sort of
binary files, and they aren't going to fix it.  They won't even fix
their misunderstanding of what "disabled" means.  So why make bug
reports?


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] syslog-ng: how to read the log files

2015-05-04 Thread lee
Rich Freeman ri...@gentoo.org writes:

 On Sun, Feb 22, 2015 at 6:41 PM, lee l...@yagibdah.de wrote:

 To me it is one of the good reasons, and an important one.  Plain text
 can almost always be read without further ado, be it from rescue
 systems you booted or with software available on different operating
 systems.  It can also be processed with scripts and sent as email.
 You can probably even read it on your cell phone.  You can still read
 log files that were created 20 years ago when they are plain text.

 Doing any of that stuff requires the use of software capable of
 reading text files.  It isn't like you can just interpret the magnetic
 fields on your disk with your eyes.

Yes, and it doesn't seem very likely that it'll become impossible to
read text files in the next 20 years.

 Sure, there are a lot more utilities that can read text files than
 journal files, but you just need to arrange to have them handy.
 They'll be ubiquitous before long since every distro around will end
 up needing them.

Hopefully not, systemd is a bad thing for many reasons.

 Can you do all that with the binary files created by systemd?  I can't
 even read them on a working system.


 You just type journalctl to read the live system logs.  For offline
 use you just type journalctl --file=filename.  Or you can just run
 strings on the file I imagine if you're desperate.  If it doesn't work
 on a working system then your system isn't working.
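For what it's worth, the reading workflow Rich describes can be sketched like this (the paths and unit name are examples, and the commands assume a systemd machine, so don't paste blindly):

```shell
journalctl -b                                 # everything logged since this boot
journalctl -u sshd.service --since yesterday  # one unit, time-filtered
journalctl --file=/mnt/rescue/system.journal  # read an offline journal copy
strings /mnt/rescue/system.journal | less     # crude last resort without journalctl
```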

See, people already claim that when something that comes from systemd isn't
working, the system isn't working.  Unfortunately, they overlook that when
things in systemd don't work by design, that's bad design or a problem of
systemd, rather than the system not working.


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] syslog-ng: how to read the log files

2015-05-04 Thread lee
Marc Joliet mar...@gmx.de writes:

 Can you do all that with the binary files created by systemd?  I can't
 even read them on a working system.

 What Canek and Rich already said is good, but I'll just add this: it's not like
 you can't run a classic syslog implementation alongside the systemd journal.
 On my systems, by *default*, syslog-ng kept working as usual, getting the logs
 from the systemd journal.  If you want to go further, you can even configure
 the journal to not store logs permanently, so that you *only* end up with
 plain-text logs on your system (Duncan on gentoo-amd64 went this way).
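As far as I understand it, the journal-to-syslog-only setup Marc describes comes down to a couple of lines in journald.conf (a sketch, not a tested config):

```ini
# /etc/systemd/journald.conf (sketch)
[Journal]
Storage=volatile      # keep the journal in RAM only, nothing on disk
ForwardToSyslog=yes   # hand every message to syslog-ng for plain-text logs
```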

 So no, the format that the systemd journal uses is most decidedly *not* a
 reason against using systemd.

It is only one of the many reasons.  I don't find it advantageous to
have to waste additional resources to be able to read the log files.

 Personally, I'm probably going to uninstall syslog-ng, because journalctl is
 *such* a nice way to read logs, so why run something whose output I'll never
 read again?

If you like it, nobody prevents you from using it.  It's good to have
many options.  Just don't force others to use it as well.


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.



Re: [gentoo-user] Hard drive storage questions

2015-05-04 Thread Dale
Dale wrote:
 Howdy,

   SNIP 

 Dale



 P. S.


 Filesystem Size  Used AvailUse% Mounted on
 /dev/mapper/Home2-Home2  2.7T 1.8T  945G  66% /home





Well, I read the replies a few times and I think it is best to just add a
new drive.  Heck, I've already had a 3TB drive fail.  Anyway, I also
need to look into some sort of backup system.  I used to do this with
DVDs but with this much stuff, that just isn't a good idea, not to
mention that DVDs have their own issues.  I may take a peek into a RAID
setup since, really, that is about the best if not the only way to do it.

Thanks all for the replies.

Dale

:-)  :-) 



Re: [gentoo-user] Hard drive storage questions

2015-05-04 Thread Dale
Neil Bothwick wrote:
 On Wed, 29 Apr 2015 08:13:41 +0200, Alan McKinnon wrote:

  An alternative is to create a new volume group on the new disk and
  mount LVs at various points in your home directory. That way you get
  the extra space and much of the flexibility without the risk of a
  failure on a single drive taking out data on both. However, if you
  are concerned about data loss, you should be using RAID at a minimum,
  preferably with an error-detecting filesystem.

 I've used that scheme myself in the past. You do get the increased space
 but you don't get much in the way of flexibility. And it gets COMPLICATED
 really quickly.

 It certainly can, but for a simple two-drive home system it shouldn't get
 out of hand. However, it does avoid the problem of one disk's errors
 taking out both disks' data.

Yea, right now, I'm only using two drives.  One for the OS and one for
/home.  I have a third drive but it isn't in use.  I'm thinking about
moving everything but the videos to that drive, 750GB, and leave just
the videos on the large 3TB drive.  It'll free up a *little* space too.




 To get around the situation of one drive almost full and the other
 having lots of space, folks often use symlinked directories, which you
 forget about and no-one else can figure out what you did...

 I wasn't suggesting symlinks, just LVs mounted at appropriate points. It
 rather depends on the spread of Dale's data. If he just needs extra space
 for his videos, he could get a new drive and mount it at ~/videos.

The bulk of the space is used by the videos.  It's everything from TV
shows to movies to youtube howtos.  I'm using roughly 1.8TB on the drive
and the videos take up roughly 1.7TB of that space.  My camera pics only
use 21GBs of space.  Rest is basically a rounding error.  :/


 It all smacks of the old saw:

 For any non-trivial problem, there is always at least one solution that
 is simple, elegant, and wrong.

 :-)

 I consider what I suggested somewhat simple but far from elegant. Often
 though, it's a lot less work in the long run to go for the initially more
 complex solution. If Dale is worried about the likelihood of disk
 failure, he really should be using RAID - either MDRAID under LVM or one
 of the next-gen filesystems.



I really do need to set up RAID at least for some stuff that I may not
be able to get back.  Some videos I have are no longer available.   What
I wish, I had a second puter in a outbuilding that I could copy to over
ethernet or something.  May help in the event of a house fire etc.

Dale

:-)  :-)



Re: [gentoo-user] Hard drive storage questions

2015-05-04 Thread Neil Bothwick
On Mon, 04 May 2015 02:39:10 -0500, Dale wrote:

  I wasn't suggesting symlinks, just LVs mounted at appropriate points.
  It rather depends on the spread of Dale's data. If he just needs
  extra space for his videos, he could get a new drive and mount it at
  ~/videos.
 
 The bulk of the space is used by the videos.  It's everything from TV
 shows to movies to youtube howtos.  I'm using roughly 1.8TB on the drive
 and the videos take up roughly 1.7TB of that space.  My camera pics only
 use 21GBs of space.  Rest is basically a rounding error.  :/

You need to separate those anyway, for backup purposes. Anything you
downloaded, you can usually download again, so you only need a list of
the files to be able to find them again.

On the other hand, your photos are irreplaceable and need to be backed up.
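The "keep a list of the files" idea can be sketched in a line or two of shell. The directory and file names here are made up for the demo; point VIDEO_DIR at the real collection instead:

```shell
# Sketch: record a manifest of replaceable downloads instead of backing them up.
VIDEO_DIR=demo-videos                         # stand-in for the real ~/videos
mkdir -p "$VIDEO_DIR/howtos"
: > "$VIDEO_DIR/howtos/lvm-basics.mp4"        # sample file so the sketch runs
find "$VIDEO_DIR" -type f | sort > video-manifest.txt
cat video-manifest.txt                        # this small text file IS the backup
```

The manifest is tiny, so it can ride along with the photo backups for free.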
 
 I really do need to set up RAID at least for some stuff that I may not
 be able to get back.  Some videos I have are no longer available.

RAID is not a backup solution.

 What
 I wish, I had a second puter in a outbuilding that I could copy to over
 ethernet or something.  May help in the event of a house fire etc.

You have, it's called Amazon S3 :) It's a lot cheaper than a second
computer, and a lot more reliable.
 

-- 
Neil Bothwick

Two rights don't make a wrong, they make an airplane.




Re: [gentoo-user] Hard drive storage questions

2015-05-04 Thread Mick
On Monday 04 May 2015 08:46:26 Neil Bothwick wrote:
 On Mon, 04 May 2015 02:39:10 -0500, Dale wrote:

  I really do need to set up RAID at least for some stuff that I may not
  be able to get back.  Some videos I have are no longer available.
 
 RAID is not a backup solution.

Not only is RAID 1 not a backup solution, because it offers redundancy 
rather than a separate copy, but under certain scenarios you have a much 
higher chance of losing your data when the first drive fails.  If you 
bought two (or more) drives at the same time and built a RAID from them, 
their failure behaviour could be quite similar, given the same construction 
and age.  On many occasions the last healthy drive fails just as you try to 
rebuild the RAID.

-- 
Regards,
Mick




Re: [gentoo-user] Hard drive storage questions

2015-05-04 Thread Dale
Neil Bothwick wrote:
 On Mon, 04 May 2015 02:39:10 -0500, Dale wrote:

 I wasn't suggesting symlinks, just LVs mounted at appropriate points.
 It rather depends on the spread of Dale's data. If he just needs
 extra space for his videos, he could get a new drive and mount it at
 ~/videos.

 The bulk of the space is used by the videos.  It's everything from TV
 shows to movies to youtube howtos.  I'm using roughly 1.8TB on the drive
 and the videos take up roughly 1.7TB of that space.  My camera pics only
 use 21GBs of space.  Rest is basically a rounding error.  :/

 You need to separate those anyway, for backup purposes. Anything you
 downloaded, you can usually download again, so you only need a list of
 the files to be able to find them again.

 On the other hand, you photos are irreplaceable and need to be backed up.

Well, some videos aren't available either.  I'd hate to know I had to
find some of the ones that are available.  Some take some diggin IF I
can even remember some of them.

My pics I backup to DVDs, two sets just in case.  I keep those in a
outbuilding.  If everything here burns, I'm likely gone anyway.



 
 I really do need to set up RAID at least for some stuff that I may not
 be able to get back.  Some videos I have are no longer available.

 RAID is not a backup solution.

True but at least it would help if a drive fails.  I've been there a
couple times.



 What
 I wish, I had a second puter in a outbuilding that I could copy to over
 ethernet or something.  May help in the event of a house fire etc.

 You have, it's called Amazon S3 :) It's a lot cheaper than a second
 computer, and a lot more reliable.
 



My internet is way too slow for that.  It would take weeks, maybe a month,
to upload all this stuff.  I have DSL but it is the basic package.  If I
were on cable or had really fast DSL, maybe.  Thing is, I really don't
want some of my stuff on the internet anyway.  ;-)

I'll come up with something tho.

Dale

:-)  :-)



Re: [gentoo-user] Hard drive storage questions

2015-05-04 Thread Dale
Mick wrote:
 On Monday 04 May 2015 08:46:26 Neil Bothwick wrote:
 On Mon, 04 May 2015 02:39:10 -0500, Dale wrote:

 I really do need to set up RAID at least for some stuff that I may not
 be able to get back.  Some videos I have are no longer available.

 RAID is not a backup solution.

 Not only is RAID 1 not a backup solution, because it offers redundancy
 rather than a separate copy, but under certain scenarios you have a much
 higher chance of losing your data when the first drive fails.  If you
 bought two (or more) drives at the same time and built a RAID from them,
 their failure behaviour could be quite similar, given the same construction
 and age.  On many occasions the last healthy drive fails just as you try to
 rebuild the RAID.



I think this has happened to folks on this list.  I've read about this
somewhere before.  It makes sense too.  I'd like to have two different
brands of drives if I could.  That should spread things out, maybe.

Dale

:-)  :-)



Re: [gentoo-user] Difference between normal distcc and distcc with pump?

2015-05-04 Thread Fernando Rodriguez
On Sunday, May 03, 2015 11:59:08 PM Walter Dnes wrote:
 On Sun, May 03, 2015 at 02:57:46PM -0400, Fernando Rodriguez wrote
 
  Some packages do custom preprocessing and other weird things during
  the build process that cause problems with pump mode since it caches
  copies of the unmodified headers. If you're lucky it just fails (and
  usually falls back on compiling locally), if you're not then it may
  succeed and you'll get runtime bugs. I haven't found a package yet
  that fails without pump mode as long as your CFLAGS are set properly.
 
   Seamonkey fails during the build process.  Two tries, and the build
 log was 74,046 bytes each time.  I have an Intel x86_64 as the host, and
 an Atom i686 (32-bit only) as the client.  Given your description, I may
 drop pump altogether from my xmerge script.  I'll unmerge
 seamonkey-bin, and try distcc-building seamonkey from source, without
 pump, Monday when I have more time.  Here are a few lines from the
 failed build log, using pump...
 
 Executing: gcc -o nsinstall_real -march=atom -mtune=atom -fstack-protector -pipe -mno-avx -DXP_UNIX -MD -MP -MF .deps/nsinstall_real.pp -O2 -DUNICODE -D_UNICODE -Wl,-O1 -Wl,--as-needed host_nsinstall.o host_pathsub.o
 
 /usr/lib/gcc/i686-pc-linux-gnu/4.8.4/../../../../i686-pc-linux-gnu/bin/ld: i386:x86-64 architecture of input file `host_nsinstall.o' is incompatible with i386 output
 
 /usr/lib/gcc/i686-pc-linux-gnu/4.8.4/../../../../i686-pc-linux-gnu/bin/ld: i386:x86-64 architecture of input file `host_pathsub.o' is incompatible with i386 output
 
 /usr/lib/gcc/i686-pc-linux-gnu/4.8.4/../../../../i686-pc-linux-gnu/bin/ld: host_nsinstall.o: file class ELFCLASS64 incompatible with ELFCLASS32
 
 /usr/lib/gcc/i686-pc-linux-gnu/4.8.4/../../../../i686-pc-linux-gnu/bin/ld: final link failed: File in wrong format
 
 
 

The error in the log that you posted earlier looks like it was caused by pump 
mode; this one I'm not so sure about.

It looks like you're not using the cross compiler on the host, as it would not 
generate 64-bit code. Did you add -m32 to your CFLAGS on the client box? You 
may also need to set the custom-cflags USE flag. Can you verify that it is 
using the cross compiler on the host?

I'm not sure exactly what the Gentoo-recommended distcc/cross-compile setup 
is, but if you do it like I suggested in your other thread (using the host's 
64-bit compiler) it should work. Look at the links you got under 
/usr/lib/distcc/bin. All you need to do is create scripts on the host with the 
exact same names and have them execute the compiler that you want with the 
options you want (I just have it execute the 64-bit compiler with -m32). Then 
make sure that distccd (on the host) finds them before the actual compiler by 
putting them in the PATH environment variable before anything else. For that 
you may need to modify the init script (or unit file if using systemd), or 
just start distccd manually.

A simpler hack is to just delete the c++, cc, gcc, and g++ symlinks from the 
/usr/lib/distcc/bin directory. That will force distcc to only trap the 
compiler invocations that use the full compiler name and end up using the 
cross compiler on the host, but if you do this you may end up compiling more 
stuff locally.
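The wrapper idea above can be sketched like this. The target triple and directory name are assumptions for illustration, not values confirmed in the thread:

```shell
# Sketch: create a "gcc" wrapper that distccd finds ahead of the real compiler.
mkdir -p ./distcc-wrappers
cat > ./distcc-wrappers/gcc <<'EOF'
#!/bin/sh
# Every plain "gcc" invocation becomes the 64-bit compiler forced to 32-bit.
exec x86_64-pc-linux-gnu-gcc -m32 "$@"
EOF
chmod +x ./distcc-wrappers/gcc
# Then prepend this directory to PATH in distccd's init script or unit file,
# and repeat for cc, c++, and g++.
```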


-- 
Fernando Rodriguez



Re: [gentoo-user] Difference between normal distcc and distcc with pump?

2015-05-04 Thread Fernando Rodriguez
On Monday, May 04, 2015 5:29:34 AM Fernando Rodriguez wrote:
 On Sunday, May 03, 2015 11:59:08 PM Walter Dnes wrote:
  On Sun, May 03, 2015 at 02:57:46PM -0400, Fernando Rodriguez wrote
  
   Some packages do custom preprocessing and other weird things during
   the build process that cause problems with pump mode since it caches
   copies of the unmodified headers. If you're lucky it just fails (and
   usually falls back on compiling locally), if you're not then it may
   succeed and you'll get runtime bugs. I haven't found a package yet
   that fails without pump mode as long as your CFLAGS are set properly.
  
Seamonkey fails during the build process.  Two tries, and the build
  log was 74,046 bytes each time.  I have an Intel x86_64 as the host, and
  an Atom i686 (32-bit only) as the client.  Given your description, I may
  drop pump altogether from my xmerge script.  I'll unmerge
  seamonkey-bin, and try distcc-building seamonkey from source, without
  pump, Monday when I have more time.  Here are a few lines from the
  failed build log, using pump...
  
  Executing: gcc -o nsinstall_real -march=atom -mtune=atom -fstack-protector -pipe -mno-avx -DXP_UNIX -MD -MP -MF .deps/nsinstall_real.pp -O2 -DUNICODE -D_UNICODE -Wl,-O1 -Wl,--as-needed host_nsinstall.o host_pathsub.o
  
  /usr/lib/gcc/i686-pc-linux-gnu/4.8.4/../../../../i686-pc-linux-gnu/bin/ld: i386:x86-64 architecture of input file `host_nsinstall.o' is incompatible with i386 output
  
  /usr/lib/gcc/i686-pc-linux-gnu/4.8.4/../../../../i686-pc-linux-gnu/bin/ld: i386:x86-64 architecture of input file `host_pathsub.o' is incompatible with i386 output
  
  /usr/lib/gcc/i686-pc-linux-gnu/4.8.4/../../../../i686-pc-linux-gnu/bin/ld: host_nsinstall.o: file class ELFCLASS64 incompatible with ELFCLASS32
  
  /usr/lib/gcc/i686-pc-linux-gnu/4.8.4/../../../../i686-pc-linux-gnu/bin/ld: final link failed: File in wrong format
  
 
 The error in the log that you posted earlier looks like it was caused by pump
 mode; this one I'm not so sure about.
 
 It looks like you're not using the cross compiler on the host, as it would not
 generate 64-bit code. Did you add -m32 to your CFLAGS on the client box? You
 may also need to set the custom-cflags USE flag. Can you verify that it is
 using the cross compiler on the host?
 
 I'm not sure exactly what the Gentoo-recommended distcc/cross-compile setup
 is, but if you do it like I suggested in your other thread (using the host's
 64-bit compiler) it should work. Look at the links you got under
 /usr/lib/distcc/bin. All you need to do is create scripts on the host with
 the exact same names and have them execute the compiler that you want with
 the options you want (I just have it execute the 64-bit compiler with -m32).
 Then make sure that distccd (on the host) finds them before the actual
 compiler by putting them in the PATH environment variable before anything
 else. For that you may need to modify the init script (or unit file if using
 systemd), or just start distccd manually.
 
 A simpler hack is to just delete the c++, cc, gcc, and g++ symlinks from the
 /usr/lib/distcc/bin directory. That will force distcc to only trap the
 compiler invocations that use the full compiler name and end up using the
 cross compiler on the host, but if you do this you may end up compiling more
 stuff locally.

Or you just replace them (c++, cc, gcc, and g++) with a wrapper to make sure 
it invokes the full compiler name. That's what they recommend here: 
https://wiki.gentoo.org/wiki/Raspberry_Pi_Cross_building

-- 
Fernando Rodriguez



Re: [gentoo-user] Hard drive storage questions

2015-05-04 Thread Neil Bothwick
On Mon, 04 May 2015 03:23:48 -0500, Dale wrote:

  What
  I wish, I had a second puter in a outbuilding that I could copy to
  over ethernet or something.  May help in the event of a house fire
  etc.  
 
  You have, it's called Amazon S3 :) It's a lot cheaper than a second
  computer, and a lot more reliable.

 My internet is way too slow for that.  It would take weeks maybe a month
 to upload all this stuff.  I have DSL but it is the basic package.  If I
 were on cable or had a real fast DSL, maybe.  Thing is, I really don't
 want some of my stuff on the internet anyway.  ;-)

You only need to upload it once, so it doesn't really matter how long it
takes. After that you do incremental backups. I use app-backup/duplicity
which not only takes care of incremental backups and communicating with
S3, but also encrypts everything with GPG. No one would know you were
uploading goat porn :)
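An invocation along the lines Neil describes might look like this. The bucket name, GPG key ID, and paths are placeholders I made up, and the command is written to a file rather than run, since it needs real credentials:

```shell
# Sketch of an encrypted incremental backup to S3 with duplicity.
SRC="$HOME/photos"
DEST="s3://s3.amazonaws.com/example-backup-bucket/photos"
KEY="ABCD1234"                       # placeholder GPG key ID
# The first run is a full backup; later runs upload only what changed,
# GPG-encrypted before anything leaves the machine.
printf 'duplicity --encrypt-key %s %s %s\n' "$KEY" "$SRC" "$DEST" > backup-cmd.sh
cat backup-cmd.sh
```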


-- 
Neil Bothwick

When there's a will, I want to be in it.




Re: [gentoo-user] Hard drive storage questions

2015-05-04 Thread Dale
Neil Bothwick wrote:
 On Mon, 04 May 2015 03:23:48 -0500, Dale wrote:

 What
 I wish, I had a second puter in a outbuilding that I could copy to
 over ethernet or something.  May help in the event of a house fire
 etc.  
 You have, it's called Amazon S3 :) It's a lot cheaper than a second
 computer, and a lot more reliable.
  My internet is way too slow for that.  It would take weeks maybe a month
 to upload all this stuff.  I have DSL but it is the basic package.  If I
 were on cable or had a real fast DSL, maybe.  Thing is, I really don't
 want some of my stuff on the internet anyway.  ;-)
 You only need to upload it once, so it doesn't really matter how long it
 takes. After that you do incremental backups. I use app-backup/duplicity
 which not only takes care of incremental backups and communicating with
 S3, but also encrypts everything with GPG. No one would know you were
 uploading goat porn :)




It may be only once but it would be a very large once plus I'm on my
puter a lot.  Uploading slows my surfing to almost a dead stop.  Newegg
is a nightmare for me to surf on.  Slowest thing I ever seen. Newegg
isn't alone tho.

Dale

:-)  :-) 



Re: [gentoo-user] Hard drive storage questions

2015-05-04 Thread Alan Mackenzie
Hello, Dale.

On Mon, May 04, 2015 at 03:23:48AM -0500, Dale wrote:
 Neil Bothwick wrote:

  What I wish, I had a second puter in a outbuilding that I could copy
  to over ethernet or something.  May help in the event of a house
  fire etc.

  You have, it's called Amazon S3 :) It's a lot cheaper than a second
  computer, and a lot more reliable.

  My internet is way too slow for that.  It would take weeks maybe a month
 to upload all this stuff.  I have DSL but it is the basic package.  If I
 were on cable or had a real fast DSL, maybe.  Thing is, I really don't
 want some of my stuff on the internet anyway.  ;-)

For the stuff you don't want on the internet, encrypt it!  I've recently
started using ccrypt.  It takes MUCH less time to encrypt things than it
does to transmit them over the net to a server - for my ~4.6 GB backup,
it takes about 3 minutes to encrypt.  Sending it to my backup server then
takes the best part of an hour (at 10 Mbit/s upload speed).

I suspect your upload speed is way less, but if you had a few hundred
megabytes of really special stuff, this route might be useful.

 I'll come up with something tho.

 Dale

 :-)  :-)

-- 
Alan Mackenzie (Nuremberg, Germany).



Re: [gentoo-user] syslog-ng: how to read the log files

2015-05-04 Thread Rich Freeman
On Mon, May 4, 2015 at 2:14 AM, lee l...@yagibdah.de wrote:
 Marc Joliet mar...@gmx.de writes:

 Personally, I'm probably going to uninstall syslog-ng, because journalctl is
 *such* a nice way to read logs, so why run something whose output I'll never
 read again?

 If you like it, nobody prevents you from using it.  It's good to have
 many options.  Just don't force others to use it as well.


Who is forcing anybody to use anything?  Did Lennart break into your
house with an RHEL 7 disk and force you to install it at gunpoint or
something?  You did a great job holding out under the torture - that
would explain your 2.5 month absence from this long-dead thread.
Fortunately, while you were gone nobody treecleaned sysvinit, not that
treecleaning a package prevents anybody from using it.

-- 
Rich



Re: [gentoo-user] Hard drive storage questions

2015-05-04 Thread Neil Bothwick
On Mon, 04 May 2015 05:40:25 -0500, Dale wrote:

  You only need to upload it once, so it doesn't really matter how long
  it takes. After that you do incremental backups. I use
  app-backup/duplicity which not only takes care of incremental backups
  and communicating with S3, but also encrypts everything with GPG. No
  one would know you were uploading goat porn :)

 It may be only once but it would be a very large once plus I'm on my
 puter a lot. 

You have to sleep some time, your computer doesn't :)

 Uploading slows my surfing to almost a dead stop.  Newegg
 is a nightmare for me to surf on.  Slowest thing I ever seen. Newegg
 isn't alone tho.

As long as you restrict the upload speed to around 80-90% of your
available upstream bandwidth, it shouldn't affect downloading
significantly. It's when you saturate the upstream that your downloads
are affected.
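Sizing that cap is simple arithmetic; here is a sketch. The link speed and host name are made-up examples, and the rsync command is written to a file rather than run:

```shell
# Sketch: cap the upload at ~80% of the measured upstream.
UPSTREAM_KBPS=125                      # a 1 Mbit/s uplink is ~125 KB/s
CAP=$(( UPSTREAM_KBPS * 80 / 100 ))    # leave 20% headroom for interactive use
# rsync's --bwlimit takes KB/s, so the cap plugs straight in.
echo "rsync -az --bwlimit=$CAP ~/photos backup-host:photos/" > upload-cmd.txt
cat upload-cmd.txt
```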


-- 
Neil Bothwick

Computer apathy error: don't bother striking any key.




Re: [gentoo-user] Hard drive storage questions

2015-05-04 Thread Rich Freeman
On Mon, May 4, 2015 at 6:31 AM, Neil Bothwick n...@digimed.co.uk wrote:
 On Mon, 04 May 2015 03:23:48 -0500, Dale wrote:

  What
  I wish, I had a second puter in a outbuilding that I could copy to
  over ethernet or something.  May help in the event of a house fire
  etc.
 
  You have, it's called Amazon S3 :) It's a lot cheaper than a second
  computer, and a lot more reliable.

  My internet is way too slow for that.  It would take weeks maybe a month
 to upload all this stuff.  I have DSL but it is the basic package.  If I
 were on cable or had a real fast DSL, maybe.  Thing is, I really don't
 want some of my stuff on the internet anyway.  ;-)

 You only need to upload it once, so it doesn't really matter how long it
 takes. After that you do incremental backups. I use app-backup/duplicity
 which not only takes care of incremental backups and communicating with
 S3, but also encrypts everything with GPG. No one would know you were
 uploading goat porn :)

I tend to use a few strategies.

Typical stuff in /home, /etc: duplicity daily backups to S3.  It is
small, and safe.  Oh, and it is all on RAID too, which reduces the
risk of needing to actually restore it (RAID is primarily about
downtime, not backup).  Encryption keys are burned to multiple CDs and
stored in multiple safe places.

Photos and other valuable media:  Also gets the duplicity S3
treatment, but after every few GB I do a one-time upload to Glacier
and then remove it from my daily backups.  This stuff is write-once,
so backing it up daily is overkill.  When S3 was more expensive I
would burn two copies to DVD and store offsite, but that became a PITA
and Amazon is a lot cheaper now.  If I ever need to restore it it is
unlikely I'd need it all at once, so I can do so slowly and not get
killed by fees.

MythTV recordings, random video from internet, etc:  btrfs raid plus a
second daily rsync to ext4 (still local).  The rsync is only because
I'm still in playing-around mode with btrfs.  Once I trust it fully
I'll drop it and just rely on the RAID.  I'd be annoyed if I lost all
this stuff, but only for a week or two.  Trying to properly back up
multiple TB of media is just way too expensive and this stuff just
isn't valuable enough to care about.

I structure my filesystem around my backup strategy.  All the stuff I
really care about is in /home.  Stuff I don't care so much about goes
outside of /home and is symlinked back in where necessary.  So, I
don't need to play around with too many exclusion rules.

-- 
Rich



Re: [gentoo-user] syslog-ng: how to read the log files

2015-05-04 Thread Tom H
On Mon, May 4, 2015 at 1:57 AM, lee l...@yagibdah.de wrote:
 Canek Peláez Valdés can...@gmail.com writes:
 On Sun, Feb 22, 2015 at 6:41 PM, lee l...@yagibdah.de wrote:

 I can't even read them on a working system.

 If that's true (which I highly doubt, more probably you don't know how to
 read them), then it's a bug and should be reported and fixed.

 I read log files with less. The bug is that systemd uses some sort of
  binary files, and they aren't going to fix it. They won't even fix
 their misunderstanding of what disabled means. So why make bug
 reports?

The systemd developers' use of disable/mask isn't wrong simply because
you disagree with them.

'systemctl disable unit' is the same as blacklisting a module: the
unit/module can still be loaded manually or as a dependency.

'systemctl mask unit' is the same as 'install module /bin/true': the
unit/module can't be loaded at all.
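The distinction, sketched as commands (the unit name is an example; these need a systemd machine and are not meant to be pasted blindly):

```shell
systemctl disable foo.service   # drops the [Install] symlinks; the unit can
                                # still be started manually or pulled in as a
                                # dependency of another unit
systemctl mask foo.service      # links the unit file to /dev/null; nothing
                                # can start it until "systemctl unmask"
```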



Re: [gentoo-user] Difference between normal distcc and distcc with pump?

2015-05-04 Thread Walter Dnes
On Mon, May 04, 2015 at 05:29:34AM -0400, Fernando Rodriguez wrote

 The error in the log that you posted earlier looks like it was caused by
 pump mode; this one I'm not so sure about.
 
 It looks like you're not using the cross compiler on the host as
 it would not generate 64 bit code. Did you add -m32 to your CFLAGS
 on the client box? Also you may need to set the custom-cflags use
 flag. Can you verify that it is using the cross compiler on the host?

  I did a large world update this past week.  If all the libs and
programs were coming back as 64-bit code to my 32-bit-only machine, Gentoo
should be badly broken by now, to the point of being unbootable.  The
problem appears to be isolated to seamonkey.  I'll first try adding
-m32.  If that doesn't work, I'll drop the pump option.  I'll let
you know how things turn out.

-- 
Walter Dnes waltd...@waltdnes.org
I don't run desktop environments; I run useful applications



Re: [gentoo-user] Hard drive storage questions

2015-05-04 Thread Nuno Magalhães
Greetings gents.

I may have missed it, but I haven't seen this suggested yet: RAID+LVM.
If you already have a 3TB drive, buy another (or two more) and build a
RAID1 or RAID5 array on them. Then build your LVM on top of /dev/md0 (or
whatever device your RAID is).
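The RAID1+LVM combination would look roughly like this. The device names and sizes are examples, and mdadm --create destroys whatever is on the named disks, so treat this strictly as a sketch:

```shell
# WARNING: destructive sketch; substitute your real spare disks.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
pvcreate /dev/md0                   # make the mirror an LVM physical volume
vgcreate vg_media /dev/md0          # new volume group on top of the array
lvcreate -L 2T -n videos vg_media   # carve out a logical volume
mkfs.ext4 /dev/vg_media/videos      # filesystem, then mount at ~/videos
```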

Another approach is ZFS with RAID-Z or similar. I don't know how/if
ZFS splits data among the drives, but I assume it's wise enough to do
so in a way similar to a RAID+LVM combo.

If going RAID, make sure the spindle speed (RPM) and cache size are the
same for performance's sake, and you can mix and match drives from
different vendors (perhaps you should, to add to the redundancy).

I don't know about btrfs; it seems like it's still in a testing phase, so
I'm not touching it yet.

Just my 2¢

Cheers,
Nuno



Re: [gentoo-user] Portage metadata cache backend: sqlite or not?

2015-05-04 Thread Marc Joliet
Oh wow, I completely forgot about this open thread.  In my defense, I had to
move out two (or was it three?) days after I started the thread, and didn't have
internet again until over a month later.

Am Wed, 13 Aug 2014 07:30:01 +0530
schrieb Nilesh Govindrajan m...@nileshgr.com:

[...]
 Having tried this feature, I'd advise against it. It takes a long time
 to generate metadata after a sync and isn't really that advantageous.
 Also, eix has its own issues in this mode.

Do I understand correctly: you recommend switching away from the sqlite
backend?  Also, can you please elaborate on the issues with eix?  I haven't
noticed anything so far, but some bugs are easy to miss.

-- 
Marc Joliet
--
People who think they know everything really annoy those of us who know we
don't - Bjarne Stroustrup




Re: [gentoo-user] Hard drive storage questions

2015-05-04 Thread Walter Dnes
On Mon, May 04, 2015 at 02:23:55AM -0500, Dale wrote
 Dale wrote:
 
 Well, I read the replies a few times and I think it is best to just
 add a new drive.  Heck, I've already had a 3TB drive fail.  Anyway, I
 also need to look into some sort of backup system.  I used to do this
 with DVDs, but with this much stuff that just isn't a good idea, not
 to mention that DVDs have their own issues.  I may take a peek at a
 RAID setup since really, that is about the best if not the only way
 to do it.

  How often do you need to refresh your backups?  And how much does a
medium-size safety-deposit box cost in your area?  Would the bank object
to you going into your safety-deposit box once a month?  Here's a plan...

1 computer with a main drive, and 2 backup drives.  The backup drives
are either removable internals, or standalone externals.  In either
case, they would have to fit inside the safety deposit box.

  Month #

1) make duplicate backups of your machine to both backup drives, and
stick 1 into the bank safety-deposit box, and keep the other at home

2) - update your home backup
   - take it to the bank
   - swap it with the other drive
   - bring the other drive home and update that backup immediately

3) all succeeding months... GOTO 2 (rinse; lather; repeat)

  No worry about uploading terabytes of data over a slow ADSL link.
This is basically a reprise of Andrew Tanenbaum's quote...

 Never underestimate the bandwidth of a station wagon full of tapes
 hurtling down the highway.

-- 
Walter Dnes waltd...@waltdnes.org
I don't run desktop environments; I run useful applications



Re: [gentoo-user] Difference between normal distcc and distcc with pump?

2015-05-04 Thread Fernando Rodriguez
On Monday, May 04, 2015 4:36:08 PM Fernando Rodriguez wrote:
 On Monday, May 04, 2015 3:41:54 PM Walter Dnes wrote:
Why is seamonkey the only program (so far for me) that needs -m32?
  Would it need -m64 if it was being cross-compiled on a 32-bit host
  system for 64-bit client?  Is there a wiki that we can contribute this
  info to?
 
 It has to do with my last post.  Basically the makefiles are invoking
 the full compiler name for the files that are meant to run on the
 target, but invoke just gcc for the files that are meant to run on
 the host.  This is in the context of cross-compiling, not distcc, so
 the file that's failing is meant to run locally (it's a custom build
 tool).  If you were compiling locally (or on two machines with the
 same compiler) it would not matter, because the host and target
 compiler are the same, but when distcc comes in it builds those files
 with the host (system) compiler on the host.
 
 Changing the c++, cc, gcc, and g++ symlinks to a wrapper script that
 invokes the compiler by its full name as shown in the RaspberryPi
 wiki page *should* fix it.
 
 

That sounds confusing because I used host to mean the distcc host at
one point and the host compiler at another.  I will use server to
refer to the distcc host in this post.  When you run a GNU standard
configure script you can specify two compilers, the host and target
compiler.  When compiling locally they're both the same.  But when
cross-compiling, the host compiler is the system compiler and is used
for compiling things that will be executed as part of the build
process (most packages don't do this).  On Gentoo this is set from the
CHOST variable in make.conf, but either it's not usually passed to
configure scripts by portage or some scripts just ignore it and invoke
the host compiler as cc, c++, g++, or gcc.
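
For context, the setting lives in make.conf on the client; a hedged
example for a 32-bit client (the triplet and flags are illustrative,
not anyone's actual settings):

```
# /etc/portage/make.conf on the 32-bit client -- example values
CHOST="i686-pc-linux-gnu"
# Walter's workaround added -m32 here so the server's native gcc
# emits 32-bit code for the client:
CFLAGS="-O2 -pipe -m32"
CXXFLAGS="${CFLAGS}"
```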

When cross-compiling, the target compiler is the one for the target
architecture that the package will be deployed to.  This is always
invoked by its full name (in GNU-compliant packages).

Distcc just traps the compiler invocations on the client and performs
the same invocations on the server.  In your case seamonkey is trying
to compile something with the host compiler, and distcc is trapping it
and compiling it with the host compiler on the server.  Since the host
compiler on the client is not the same as the host compiler on the
server, things go bad.

So you don't need -m32 unless you want to use the host compiler on the
server.  Since you want to use a cross-compiler on the server, the
-m32 was an ugly hack, because you're actually using both compilers on
the server.  If the version of the cross-compiler gets out of sync
with the host compiler, things can go bad easily.  So the proper fix
in your scenario is to get rid of the -m32 and make the host compiler
links a wrapper script, so that everything is compiled with the
cross-compiler on the server.
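
A sketch of that wrapper approach: shadow the bare "gcc" name with a
script that forwards to the full-name compiler, and put its directory
first in PATH so distccd resolves it before the real gcc.  The triplet
below is an example; substitute your actual cross toolchain name (and
do the same for cc, c++, and g++):

```shell
# Create a wrapper named "gcc" in a fresh directory.
BINDIR=$(mktemp -d)
cat > "$BINDIR/gcc" <<'EOF'
#!/bin/sh
# Forward every invocation to the cross-compiler by its full name
# (example triplet -- use your own).
exec x86_64-pc-linux-gnu-gcc "$@"
EOF
chmod +x "$BINDIR/gcc"

# Put the wrapper directory ahead of everything else so a bare
# "gcc" now resolves to the wrapper.
export PATH="$BINDIR:$PATH"
```

On a real server you would install the wrappers in a permanent
directory and set PATH in distccd's environment instead of a shell.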

-- 
Fernando Rodriguez



Re: [gentoo-user] Difference between normal distcc and distcc with pump?

2015-05-04 Thread Walter Dnes
On Mon, May 04, 2015 at 05:29:34AM -0400, Fernando Rodriguez wrote

 All you need to do is create scripts on the host with the exact
 same names and have them execute the compiler that you want with
 the options you want (I just have them execute the 64-bit compiler
 with -m32). Then make sure that distccd (on the host) finds them
 before the actual compiler by putting their directory in the PATH
 environment variable before anything else.

  Much to my surprise, adding -m32 to the client's CFLAGS (and
therefore also CXXFLAGS) results in seamonkey building properly.  I
tried it out on the same video, and CPU load only climbs to 2.5 versus
2.75 with seamonkey-bin.  The build took 1 hour and 43 minutes on the
Core 2 Duo host, versus 14 hours doing it on the Atom.

  Why is seamonkey the only program (so far for me) that needs -m32?
Would it need -m64 if it was being cross-compiled on a 32-bit host
system for 64-bit client?  Is there a wiki that we can contribute this
info to?

-- 
Walter Dnes waltd...@waltdnes.org
I don't run desktop environments; I run useful applications



Re: [gentoo-user] Difference between normal distcc and distcc with pump?

2015-05-04 Thread Fernando Rodriguez
On Monday, May 04, 2015 3:41:54 PM Walter Dnes wrote:
   Why is seamonkey the only program (so far for me) that needs -m32?
 Would it need -m64 if it was being cross-compiled on a 32-bit host
 system for 64-bit client?  Is there a wiki that we can contribute this
 info to?

It has to do with my last post.  Basically the makefiles are invoking
the full compiler name for the files that are meant to run on the
target, but invoke just gcc for the files that are meant to run on the
host.  This is in the context of cross-compiling, not distcc, so the
file that's failing is meant to run locally (it's a custom build
tool).  If you were compiling locally (or on two machines with the
same compiler) it would not matter, because the host and target
compiler are the same, but when distcc comes in it builds those files
with the host (system) compiler on the host.

Changing the c++, cc, gcc, and g++ symlinks to a wrapper script that
invokes the compiler by its full name as shown in the RaspberryPi wiki
page *should* fix it.

-- 
Fernando Rodriguez