Re: [gentoo-user] vmWare HowTo / best practices

2013-04-21 Thread J. Roeleveld
On Sat, April 20, 2013 18:06, Pandu Poluan wrote:
 On Apr 19, 2013 11:14 PM, J. Roeleveld jo...@antarean.org wrote:

SNIPPED

 Pandu.

 Do you still use xend on your Xen hosts?
 I thought that was deprecated?


 Ah, sorry. What I meant was the xstools daemon. It's necessary to properly
 monitor Linux PV guests on XenServer.

 It's apparently a simple shell script, which commits suicide when it can't
 determine what Linux distro it's running in. Need to add several lines of
 code within the script to enable it to recognize Gentoo and not commit
 suicide.
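A minimal sketch of the kind of check such a script performs (the function name and the release-file argument are illustrative assumptions, not the actual xstools code):

```shell
#!/bin/sh
# Hypothetical sketch of a distro check like the one the xstools script
# performs. detect_distro() and its release-file argument are illustrative;
# the real script's variable names and logic differ.
detect_distro() {
    relfile="${1:-/etc/gentoo-release}"
    if [ -f "$relfile" ]; then
        # Gentoo ships /etc/gentoo-release, e.g. "Gentoo Base System release 2.1"
        echo "gentoo"
    else
        echo "unknown"
    fi
}
```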

 XenCenter/CloudStack will afterwards detect that the machine is running
 Gentoo. It won't affect anything, but it's better than it reporting
 unknown Linux
 :-)

That's useful to know. I tend to use binary distros on the XCP server
myself, but that is because that machine is used to quickly test large
applications where I need to ensure the presence of libraries packaged
with certain Red Hat versions.

--
Joost




Re: [gentoo-user] Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread J. Roeleveld
On Sat, April 20, 2013 17:33, Alan McKinnon wrote:
 On 20/04/2013 17:00, Tanstaafl wrote:
 Thanks for the responses so far...

 Another question - are there any caveats as to which filesystem to use
 for a mail server, for virtualized systems? Or do the same
 issues/questions apply (ie, does the fact that it is virtualized not
 change anything)?

 If there are none, I'm curious what others prefer.

 I've been using reiserfs on my old mail server since it was first set up
 (over 8 years ago). I have had no issues with it whatsoever, and even
 had one scare with a bad UPS causing the system to experience an unclean
 shutdown - but it came back up, auto fsck'd, and there was no 'apparent'
 data loss (this was a very long time ago, so if there had been any
 serious problems, I'd have known about it long ago).

 I've been considering using XFS, but have never used it before.

 So, anyway, opinions are welcome...


 Virtualization can change things, and it's not really intuitive.

 Regardless of what optimizations you apply to the VM, and regardless of
 what kind of virtualization is in use on the host, you are still going
 to be bound by the disk and fs behaviour of the host. If VMWare gives
 you a really shitty host driver, then something really shitty is going
 to be the best you can achieve.

This can be improved by not using file-backed disk devices.
Not sure if ESXi can do this. XenServer supports LVM-backed disk devices,
which reduces the overhead for the FS significantly.

 Disks aren't like eg NICs, you can't easily virtualize them and give the
 guest exclusive access in the style of para-virtualization (I can't
 imagine how that would even be done).

There is one option for this, but then you need to pass the whole
disk device to the VM. In other words, you pass sda to the VM instead
of a file or partition.
To avoid a lousy I/O driver on the host, you could try passing the
disk controller directly to the VM. But then the VM has its own set of
disks that cannot be used for other VMs.
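As a sketch, the two approaches would look roughly like this in a Xen guest config (the device name and PCI address below are placeholders, not values from this thread):

```
# Hypothetical xl/xm guest config fragment; /dev/sdb and 0000:03:00.0
# are placeholders for your actual hardware.
disk = [ 'phy:/dev/sdb,xvda,w' ]   # pass a whole host disk through as xvda
pci  = [ '0000:03:00.0' ]          # or pass the disk controller itself through
```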


 FWIW, I have two mail relays (no mail storage) running old postfix
 versions on FreeBSD. I expected throughput to differ when virtualized on
 ESXi, but in practice I couldn't see a difference at all - maybe the
 mail servers were very under-utilized. Considering this pair deal with
 anything between 500,000 to a million mails a day total, I would not
 have considered them under-utilized. Just goes to show how opinions
 are often worthless but numbers buys the whiskey :-)

If Postfix only passes emails through, then it only uses the mail-spool
for temporary storage. For that, it doesn't require a lot of disk I/O.
Most filtering is done in memory, afaik.

My experience with ESXi is that it can have issues with networking when
the versions of the hosts don't match perfectly, or the time is not synced
correctly, or VMs are moved around a lot by an admin that likes to play
with that...
Doing large multi-system installations on an ESXi cluster while the VMs
are moved around can occasionally fail because of a bad network layer.

--
Joost



 --
 Alan McKinnon
 alan.mckin...@gmail.com









Re: [gentoo-user] Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread J. Roeleveld
On Sat, April 20, 2013 17:38, Jarry wrote:
 On 20-Apr-13 17:00, Tanstaafl wrote:

 Another question - are there any caveats as to which filesystem to use
 for a mail server, for virtualized systems? Ir do the same
 issues/questions apply (ie, does the fact that it is virtualized not
 change anything)?

 The problem of a virtualized filesystem is not that it is virtualized,
 but that it is located on a datastore with more virtual systems,
 all of them competing for the same i/o. *That* is the bottleneck.
 If you switch reiser for xfs or btrfs, you might win (or lose)
 a few %. If you optimize your esxi-datastore design, you might
 win much more than you have ever dreamed of.

If the underlying I/O is fast enough, with low seek-times and high
throughput, then handling multiple VMs using a lot of disk I/O
simultaneously isn't a problem. Provided the host has sufficient resources
(think memory and dedicated CPU) to handle it.

 I have 8 VMs (out of them 6 are Gentoo) hosted on ESXi, intended
 for various tasks (mail, dns, mysql, web, etc), moderately loaded.
 I used a hw-raid controller with 2x sata-hdd in raid1 but performance
 was quite disappointing and I experienced all sorts of i/o jams.

Which hw-raid controller did you use?
RAID-1 (mirroring) isn't actually known for high performance.

 Then I switched hdd for ssd (yes, I use 2 of them in raid1, even
 if this is not generally recommended) and performance rocks now!
 I can now start kernel compilation on all 6 VMs at the same time,
 with near-zero performance penalty (depending on cpu/vcpu ratio
 and number of threads used). Unthinkable with a hdd-based datastore.

I have HDD-based datastores and can do this on 4 VMs (single quad-core
CPU) without any penalty.

 I would definitely recommend using SSD. Either directly as
 datastore for VMs, or at least as ESXi host-cache. There is
 also the possibility of hybrid-raid (1xSSD and 1xHDD in raid1)
 on some raid-controllers. Or if your pocket is really deep,
 you could grab one of those FusionIO cards to avoid being
 limited by the rather slow sata-interface (SSD on PCIe)...

A decent hardware raid-controller with multiple disks running in a higher
RAID level is cheaper than the same storage capacity in SSDs.

--
Joost




Re: [gentoo-user] Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread J. Roeleveld
On Sat, April 20, 2013 18:22, Pandu Poluan wrote:
 On Apr 20, 2013 10:01 PM, Tanstaafl tansta...@libertytrek.org wrote:

 Thanks for the responses so far...

 Another question - are there any caveats as to which filesystem to use
 for a mail server, for virtualized systems? Ir do the same
 issues/questions
 apply (ie, does the fact that it is virtualized not change anything)?

 If there are none, I'm curious what others prefer.

 I've been using reiserfs on my old mail server since it was first set up
 (over 8 years ago). I have had no issues with it whatsoever, and even had
 one scare with a bad UPS causing the system to experienc an unclean
 shutdown - but it came back up, auto fsck'd, and there was no 'apparent'
 data loss (this was a very long time ago, so if there had been any serious
 problems, I'd have known about it long go).

 I've been considering using XFS, but have never used it before.

 So, anyway, opinions are welcome...

 Thanks again

 Charles


 Reiterating what others have said, in a virtualized environment, it's how
 you build the underlying storage that will have the greatest effect on
 performance.

 Just an illustration: in my current employment, we have a very heavily
 used database (SQL Server). To ensure good performance, I dedicated a
 RAID array of 8 drives (15k RPM each), ensured that the space allocation
 is 'thick' not 'thin', and dedicated the whole RAID array to just that
 one VM. Performance went through the roof with that one... especially
 since it was originally a physical server running on top of
 4 x 7200 RPM drives ;-)

 If you have the budget, you really should invest in a SAN Storage solution
 that can provide tiered storage, in which frequently used blocks will be
 'cached' in SSD, while less frequently used blocks are migrated first to
 slower SAS drives, and later on (if 'cold') to even slower SATA drives.

4-tier sounds nicer: 1 TB in high speed RAM for the high-speed layer, with
dedicated UPS to ensure this is backed up to disk on shutdown.

--
Joost




Re: [gentoo-user] Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Pandu Poluan
On Apr 21, 2013 4:51 PM, J. Roeleveld jo...@antarean.org wrote:

 On Sat, April 20, 2013 18:22, Pandu Poluan wrote:

  SNIPPED

 4-tier sounds nicer: 1 TB in high speed RAM for the high-speed layer, with
 dedicated UPS to ensure this is backed up to disk on shutdown.


Indeed! But 1 TB is kind of overkill, if you ask me... :-D

VMware and XenServer can 'talk' to some storage controllers, where they
conspire in the background to provide a 'victim cache' on the
virtualization host. Not sure about Hyper-V.

I myself have had good experience relying on the EMC VNX's internal 8 GB
cache; apparently the workload is not high enough to stress the system.

Rgds,
--


Re: [gentoo-user] Removing pulseaudio

2013-04-21 Thread Neil Bothwick
On Sat, 20 Apr 2013 20:15:49 -0500, Canek Peláez Valdés wrote:

 So normal systems require PA. That *you* perhaps don't require PA is
 another thing altogether.

I think the important point from the original post, which appears to have
been lost, is that removing PA is a trivial (by Gentoo standards) task.
Negate one USE flag and update world and the system just keeps working
without PA. I discovered the opposite, that enabling it is just as easy.

That means it doesn't really matter whether it is enabled or not in the
default profiles, because anyone running a completely default system is
wasting their time using Gentoo.

On for desktops and off for servers and other minimal profiles seems
eminently sensible and all of this argument makes bike-shedding seem like
important stuff.


-- 
Neil Bothwick

Why do they call it a TV set when you only get one?


signature.asc
Description: PGP signature


Re: [gentoo-user] OT: parental control software

2013-04-21 Thread Neil Bothwick
On Wed, 20 Mar 2013 17:24:28 -0500, Paul Hartman wrote:

 I think logoutd from sys-apps/shadow can control allowed login windows
 by day-of-week and time-of-day, by user or group. Not sure if it
 translates to the X era or only applies to consoles.

This is only available on PAMless systems, but thanks for the pointer; I
went with pam_time, which does much the same thing and works with X as
well as consoles.
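A sketch of what that setup involves (the username and time window below are made-up examples, not an actual config):

```
# Hypothetical /etc/security/time.conf entry: user 'kid' may use any
# service on any tty only between 08:00 and 21:00, every day (Al):
*;*;kid;Al0800-2100

# plus pam_time in the relevant PAM stack, e.g. /etc/pam.d/system-auth:
account    required    pam_time.so
```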


-- 
Neil Bothwick

How is it one careless match can start a forest fire, but it takes a
whole box to start a campfire?




[gentoo-user] login to gnome fails

2013-04-21 Thread Stefan G. Weichinger

greetings ...

For some days now I have been fiddling with an issue on my ~amd64 thinkpad.

Neither with gdm nor with xdm am I able to log in to gnome3 anymore.

This box runs systemd which adds some possibilities to the picture ;-)

The system is rather up-to-date ... I rebuilt stuff like

pam*
all around dbus
gnome-keyring
gnome-shell
gnome-session
xorg-server plus drivers
gdm
xdm
gjs
... and much more ...


... checked USE-flags (no consolekit, for example; as far as I understand
I should not have that with systemd, and my desktop doesn't have it) ...

... ran revdep-rebuild, python-updater, perl-cleaner ...

compared stuff in /etc/pam.d to the files on my desktop system which
runs pretty much the same setup aside from the encrypted home-partition.

A fresh user isn't able to log in either.

I can login to a plain tty but not to a graphical desktop.

I need that laptop for my work, so this is slowly becoming quite an issue
and I am running out of ideas and losing track here.

Sure, running systemd *and* gnome-3.8 is pretty unstable ... I know that
perfectly well. But it works on the other box, so it should be possible to
get it fixed on the thinkpad as well.


Some logs:

* xdm

Apr 21 12:40:32 enzo systemd[1]: Stopped X-Window Display Manager.
Apr 21 12:40:32 enzo systemd[1]: xdm.service: cgroup is empty
Apr 21 13:23:09 enzo systemd[1]: Trying to enqueue job
xdm.service/restart/replace
Apr 21 13:23:09 enzo systemd[1]: Installed new job xdm.service/restart
as 634
Apr 21 13:23:09 enzo systemd[1]: Enqueued job xdm.service/restart as 634
Apr 21 13:23:09 enzo systemd[1]: Job xdm.service/restart finished,
result=done
Apr 21 13:23:09 enzo systemd[1]: Converting job xdm.service/restart →
xdm.service/start
Apr 21 13:23:09 enzo systemd[1]: Starting X-Window Display Manager...
Apr 21 13:23:09 enzo systemd[1]: About to execute /usr/bin/xdm -nodaemon
Apr 21 13:23:09 enzo systemd[1]: Forked /usr/bin/xdm as 19059
Apr 21 13:23:09 enzo systemd[1]: xdm.service changed dead → running
Apr 21 13:23:09 enzo systemd[1]: Job xdm.service/start finished, result=done
Apr 21 13:23:09 enzo systemd[1]: Started X-Window Display Manager.
Apr 21 13:23:15 enzo xdm[19069]: pam_lastlog(xdm:session): conversation
failed
Apr 21 13:23:15 enzo xdm[19069]: (mount.c:68): Messages from underlying
mount program:
Apr 21 13:23:15 enzo xdm[19069]: (mount.c:72): NOTE: mount.crypt does
not support utab (systems with no mtab or read-only mtab) yet. This
means that you will temporar

That pam_mount stuff is ok IMO. The encrypted dir is mounted correctly
when I log in to a plain tty. And login doesn't work with a plain new
user without any related encrypted volume either.

* gdm

Apr 21 13:27:06 enzo systemd[1]: Trying to enqueue job
gdm.service/start/replace
Apr 21 13:27:06 enzo systemd[1]: Installed new job gdm.service/start as 695
Apr 21 13:27:06 enzo systemd[1]: Enqueued job gdm.service/start as 695
Apr 21 13:27:06 enzo systemd[1]: Starting GNOME Display Manager...
Apr 21 13:27:06 enzo systemd[1]: About to execute /usr/bin/gdm --nodaemon
Apr 21 13:27:06 enzo systemd[1]: Forked /usr/bin/gdm as 20324
Apr 21 13:27:06 enzo systemd[1]: gdm.service changed dead → start
Apr 21 13:27:06 enzo systemd[1]: gdm.service's D-Bus name
org.gnome.DisplayManager now registered by :1.117
Apr 21 13:27:06 enzo systemd[1]: gdm.service changed start → running
Apr 21 13:27:06 enzo systemd[1]: Job gdm.service/start finished, result=done
Apr 21 13:27:06 enzo systemd[1]: Started GNOME Display Manager.
Apr 21 13:27:07 enzo gdm[20324]: Failed to give slave programs access to
the display. Trying to proceed.

This last line makes me scratch my head ... googled around already but
nothing really fit the situation ...

I can enter user/pw at the gdm login and then it just sits there and
doesn't do anything.

Here's something more tasty:


Apr 21 13:41:55 enzo systemd[1]: Got D-Bus request:
org.freedesktop.DBus.NameOwnerChanged() on /org/freedesktop/DBus
Apr 21 13:41:55 enzo systemd[1]: Got D-Bus request:
org.freedesktop.DBus.NameOwnerChanged() on /org/freedesktop/DBus
Apr 21 13:41:55 enzo systemd[1]: systemd-localed.service's D-Bus name
org.freedesktop.locale1 now registered by :1.225
Apr 21 13:41:55 enzo systemd[1]: systemd-localed.service changed start
→ running
Apr 21 13:41:55 enzo systemd[1]: Job systemd-localed.service/start
finished, result=done
Apr 21 13:41:55 enzo systemd[1]: Started Locale Service.
Apr 21 13:41:55 enzo dbus[2966]: [system] Successfully activated service
'org.freedesktop.locale1'
Apr 21 13:41:55 enzo dbus-daemon[2966]: dbus[2966]: [system]
Successfully activated service 'org.freedesktop.locale1'
Apr 21 13:41:55 enzo colord[2981]: Device added: xrandr-Lenovo Group Limited
Apr 21 13:41:55 enzo systemd[1]: Got D-Bus request:
org.freedesktop.DBus.NameOwnerChanged() on /org/freedesktop/DBus
Apr 21 13:41:55 enzo colord[2981]: Automatic metadata add
icc-c4d0e158a8923be59bd2e06674032eb6 to xrandr-Lenovo Group Limited
Apr 21 13:41:55 enzo colord[2981]: Profile added:

Re: [gentoo-user] Removing pulseaudio

2013-04-21 Thread Mick
On Sunday 21 Apr 2013 11:28:56 Neil Bothwick wrote:
 On Sat, 20 Apr 2013 20:15:49 -0500, Canek Peláez Valdés wrote:
  So normal systems require PA. That *you* perhaps don't require PA is
  another thing altogether.
 
 I think the important point from the original post, which appears to have
 been lost, is that removing PA is a trivial (by Gentoo standards) task.
 Negate one USE flag and update world and the system just keeps working
 without PA. I discovered the opposite, that enabling it is just as easy.
 
 That means it doesn't really matter whether it is enabled or not in the
 default profiles, because anyone running a completely default system is
 wasting their time using Gentoo.
 
 On for desktops 

I'm running 'default/linux/amd64/13.0/desktop' on this box and the pulseaudio 
USE flag is inactive as far as portage is concerned.  None of my packages are 
bringing in pulseaudio either:

~ $ euse -I pulseaudio
global use flags (searching: pulseaudio)

[-  ] pulseaudio - Adds support for PulseAudio sound server

Installed packages matching this USE flag: 
app-accessibility/espeak-1.45.04
app-accessibility/speech-dispatcher-0.7.1-r1
kde-base/kmix-4.10.1-r1
kde-base/phonon-kde-4.10.1
media-libs/libao-1.1.0-r1
media-libs/libsdl-1.2.15-r2
media-libs/phonon-4.6.0-r1
media-plugins/gst-plugins-meta-0.10-r8
media-sound/mpg123-1.14.4
media-video/ffmpeg-0.10.7
media-video/mplayer-1.1-r1
media-video/vlc-2.0.5
net-libs/ptlib-2.10.10
net-misc/freerdp-1.1.0_pre20121004-r1
net-voip/ekiga-4.0.0-r1
www-client/chromium-26.0.1410.43

local use flags (searching: pulseaudio)

no matching entries found

I guess that the gnome/kde make.profiles may include this USE flag in their 
defaults?


 and off for servers and other minimal profiles seems
 eminently sensible and all of this argument makes bike-shedding seem like
 important stuff.

-- 
Regards,
Mick


signature.asc
Description: This is a digitally signed message part.


Re: [gentoo-user] Re: [gentoo-user] bus error during compilation of gcc

2013-04-21 Thread Mick
On Saturday 20 Apr 2013 20:29:31 the guard wrote:
 Суббота, 20 апреля 2013, 15:25 -04:00 от Forrest Schultz 
f.schul...@gmail.com:
  Doesn't lowering makeopts just reduce the number of parallel
  compilations?
 
 yes, it does. I heard somewhere that a bus error is caused by a lack of
 memory during compilation. I also tried to remove my cflags.

Simplifying CFLAGS to something like:

  CFLAGS="-march=native -O2 -pipe"

may help, and so may setting MAKEOPTS to a single job:

  MAKEOPTS="-j1"


but none of the above will help if the problem is due to a bug.  Have you done 
the basics like revdep-rebuild and python-updater?

-- 
Regards,
Mick




Re: [gentoo-user] login to gnome fails

2013-04-21 Thread Stefan G. Weichinger

filed a bug now:

https://bugs.gentoo.org/show_bug.cgi?id=44



Re: [gentoo-user] login to gnome fails

2013-04-21 Thread Canek Peláez Valdés
On Sun, Apr 21, 2013 at 6:47 AM, Stefan G. Weichinger li...@xunil.at wrote:

 greetings ...

 for some days now I fiddle around with an issue on my ~amd64 thinkpad.

[snip]

I had a really weird problem that perhaps has something to do
with yours. After reading this:

http://notes.torrez.org/2013/04/put-a-burger-in-your-shell.html

I thought hey, that's a cool idea, so I put a Unicode character in
my PS1 environment variable. Afterwards, I was unable to log into
GNOME; it took me several hours to link the PS1 variable to the
problem, since I made the change inside of GNOME, and I didn't log out
until I upgraded.

I removed the PS1 override, and everything just worked again.

So, just to rule out this particular cause, you should check your
environment variables.

Regards.
--
Canek Peláez Valdés
Posgrado en Ciencia e Ingeniería de la Computación
Universidad Nacional Autónoma de México



Re: [gentoo-user] Removing pulseaudio

2013-04-21 Thread Dale
Mick wrote:


 I guess that the gnome/kde make.profiles may include this USE flag in
their
 defaults?



I'm on 13.0/desktop/kde profile.  It isn't included here either.

SNIP
local use flags (searching: pulseaudio)

no matching entries found
root@fireball / #

Dale

:-)  :-)

-- 
I am only responsible for what I said ... Not for what you understood or
how you interpreted my words!



Re: [gentoo-user] Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Tanstaafl

Thanks for the reply Alan...

On 2013-04-20 11:33 AM, Alan McKinnon alan.mckin...@gmail.com wrote:

If VMWare gives you a really shitty host driver, then something
really shitty is going to be the best you can achieve.


The host is a Dell R515 with the Perc H700.

Windows VMs get an 'LSI Logic SAS' controller, and my gentoo VM gets an 
'LSI Logic Parallel' controller.


All disks are 15k rpm SAS (6G) drives.


Disks aren't like eg NICs, you can't easily virtualize them and give the
guest exclusive access in the style of para-virtualization (I can't
imagine how that would even be done).

You also didn't mention what mail server you use - implementations vary
a great deal. Gut feel tells me that unless you are dealing with many
1000s of mails in a short period you won't really need XFS's aggressive
caching.


Postfix+dovecot, still debating between maildir and mdbox, leaning toward 
mdbox.




Re: [gentoo-user] login to gnome fails

2013-04-21 Thread Stefan G. Weichinger
Am 21.04.2013 15:10, schrieb Canek Peláez Valdés:
 On Sun, Apr 21, 2013 at 6:47 AM, Stefan G. Weichinger li...@xunil.at wrote:

 greetings ...

 for some days now I fiddle around with an issue on my ~amd64 thinkpad.
 
 [snip]
 
 I had a really weird problem that, perhaps, it has something to do
 with yours. After reading this:
 
 http://notes.torrez.org/2013/04/put-a-burger-in-your-shell.html
 
 I thought hey, that's a cool idea, so I put a unicode character in
 my PS1 environment variable. Afterwards, I was unnable to log into
 GNOME; it took me several hours to link the PS1 variable to the
 problem, since I did the change inside of GNOME, and I didn't logout
 until I upgraded.
 
 I removed the PS1 override, and everything just worked again.
 
 So, just to discard this particular case, you should check your
 environment variables.

hmm, thanks for the suggestion.

$ echo $PS1
\[\033[01;32m\]\u@\h\[\033[01;34m\] \w \$\[\033[00m\]

- some colored but not too fancy prompt, I assume.

And I haven't changed this prompt for months or years, I assume.
Additionally this is the same prompt on my desktop, so I think we can
exclude this.

When I read through the output of env ... nothing really stands out.

S





Re: [gentoo-user] Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Tanstaafl

On 2013-04-21 5:47 AM, J. Roeleveld jo...@antarean.org wrote:

On Sat, April 20, 2013 17:38, Jarry wrote:

The problem of a virtualized filesystem is not that it is virtualized,
but that it is located on a datastore with more virtual systems,
all of them competing for the same i/o. *That* is the bottleneck.
If you switch reiser for xfs or btrfs, you might win (or lose)
a few %. If you optimize your esxi-datastore design, you might
win much more than you have ever dreamed of.


If the underlying I/O is fast enough, with low seek-times and high
throughput, then handling multiple VMs using a lot of disk I/O
simultaneously isn't a problem. Provided the host has sufficient resources
(think memory and dedicated CPU) to handle it.


My host specs:

Dual AMD Opteron 4180 (6-core, 2.6Ghz)
128GB RAM
2x internal SSDs in RAID1 for Host OS
6x 300G SAS 6Gb 15k hard drives in RAID10 for Guest OSs

I allocate each Guest 1 virtual CPU with 2 cores


A decent hardware raid-controller with multiple disks running in a higher
RAID level is cheaper than the same storage capacity in SSDs.


Yep... I toyed with the idea of SSDs, but the cost was considerably more 
than even these SAS drives...




Re: [gentoo-user] Removing pulseaudio

2013-04-21 Thread Neil Bothwick
On Sun, 21 Apr 2013 09:24:25 -0500, Dale wrote:

 I'm on 13.0/desktop/kde profile.  It isn't included here either.

I've just checked and it doesn't appear to be enabled in any of the
profiles, which makes me wonder what anyone is complaining about...


-- 
Neil Bothwick

If there is light at the end of the tunnel...order more tunnel.




[gentoo-user] LVM on VM or not? - WAS Re: Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Tanstaafl

On 2013-04-20 11:00 AM, Tanstaafl tansta...@libertytrek.org wrote:

Another question - are there any caveats as to which filesystem to use
for a mail server, for virtualized systems?


Ok, googling reveals lots of conflicting opinions about using LVM in a 
VM environment.


I was wanting to use it mainly for its snapshot ability (to get 
consistent backups of my mailstore and mysql DBs).


Also it would be very nice to be able to resize things if needed (I have 
adequate storage available).


But I've found lots of opinions that using LVM in a virtualized 
environment can lead to data corruption, and if this is true, I'd rather 
not risk it...


So, LVM or not?



Re: [gentoo-user] LVM on VM or not? - WAS Re: Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Jarry

On 21-Apr-13 18:15, Tanstaafl wrote:


Ok, googling reveals lots of conflicting opinions about using LVM in a
VM environment.

I was wanting to use it mainly for its snapshot ability (to get
consistent backups of my mailstore and mysql DBs).

Also it would be very nice to be able to resize things if needed (I have
adequate storage available).

But I've found lots of opinions that using LVM in a virtualized
environment can lead to data corruption, and if this is true, I'd rather
not risk it...

So, LVM or not?


You can make snapshots from ESXi (btw snapshot is *not* backup),
and you can resize VM-disks as well. So the right question is:
What are the LVM features I need? If I do not need any, then why
should I bother with it?

Jarry

--
___
This mailbox accepts e-mails only from selected mailing-lists!
Everything else is considered to be spam and therefore deleted.



Re: [gentoo-user] LVM on VM or not? - WAS Re: Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Randy Barlow
On Sun, 2013-04-21 at 12:15 -0400, Tanstaafl wrote:
 But I've found lots of opinions that using LVM in a virtualized 
 environment can lead to data corruption, and if this is true, I'd rather 
 not risk it...
 
 So, LVM or not?

This is surprising to me, because at my former employer we used LVM for
all of our virtual machines in Xen. It performed quite well, and we
never experienced any data loss due to that setup. LVM gives a lot of
flexibility in managing virtual machines, so I'd highly recommend it.

I believe LVM devices can be passed to KVM guests as well, though I have
never personally tried that.

-- 
Randy Barlow




Re: [gentoo-user] LVM on VM or not? - WAS Re: Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Randy Barlow
On Sun, 2013-04-21 at 12:32 -0400, Randy Barlow wrote:
 LVM gives a lot of
 flexibility in managing virtual machines, so I'd highly recommend it.

I should mention one specific advantage to using LVM over file-based
images: I believe you will find that LVM performs better. This is due to
avoiding the duplicated filesystem overhead that would occur in the
file-based image approach. If the guest wants to fsync(), for example,
both filesystems need to be involved (the guest's, and the host's). With
LVM, you still have the host processing the LVM bits of that process,
but at least the host's filesystem doesn't need to be involved.

Of course, giving the guest its own raw block device (a disk, or a
partition) would also have this advantage, so here I'm mostly just
comparing LVM to the easy file-backed disk image approach.

-- 
Randy Barlow




[gentoo-user] Partition layout questions - WAS Re: Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Tanstaafl

Next, as to partition layout.

I was considering this partition layout:

/boot (ext2), 100M
swap, 2048M
/ (ext4), 40G
/tmp (ext2), 2G
/var (xfs), 600G

But doing some reading, I stumbled on some other suggestions, like:

Bind /tmp to tmpfs, ie:

tmpfs   /tmp tmpfs   nodev,nosuid  0  0

Then I read another suggestion to bind /var/tmp to /tmp:

/tmp /var/tmp none rw,noexec,nosuid,nodev,bind 0 0

Which means that both /tmp and /var/tmp are now bound to /tmp?

But, I also read on one of these pages that tmpfs should NOT be used 
for /var/tmp, because it stores files that need to be persistent across 
reboots - is this still true?


My main concerns are security (which dirs should be separate so they can 
be mounted as securely as possible, ie, with the nodev, noexec and nosuid 
mount options)?


1. Should I go ahead and make separate smallish (maybe 1 or 2GB) /home 
so I can mount it nodev,noexec,nosuid?


2. Should I make a separate partition for /var/tmp so I can mount it as 
nodev,noexec,nosuid, and bind /tmp to tmpfs as above? Or does the 
caveat about /var/tmp storing files that need to be persistent across 
reboots no longer apply, so I can bind them both to tmpfs?


3. Dumb question (google didn't give me an answer) - can I mount all of 
/var noexec and nosuid? Assuming not...


4. Since I'm running dovecot with a single user (vmail), and dovecot 
stores sieve scripts in the user's 'home' dir, does this mean I can't 
mount that directory with nodev, noexec and/or nosuid?


5. Webapps... can I mount the dir where these are installed with 
nodev,noexec,nosuid (I still use webapp-config to manage my website 
installations, and currently these are in /var/www)?


I'm thinking an alternative would be to put all data that can be stored 
on a partition that is mounted nodev,noexec,nosuid, ie:


/virtual

which would contain:

/virtual/home
/virtual/mail
/virtual/www

Maybe I'm overthinking/overcomplicating this, but obviously now is the 
time to make these decisions...


So, comments/criticisms welcome as always...
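For what it's worth, a sketch of how the /virtual idea could look in /etc/fstab. The device name /dev/sda5 is purely illustrative, and the bind-mount line is the one quoted earlier in the thread:

```
# hypothetical device; one partition holding all nodev/noexec/nosuid data
/dev/sda5  /virtual   ext4    defaults,nodev,noexec,nosuid  0 2
tmpfs      /tmp       tmpfs   nodev,nosuid                  0 0
/tmp       /var/tmp   none    rw,noexec,nosuid,nodev,bind   0 0
```

Keep in mind the caveat raised above: with /var/tmp bound onto a tmpfs-backed /tmp, nothing under /var/tmp survives a reboot.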



Re: [gentoo-user] LVM on VM or not? - WAS Re: Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Tanstaafl

On 2013-04-21 12:27 PM, Jarry mr.ja...@gmail.com wrote:

On 21-Apr-13 18:15, Tanstaafl wrote:


Ok, googling reveals lots of conflicting opinions about using LVM in a
VM environment.

I was wanting to use it mainly for its snapshot ability (to get
consistent backups of my mailstore and mysql DBs).

Also it would be very nice to be able to resize things if needed (I have
adequate storage available).

But I've found lots of opinions that using LVM in a virtualized
environment can lead to data corruption, and if this is true, I'd rather
not risk it...

So, LVM or not?


You can make snapshots from ESXi (btw snapshot is *not* backup),
and you can resize VM-disks as well. So the right question is:
What are the LVM features I need? If I do not need any, then why
should I bother with it?


Yes I can't take snapshots with ESXi, but everything I've read says that 
for these to be consistent, they need to be done when the VM is shutdown.


Also, they take a LONG time, whereas an LVM snapshot happens almost 
immediately.




[gentoo-user] Firefox/Adobe Flashplayer: Playback hangs

2013-04-21 Thread meino . cramer
Hi,

I am using Adobe's Flashplayer with Firefox (both newest versions).

When playing video, the playback often freezes while data is still
being transmitted. Often the playback resumes after half a minute or so
without intervention; sometimes I have to cycle the pause/play button
several times.

How can I prevent this?

Best regards,
mcc






Re: [gentoo-user] LVM on VM or not? - WAS Re: Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Tanstaafl

On 2013-04-21 12:56 PM, Tanstaafl tansta...@libertytrek.org wrote:

Yes I can't take snapshots with ESXi,


Sorry, should have read 'Yes I can take snapshots with ESXi...'



Re: [gentoo-user] LVM on VM or not? - WAS Re: Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Michael Hampicke

On 21.04.2013 18:32, Randy Barlow wrote:

On Sun, 2013-04-21 at 12:15 -0400, Tanstaafl wrote:

But I've found lots of opinions that using LVM in a virtualized
environment can lead to data corruption, and if this is true, I'd rather
not risk it...

So, LVM or not?


This is surprising to me, because at my former employer we used LVM for
all of our virtual machines in Xen. It performed quite well, and we
never experienced any data loss due to that setup. LVM gives a lot of
flexibility in managing virtual machines, so I'd highly recommend it.

I believe LVM devices can be passed to KVM guests as well, though I have
never personally tried that.


Correct, in almost every case where I set up a VM Host, I use LVM to 
manage the disks that the guests will get.




Re: [gentoo-user] login to gnome fails

2013-04-21 Thread Stefan G. Weichinger

Might my problem be related to consolekit?

As far as I understand, I don't need ConsoleKit anymore, as it is replaced by systemd-logind?

I am confused right now by:

https://bugs.gentoo.org/show_bug.cgi?id=465508

S



Re: [gentoo-user] Removing pulseaudio

2013-04-21 Thread Dale
Neil Bothwick wrote:
 On Sun, 21 Apr 2013 09:24:25 -0500, Dale wrote:

 I'm on 13.0/desktop/kde profile.  It isn't included here either.

 I've just checked and it doesn't appear to be enabled in any of the
 profiles, which makes me wonder what anyone is complaining about...




That was my thinking.  If it is not enabled in the profile, looks like
something or someone else did it.  I run a fairly bloated KDE here so
surely I can't be missing some additional bloat by mistake.  ;-)

Dale

:-)  :-)

-- 
I am only responsible for what I said ... Not for what you understood or
how you interpreted my words!



[gentoo-user] Hows this for rsnapshot cron jobs?

2013-04-21 Thread Tanstaafl
Ok, my goal is to keep 3 'snapshots' per day (11:30am, 2:30pm and 
5:30pm), 7 daily's (8:50pm), 4 weekly's (8:40pm), 12 monthly's (8:30pm), 
and 5 yearly's (8:20pm).


My myhost1.conf has:

interval  hourly  3
interval  daily   7
interval  weekly  4
interval  monthly 12
interval  yearly  5

And my /etc/crontab now looks like:


# for vixie cron
# $Header: 
/var/cvsroot/gentoo-x86/sys-process/vixie-cron/files/crontab-3.0.1-r4,v 1.3 
2011/09/20 15:13:51 idl0r Exp $

# Global variables
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
HOME=/

# check scripts in cron.hourly, cron.daily, cron.weekly and cron.monthly
59   *  * * *   root    rm -f /var/spool/cron/lastrun/cron.hourly
9    3  * * *   root    rm -f /var/spool/cron/lastrun/cron.daily
19   4  * * 6   root    rm -f /var/spool/cron/lastrun/cron.weekly
29   5  1 * *   root    rm -f /var/spool/cron/lastrun/cron.monthly
*/10 *  * * *   root    test -x /usr/sbin/run-crons && /usr/sbin/run-crons
#
# rsnapshot cronjobs
#
30 11,14,17 * * *   root    rsnapshot -c /etc/rsnapshot/myhost1.conf sync; rsnapshot -c /etc/rsnapshot/myhost1.conf hourly
50 20 * * *   root    rsnapshot -c /etc/rsnapshot/myhost1.conf daily
40 20 * * 6   root    rsnapshot -c /etc/rsnapshot/myhost1.conf weekly
30 20 1 * *   root    rsnapshot -c /etc/rsnapshot/myhost1.conf monthly
20 20 1 * *   root    rsnapshot -c /etc/rsnapshot/myhost1.conf yearly


Does this look right?

Thanks



Re: [gentoo-user] LVM on VM or not? - WAS Re: Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Tanstaafl

On 2013-04-21 12:38 PM, Randy Barlow ra...@electronsweatshop.com wrote:

I should mention one specific advantage to using LVM over file-based
images: I believe you will find that LVM performs better. This is due to
avoiding the duplicated filesystem overhead that would occur in the
file-based image approach. If the guest wants to fsync(), for example,
both filesystems need to be involved (the guest's, and the host's). With
LVM, you still have the host processing the LVM bits of that process,


???

This doesn't make sense to me.

Unless you're talking about using LVM on the HOST.

I'm not. I didn't specify this in this particular post, but I'm using 
vmWare ESXi, and installing a gentoo VM to run on it.


So, I'm asking about using the LVM2 installation manual from Gentoo and 
using LVM2 for just my gentoo VM...


So, in this case, is it still recommended/fully supported/safe?

Thanks



Re: [gentoo-user] Re: bus error during compilation of gcc

2013-04-21 Thread Nuno J. Silva (aka njsg)
On 2013-04-20, the guard the.gu...@mail.ru wrote:



 Saturday, 20 April 2013, 19:56 UTC, from Grant Edwards 
 grant.b.edwa...@gmail.com:
 On 2013-04-20, the guard the.gu...@mail.ru wrote:
 
  The package i decided to install required a gcc rebuild so I started
  rebuilding it and got a bus error. I've googled and found suggestions
  to lower makeopts, but it didn't help. 
 
 Every time I've gotten bus errors when building things it turned out
 to be a hardware problem.
 
 Bad RAM, failing CPU, failing motherboard power supply capacitors, bad
 disk controller card (obviously, that was a _long_ time ago).
 
 If I were you, I'd start by running memtest86+ overnight.
 
 
 memtest revealed nothing

Which does not mean there's nothing there ;-)

-- 
Nuno Silva (aka njsg)
http://njsg.sdf-eu.org/




[gentoo-user] Re: [Bulk] Re: Removing pulseaudio

2013-04-21 Thread Nuno J. Silva (aka njsg)
On 2013-04-18, Kevin Chadwick ma1l1i...@yahoo.co.uk wrote:
  ...
  (i) It's a sound server, a description I don't understand.  What
  does it _do_?  Why do I want it?  It seems to be an unnecessary
  layer of fat between sound applications and the kernel.  
 
 If you don't understand the term sound server you probably
 shouldn't be using Gentoo. 
 
 When I'm watching a YouTube video I still want to hear my email
 client go bing or my chat program alert me of my buddy coming online. 
 
 That's not possible if my web-browser has a hard-wired path into my
 soundcard and ain't letting go.

 Just throwing out there that users can, or at least could, use ALSA plugs
 to have multiple applications share the sound device. I did that before
 pulseaudio came along, to play NFS Carbon under Cedega and listen to music.

It should be noted that ALSA users can have multiple applications by
doing absolutely nothing other than using ALSA and using the
applications they want to use.

 Also, I have never got around to looking into JACK, but isn't it meant
 to be by far the best? I know pro audio users use it, and I have heard it
 is not the easiest to set up, but is there any reason why it isn't the
 default setup?

 http://en.gentoo-wiki.com/wiki/JACK

 From a quick look at this jack can hook up multiple applications that
 seem to need to be set up individually. What's the scope for Jack

 a./ replacing pulseaudio

 b./ having a compat interface layer to make pulseaudio compatible apps
 talk to jack



-- 
Nuno Silva (aka njsg)
http://njsg.sdf-eu.org/




[gentoo-user] Re: evince - Error printing to PDF

2013-04-21 Thread Nuno J. Silva (aka njsg)
On 2013-04-17, Joseph syscon...@gmail.com wrote:
 On 04/17/13 17:00, Alan Mackenzie wrote:
Hi, Joseph.

On Wed, Apr 17, 2013 at 10:32:23AM -0600, Joseph wrote:
 On 04/17/13 17:05, tastytea wrote:
 Am Wed, 17 Apr 2013 07:57:02 -0600
 schrieb Joseph syscon...@gmail.com:

  When I try to print from evince to pdf file I get an error:

  Error printing - Operation not supported



 You can use File - Save a Copy...

 This doesn't help me, as there are times where I want to print one or two 
 pages from pdf document to another pdf file, so Save a Copy will not do 
 it.

For what it's worth, my evince (2.32.0-r4) prints without problems.
Could it be you're missing some critical use flag?  Try dumping these out
with

# emerge -pv evince

 I have the same version:
 app-text/evince-2.32.0-r4  USE=dbus introspection postscript tiff -debug 
 -djvu -dvi -gnome -gnome-keyring -nautilus -t1lib 0 kB

 I can print to printer but not to pdf or ps files.

I'd compare the gtk+ (and maybe pango?) USE flags.

If I'm not mistaken, this is a feature of the gtk+ printing dialog, and
at least in evince I think printing relies on pango.

-- 
Nuno Silva (aka njsg)
http://njsg.sdf-eu.org/




Re: [gentoo-user] Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Alan McKinnon
On 21/04/2013 16:33, Tanstaafl wrote:
 Thanks for the reply Alan...
 
 On 2013-04-20 11:33 AM, Alan McKinnon alan.mckin...@gmail.com wrote:
 If VMWare gives you a really shitty host driver, then something
 really shitty is going to be the best you can achieve.
 
 The host is a Dell R515 with the Perc H700.
 
 Windows VMs get an 'LSI Logic SAS' controller, and my gentoo VM gets an
 'LSI Logic Parallel' controller.
 
 All disks are 15k rpm SAS (6G) drives.

That's very similar to our setups with the exception of the host
hardware - we have R700 series hosts.

I'd be interested to keep track of what you find out and feed that back
to the sysadmins

 
 Disks aren't like eg NICs, you can't easily virtualize them and give the
 guest exclusive access in the style of para-virtualization (I can't
 imagine how that would even be done).

 You also didn't mention what mail server you use - implementations vary
 a great deal. Gut feel tells me that unless you are dealing with many
 1000s of mails in a short period you won't really need XFS's aggressive
 caching.
 
 Postfix+dovecot, still debating between maildir or mdbox, leaning toward
 mdbox.

I always found postfix to be the best choice for an MTA in the large
picture (mostly because everyone else can also figure out what's going
on). I'd also be testing mdbox first just because maildir always leaves
me feeling a bit like there's a little too much disk IO going on and it
could be better


-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Re: [Bulk] Re: Removing pulseaudio

2013-04-21 Thread Kevin Chadwick
 
  Just throwing out there that users can, or at least could, use ALSA
  plugs to have multiple applications share the sound device. I did that
  before pulseaudio came along, to play NFS Carbon under Cedega and
  listen to music.
 
 It should be noted that ALSA users can have multiple applications by
 doing absolutely nothing other than using ALSA and using the
 applications they want to use.

So are you saying plugs are no longer required, or that they are only
needed for certain apps that take over the audio device?


-- 
___

'Write programs that do one thing and do it well. Write programs to work
together. Write programs to handle text streams, because that is a
universal interface'

(Doug McIlroy)
___



Re: [gentoo-user] Hows this for rsnapshot cron jobs?

2013-04-21 Thread Alan McKinnon
On 21/04/2013 20:47, Tanstaafl wrote:
 Ok, my goal is to keep 3 'snapshots' per day (11:30am, 2:30pm and
 5:30pm), 7 daily's (8:50pm), 4 weekly's (8:40pm), 12 monthly's (8:30pm),
 and 5 yearly's (8:20pm).
 
 My myhost1.conf has:
 
 interval  hourly  3
 interval  daily   7
 interval  weekly  4
 interval  monthly 12
 interval  yearly  5
 
 And my /etc/crontab now looks like:
 
 # for vixie cron
 # $Header:
 /var/cvsroot/gentoo-x86/sys-process/vixie-cron/files/crontab-3.0.1-r4,v 1.3
 2011/09/20 15:13:51 idl0r Exp $

 # Global variables
 SHELL=/bin/bash
 PATH=/sbin:/bin:/usr/sbin:/usr/bin
 MAILTO=root
 HOME=/

 # check scripts in cron.hourly, cron.daily, cron.weekly and cron.monthly
 59   *  * * *   root    rm -f /var/spool/cron/lastrun/cron.hourly
 9    3  * * *   root    rm -f /var/spool/cron/lastrun/cron.daily
 19   4  * * 6   root    rm -f /var/spool/cron/lastrun/cron.weekly
 29   5  1 * *   root    rm -f /var/spool/cron/lastrun/cron.monthly
 */10 *  * * *   root    test -x /usr/sbin/run-crons && /usr/sbin/run-crons
 #
 # rsnapshot cronjobs
 #
 30 11,14,17 * * *   root    rsnapshot -c /etc/rsnapshot/myhost1.conf sync; rsnapshot -c /etc/rsnapshot/myhost1.conf hourly
 50 20 * * *   root    rsnapshot -c /etc/rsnapshot/myhost1.conf daily
 40 20 * * 6   root    rsnapshot -c /etc/rsnapshot/myhost1.conf weekly
 30 20 1 * *   root    rsnapshot -c /etc/rsnapshot/myhost1.conf monthly
 20 20 1 * *   root    rsnapshot -c /etc/rsnapshot/myhost1.conf yearly

Only the last line is wrong - your monthly and yearly are equivalent. To
be properly yearly, you need a month value in field 4.

I'm not familiar with rsnapshot; I assume that package can deal with how
many of each type of snapshot to retain in its conf file? I see no
crons to delete out-of-date snapshots.


And, more as a nitpick than anything else, I always recommend that when
a sysadmin adds a root cronjob, use crontab -e so it goes in
/var/spool/cron, not /etc/crontab. Two benefits:

- syntax checking when you save and quit
- if you let portage, package managers, chef, puppet or whatever manage
your global cronjobs in /etc/crontab, then there's no danger that those
systems will trash the stuff you added there manually.

-- 
Alan McKinnon
alan.mckin...@gmail.com




Re: [gentoo-user] Hows this for rsnapshot cron jobs?

2013-04-21 Thread Tanstaafl

On 2013-04-21 4:32 PM, Alan McKinnon alan.mckin...@gmail.com wrote:

On 21/04/2013 20:47, Tanstaafl wrote:

30 20 1 * * rootrsnapshot -c /etc/rsnapshot/myhost1.conf monthly
20 20 1 * * rootrsnapshot -c /etc/rsnapshot/myhost1.conf yearly



Only the last line is wrong - your monthly and yearly are equivalent. To
be properly yearly, you need a month value in field 4.


Oh, right (I added that interval myself; rsnapshot only comes with the 
hourly, daily, weekly and monthly ones by default).


So, if I wanted it to run at 8:20pm on Dec 31, it would be:

20 20 31 12 *   root    rsnapshot -c /etc/rsnapshot/myhost1.conf yearly


I'm not familiar with rsnapshot; I assume that package can deal with how
many of each type of snapshot to retain in its conf file? I see no
crons to delete out-of-date snapshots.


Correct, rsnapshot handles this.


And, more as a nitpick than anything else, I always recommend that when
a sysadmin adds a root cronjob, use crontab -e so it goes in
/var/spool/cron, not /etc/crontab. Two benefits:

- syntax checking when you save and quit
- if you let portage, package managers, chef, puppet or whatever manage
your global cronjobs in /etc/crontab, then there's no danger that those
systems will trash the stuff you added there manually.


I prefer doing things manually... so, nothing else manages my cron jobs.

That said, I prefer to do this 'the gentoo way'... so is crontab -e the 
gentoo way?


;)



Re: [gentoo-user] Removing pulseaudio

2013-04-21 Thread Alan Mackenzie
Hi, Neil.

On Sun, Apr 21, 2013 at 05:00:17PM +0100, Neil Bothwick wrote:
 On Sun, 21 Apr 2013 09:24:25 -0500, Dale wrote:

  I'm on 13.0/desktop/kde profile.  It isn't included here either.

 I've just checked and it doesn't appear to be enabled in any of the
 profiles, which makes me wonder what anyone is complaining about...

pulseaudio is enabled in
/usr/portage/profiles/targets/desktop/gnome/make.defaults.

That's the only place I can find where it's enabled.

 -- 
 Neil Bothwick

 If there is light at the end of the tunnel...order more tunnel.

-- 
Alan Mackenzie (Nuremberg, Germany).



Re: [gentoo-user] LVM on VM or not? - WAS Re: Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Pandu Poluan
On Apr 22, 2013 2:05 AM, Tanstaafl tansta...@libertytrek.org wrote:

 On 2013-04-21 12:38 PM, Randy Barlow ra...@electronsweatshop.com wrote:

 I should mention one specific advantage to using LVM over file-based
 images: I believe you will find that LVM performs better. This is due to
 avoiding the duplicated filesystem overhead that would occur in the
 file-based image approach. If the guest wants to fsync(), for example,
 both filesystems need to be involved (the guest's, and the host's). With
 LVM, you still have the host processing the LVM bits of that process,


 ???

 This doesn't make sense to me.

 Unless you're talking about using LVM on the HOST.

 I'm not. I didn't specify this in this particular post, but I'm using
vmWare ESXi, and installing a gentoo VM to run on it.

 So, I'm asking about using the LVM2 installation manual from Gentoo and
using LVM2 for just my gentoo VM...

 So, in this case, is it still recommended/fully supported/safe?

 Thanks


Honestly, I don't see how LVM can interact with VMware's VMDK... unless one
uses VMware's thin provisioning on top of SAN Storage Thin Provisioning, in
which case all hell will break loose once the actual disk size is reached...

Stick with VMware Thin Provisioning XOR SAN Storage Thin Provisioning.
Never both.

One thing you have to think about is whether to implement LVM directly on
the disk (partition-less), or LVM on top of partitions.

Rgds,
--


Re: [gentoo-user] LVM on VM or not? - WAS Re: Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Randy Barlow
On Sun, 2013-04-21 at 15:04 -0400, Tanstaafl wrote:
 ???
 
 This doesn't make sense to me.
 
 Unless you're talking about using LVM on the HOST.

Ah, apologies, I think I had misunderstood. Given that you are using
ESXi, I should have realized that LVM on the host wouldn't be possible.
Are you set on using ESXi? If not, I think there are some compelling
advantages to some of the open source solutions. If so, you are correct
that my suggestion about performance advantages wouldn't apply to you.

 I'm not. I didn't specify this in this particular post, but I'm using 
 vmWare ESXi, and installing a gentoo VM to run on it.
 
 So, I'm asking about using the LVM2 installation manual from Gentoo
 and 
 using LVM2 for just my gentoo VM...
 
 So, in this case, is it still recommended/fully supported/safe?

In this case, I think you may lose some of the LVM advantages. Assuming
that a volume resize in ESXi is pretty easy, you won't need the guest to
use LVM to take advantage of it.

Here's one thought, and I'm not sure of the answer: can you easily mount an
ESXi snapshot somewhere, and make a real backup from it? I use LVM to
snapshot my FS, mount that snapshot somewhere, and then I have my backup
software back that up instead of the live system. This has the advantage
of allowing the backup software a consistent view of the disk as
it does its work, and gives you a sort of crash-consistent backup. If
you can still do that with ESXi, that's about the only other advantage.
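The snapshot-then-backup cycle described above boils down to a few commands. The volume, mount point and backup paths below are hypothetical, and the snapshot size only needs to cover the changes written while the snapshot exists:

```
# take an instant snapshot of the live volume
lvcreate -s -L 5G -n mail-snap /dev/vg0/mail

# mount it read-only so the backup tool sees a frozen view
mount -o ro /dev/vg0/mail-snap /mnt/snap
rsync -a /mnt/snap/ /backup/mail/

# tear the snapshot down again
umount /mnt/snap
lvremove -f /dev/vg0/mail-snap
```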

As to whether it's supported: I'm not an expert on ESXi (so don't count
me as an expert), but as far as I know, there should be nothing stopping
you from using LVM in the guests. LVM just works with block devices, and
the ESXi disks should look like block devices, so everything should be
fine. Again, I've never used VMWare very extensively, so this isn't
coming from experience.

-- 
Randy Barlow




[gentoo-user] Re: Removing pulseaudio

2013-04-21 Thread walt
On 04/21/2013 03:28 AM, Neil Bothwick wrote:
 On Sat, 20 Apr 2013 20:15:49 -0500, Canek Peláez Valdés wrote:
 
 So normal systems require PA. That *you* perhaps don't require PA is
 another thing altogether.

 bike-shedding

Been a long time since I've seen that used in a linux mailing list :)




Re: [gentoo-user] LVM on VM or not? - WAS Re: Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Randy Barlow
On Sun, 2013-04-21 at 12:56 -0400, Tanstaafl wrote:
 Yes I can't take snapshots with ESXi, but everything I've read says
 that 
 for these to be consistent, they need to be done when the VM is
 shutdown.
 
 Also, they take a LONG time, whereas an LVM snapshot happens almost 
 immediately.

This reveals a significant advantage to using LVM, if you wish to use
snapshots as the target for backups. I personally prefer this, as I
mentioned in another e-mail in this thread, due to getting a
crash-consistent view of the disk, since it will not change while the
backup is in progress. The fact that LVM snapshots happen instantly is an
advantage, as is the fact that you can perform the snapshots while the
system is running.

-- 
Randy Barlow




[gentoo-user] kvm/libvirt and kernel configuration

2013-04-21 Thread Michael Mol
So, I'm setting up number of kvm guests running Gentoo. KVM guests have
a pretty limited set of device drivers they need to support.

Is there a relatively up-to-date list of kernel configuration options?
I.e. the list of NIC drivers, video drivers, I/O drivers...
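Not a complete answer, but for KVM guests the usual starting point is the virtio paravirtual drivers. A minimal (and by no means exhaustive) .config fragment might look like this - verify the exact option names against your kernel version:

```
CONFIG_VIRTIO=y
CONFIG_VIRTIO_PCI=y
CONFIG_VIRTIO_BLK=y        # virtio disk
CONFIG_VIRTIO_NET=y        # virtio NIC
CONFIG_VIRTIO_BALLOON=y    # memory ballooning
CONFIG_VIRTIO_CONSOLE=y    # paravirtual console
CONFIG_HW_RANDOM_VIRTIO=y  # host-fed entropy
```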





[gentoo-user] Re: [Bulk] Re: Removing pulseaudio

2013-04-21 Thread Nuno J. Silva (aka njsg)
On 2013-04-21, Kevin Chadwick ma1l1i...@yahoo.co.uk wrote:
 
  Just throwing out there that users can, or at least could, use ALSA
  plugs to have multiple applications share the sound device. I did that
  before pulseaudio came along, to play NFS Carbon under Cedega and
  listen to music.
 
 It should be noted that ALSA users can have multiple applications by
 doing absolutely nothing other than using ALSA and using the
 applications they want to use.

 So are you saying plugs are no longer required, or that they are only
 needed for certain apps that take over the audio device?

I don't even know exactly what ALSA plugs are, and ALSA has worked
perfectly for all these years, so yeah, whatever an ALSA plug is, either
it is not required anymore, or it is handled automagically by ALSA.
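For reference, the 'plug' setup being discussed is only a few lines of ALSA configuration; on most modern setups dmix is already the default route for analog outputs, which is likely why no manual step is needed. A minimal sketch, in case it ever is:

```
# ~/.asoundrc - send the default PCM through the dmix software mixer
pcm.!default {
    type plug
    slave.pcm "dmix"
}
```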

-- 
Nuno Silva (aka njsg)
http://njsg.sdf-eu.org/




Re: [gentoo-user] Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread J. Roeleveld
On Sun, April 21, 2013 17:09, Tanstaafl wrote:
 On 2013-04-21 5:47 AM, J. Roeleveld jo...@antarean.org wrote:
 On Sat, April 20, 2013 17:38, Jarry wrote:
 Problem of virtualized filesystem is not that it is virtualized,
 but that it is located on datastore with more virtual systems,
 all of them competing for the same i/o. *That* is the bottleneck.
  If you switch reiser for xfs or btrfs, you might win (or lose)
 a few %. If you optimize your esxi-datastore design, you might
 win much more than what you have ever dreamed of.

  If the underlying I/O is fast enough, with low seek-times and high
  throughput, then handling multiple VMs using a lot of disk I/O
  simultaneously isn't a problem. Provided the Host has sufficient
  resources (think memory and dedicated CPU) to handle it.

 My host specs:

 Dual AMD Opteron 4180 (6-core, 2.6Ghz)
 128GB RAM
 2x internal SSDs in RAID1 for Host OS
 6x 300G SAS 6Gb 15k hard drives in RAID10 for Guest OSs

Sounds like a nice machine for testing :)

 I allocate each Guest 1 virtual CPU with 2 cores

Do you limit the Guest to use any of 2 specific cores? Or are you giving 2
vCPUs to each Guest?

  A decent hardware raid-controller with multiple disks running in a
  higher raid level is cheaper than the same storage capacity in SSDs.

 Yep... I toyed with the idea of SSDs, but the cost was considerably more
 as compared to even these SAS drives...

I am planning on using SSDs when getting new desktops, but for servers I
prefer spinning disks. They're higher capacity and cheaper.
For speed, I just put a bunch of them together with hardware raid.

--
Joost




Re: [gentoo-user] Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread J. Roeleveld
On Sun, April 21, 2013 12:06, Pandu Poluan wrote:
 On Apr 21, 2013 4:51 PM, J. Roeleveld jo...@antarean.org wrote:

 On Sat, April 20, 2013 18:22, Pandu Poluan wrote:
   If you have the budget, you really should invest in a SAN
   Storage solution that can provide tiered storage, in which
   frequently used blocks will be 'cached' in SSD, while less
   frequently used blocks are migrated first to slower SAS
   drives, and later on (if 'cold') to even slower SATA
   drives.
 
  4-tier sounds nicer: 1 TB in high speed RAM for the high-speed
  layer, with dedicated UPS to ensure this is backed up to disk
  on shutdown.

 Indeed! But 1 TB is kind of overkill, if you ask me... :-D

Maybe, but it makes sense when using that for VMs in a lab environment
where you want to create snapshots quickly.

 VMware and XenServer can 'talk' with some Storage controllers, where they
 conspire in the background to provide 'victim cache' on the virtualization
 host. Not sure about Hyper-V.

 I myself had had good experience relying on EMC VNX's internal 8 GB cache;
 apparently the workload is not high enough to stress the system.

When the time comes to upgrade the hardware, I will look into that. By
then, this technology should be more common as well.

--
Joost