Re: [gentoo-user] Something firewall-ish

2014-12-16 Thread Pandu Poluan
On 17:54, Tue, Dec 16, 2014 thegeezer thegee...@thegeezer.net wrote:

On 15/12/14 20:39, Alan McKinnon wrote:
 On 15/12/2014 18:47, meino.cra...@gmx.de wrote:
 Hi,

 this question is not related to a fully fledged,
 big local area network with DMZs and such.

 Even the word firewall seems to be a little too
 huge and mighty in this context to me.

 The network consists of a PC, which is connected
 to a FritzBox (cable, no Wifi/WLAN), which connects
 to the ISP (internet) and (same address range) to an
 embedded system (eth1)

 There are two additional embedded systems, both on
 a separate interface (eth over usb: usb0 & usb1).

 I want to block (DROP or REJECT) access to certain
 sites (the noise produced mostly by sites
 which all exclusively want my best: ads, trackers, analytics,
 and so on...)

 I tried different tools: fwbuilder, which locks up either itself
 or my ruleset... I had to reboot. And Shorewall, which definitely
 is a great tool... a little too great a tool, and much more capable
 than I am... ;)

 I am sure that the problems are mostly not the problems of the
 tools but mine.

 Is there any simple, straightforward tool to just block access
 to certain sites?


 to do it network-wide: squid

+1
and not in transparent mode either -- if you have a proxy server set
then https traffic is filtered by domain name as it is part of the proxy
connection request.
squid + squidGuard / dansguardian is the way forward.
for example, in my advertisers/tracking database there are some 12
thousand domains listed. good luck adding those individually to iptables.

the only other way to do things super paranoidly is by whitelisting
i.e. on the router just keep adding ip and port to whitelist table for
those you _want_
iptables -N whitelist
# allow DNS lookups to 8.8.8.8; add one ACCEPT rule per service you want
iptables -A whitelist -p udp -d 8.8.8.8 --dport 53 -j ACCEPT

# push all LAN-to-WAN traffic through the whitelist, reject the rest
iptables -A FORWARD -i LAN -o WAN -j whitelist
iptables -A FORWARD -j REJECT

this takes longer to set up but is easier to maintain. facebook, for
example, are constantly buying ip ranges and adding them to their global
domination: you start by thinking you have blocked all 100 facebook
ip addresses, then a month or two later discover they have another
200 you need to block.

 to do it on a per-pc per-browser basis: there's a large variety of
 firefox plugins to choose from that will block this and allow that. It
 seems to me this is the better approach as you want to stop your browser
 chatting with sites who only have your best interest at heart :-)


 Either way, the list of black and white lists gets very big very quick,
 so choose your tool carefully. Try a bunch and pick one that makes sense
 to you, bonus points if it comes with a community-supported blacklist
 you can drop in, maintained by people whose POV matches your own.

 You don't want a classic firewall for this; firewalls are mostly built
 to block based on address and port; this is not how you solve your problem


You can automate the process by using 'dig' and 'ipset'.

Create an ipset named Fbook of type hash:ip, then every 5 minutes run a
script that does a lookup for facebook.com and adds the IPs it finds to
the Fbook ipset.
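
A minimal sketch of such a script (assuming the set was created
beforehand with 'ipset create Fbook hash:ip' and an iptables rule
already matches against it):

#!/bin/sh
# resolve facebook.com and add every returned A record to the set
for ip in $(dig +short A facebook.com | grep -E '^[0-9.]+$'); do
    # -exist makes the add idempotent: no error for already-known IPs
    ipset add Fbook "$ip" -exist
done

Drop that into cron on a */5 schedule and the set converges on whatever
addresses the domain currently resolves to.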


 Rgds,
--


Re: [gentoo-user] Re: Debian forked, because of systemd brouhaha

2014-12-01 Thread Pandu Poluan
On Mon, Dec 1, 2014 at 9:54 AM, »Q« boxc...@gmx.net wrote:
 On Sun, 30 Nov 2014 07:43:21 +0300
 Andrew Savchenko birc...@gentoo.org wrote:

 On Sat, 29 Nov 2014 17:32:08 +0100 Marc Stürmer wrote:
  On 29.11.2014 at 11:11, Pandu Poluan wrote:
 
   What do you think, people? Shouldn't we offer them our eudev
   project to assist?
 
  Since Eudev has always been opensource under the GPLv2, like udev
  too, there's no need to /offer/ it.
 
  If they choose to use it, they can use it, no offer/questions
  necessary. Simple.

  As far as I understand, Pandu meant that we can recommend it to them,
  not offer it in any commercial or proprietary sense.

Yup, that's what I meant.

Sorry for the confusion; I'm not a native English speaker, so I may
have used an improper verb there :-)

 They've added something called devuan-eudev to their github workspace
 today, https://github.com/devuan/devuan-eudev.  It would be nice if
 there could be one eudev project with the aim of supporting Gentoo,
 Devuan, and whatever other distros want to use it.  Or if there must be
 multiple eudevs, it would be nice if the different teams could
 communicate and maybe take some patches from each other.  (I'm no dev,
 so take my opinions on what would be nice for development with a
 chunk of salt.)


Actually, that's my point in saying offer: Rather than letting them
build eudev from scratch, let's work together on the eudev we have,
promote it to something distro-neutral, then let Gentoo and Devuan
(and whatever other distros) derive from that 'upstream'.

Uh, I do make myself clear(er) here, don't I?


Rgds,
--
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pandu.poluan.info/blog/
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] OT somehow: experiences around linode

2014-11-30 Thread Pandu Poluan
 On 18:00, Sun, Nov 30, 2014 Stefan G. Weichinger li...@xunil.at wrote:

On 30.11.2014 at 11:57, J. Roeleveld wrote:

 No, pv-grub is run inside the context of the host, using a kernel image
inside the VM.

... which is not in /boot as far as I see.
So if I want to add kernel-boot-time-options I have to install my own
kernel plus the entry in menu.lst, correct?

Thanks!


Correct.

pv-grub is a script in dom0 that will attempt to read the menu.lst in /boot
of the VM and, based on that file, initialize a domU with the referenced
kernel (residing in /boot), passing all boot-time options to the kernel.
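
For illustration, a minimal menu.lst inside the VM might look like
this (the kernel file name and options here are hypothetical):

default 0
timeout 5

title Gentoo Linux (domU)
root (hd0,0)
kernel /boot/kernel-3.10-xen root=/dev/xvda1 console=hvc0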



It performs magic tricks to read the /boot proper, because / might be on a
different partition or even a different virtual disk, but 99% of the time it
works automagically.

It's the 1% that you should be scared of :-)

Rgds,
--


[gentoo-user] Debian forked, because of systemd brouhaha

2014-11-29 Thread Pandu Poluan
So, I just found out that some Debian Developers decided to fork Debian,
because they can no longer stand this abomination called 'systemd':

https://lists.dyne.org/lurker/message/20141127.212941.f55acc3a.en.html

What do you think, people? Shouldn't we offer them our eudev project to
assist?

Rgds,
--


Re: [gentoo-user] Safeguarding strategies against SSD data loss

2014-10-27 Thread Pandu Poluan
On Oct 27, 2014 10:40 PM, Rich Freeman ri...@gentoo.org wrote:

 On Mon, Oct 27, 2014 at 11:22 AM, Mick michaelkintz...@gmail.com wrote:
 
  Thanks Rich, I have been reading your posts about btrfs with interest,
  but have not yet used it on my systems.  Is btrfs agreeable with SSDs,
  or should I be using f2fs?
 

 Btrfs will auto-detect SSDs and optimize itself differently, and is
 generally considered to be fine on SSDs.  Of course, btrfs itself is
 experimental and may eat your data, especially if you get it too full,
 but you'll be no worse off for running it on an SSD.

 I doubt you'll find any general-purpose filesystem that works as well
 overall on an SSD as something like f2fs as this is log-based and
 designed with SSDs in mind.  However, f2fs is also very immature and
 also carries risks, and the last time I checked it was missing some
 features like xattrs as well.  It also doesn't have anything like
 btrfs send to serialize your data.

 zfs on linux might be another option.  I don't know how well it
 handles SSDs in general, and you have to fuss with FUSE and a boot
 partition as I don't think grub supports it - it could be a bit of a
 PITA for a single-drive system.  However, it is probably more mature
 than btrfs overall, and it certainly supports send.

 I just had a btrfs near-miss which caused me to rethink how I'm
 managing my own storage.  I was half-tempted to blog on it - it is a
 bit frustrating as I believe we're right in the middle of the shift
 between the traditional filesystems and the next-generation ones.
 Sticking with the old means giving up a lot of potential benefits, but
 there are a lot of issues with jumping ship as well as the new systems
 all lack maturity or are not feature-complete yet.  I was looking at
 f2fs, btrfs, and zfs again this weekend and the issues I struggle with
 are the immaturity of btrfs and f2fs, the lack of working parity raid
 on btrfs, the lack of many features on f2fs, and the inability to
 resize vdevs on zfs which means on a system with few drives you get
 locked in.  I suspect all of those will change in time, but not yet!

 --
 Rich


ZoL (ZFS on Linux) nowadays is implemented using DKMS instead of FUSE, thus
running in kernelspace, and (relatively) easier to put into an initramfs.

Updating is a beeyotch on binary-based distros as it requires a recompile.
Not a big deal for us Gentooers :-)

vdevs can grow, but they can't (yet) shrink. And putting ZFS on SSDs... not
recommended. Rather, ZFS can employ SSDs to act as a 'write cache' for the
spinning HDDs.

In my personal opinion, the 'killer' feature of ZFS is that it's built from
the ground up to provide maximum data integrity. The second feature is its
high-performance COW snapshot ability. You can do an obscene number of
snapshots if you want (but don't actually do it; managing more than a
hundred snapshots is a Royal PITA). And it's also able to serialize the
snapshots, allowing perfect delta replication to another system. This
saves a lot of time doing bit-perfect backup because only changed blocks
will be transferred. And you can ship a snapshot instead of the whole
filesystem, allowing online backup.

(And yes, I actually deployed ZoL on my previous employer's email system,
with the aforementioned snapshot-shipping backup strategy.)

Other features include: much easier mounting (no need to mess with fstab),
built-in NFS support for higher throughput, and the ability to easily rebuild
a pool merely by installing the drives (in any order) into a new box and
letting ZFS scan for all the metadata.

The most serious drawback in my opinion is ZoL's nearly insatiable appetite
for RAM. Unless you purposefully limit its RAM usage, ZoL's cache will
consume nearly all available memory, causing memory fragmentation and
ending in OOM.
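
On ZoL the cap is a module parameter; a sketch limiting the ARC to
4 GiB (the value is just an example, tune it to your workload):

# /etc/modprobe.d/zfs.conf
options zfs zfs_arc_max=4294967296

The new cap takes effect the next time the zfs module is loaded.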

Rgds,
--


Re: [gentoo-user] Safeguarding strategies against SSD data loss

2014-10-27 Thread Pandu Poluan
On Oct 28, 2014 12:31 AM, Rich Freeman ri...@gentoo.org wrote:

 On Mon, Oct 27, 2014 at 12:52 PM, Pandu Poluan pa...@poluan.info wrote:
 
  ZoL (ZFS on Linux) nowadays is implemented using DKMS instead of FUSE,
  thus running in kernelspace, and (relatively) easier to put into an
  initramfs.

 Sorry about that.  I should have known that, but for some reason I got
 that memory crossed in my brain...  :)

  vdevs can grow, but they can't (yet) shrink.

 Can you point to any docs on that, including any limitations/etc?  The
 inability to expand raid-z the way you can do so with mdadm was one of
 the big things that has been keeping me away from zfs.  I understand
 that it isn't so important when you're dealing with large numbers of
 disks (backblaze's storage pods come to mind), but when you have only
 a few disks being able to manipulate them one at a time is very
 useful.  Growing is the more likely use case than shrinking.  Then
 again, at some point if you want to replace smaller drives with larger
 ones you might want a way to remove drives from a vdev.


First, you need to set your pool to autoexpand=on.

Then, one by one, you offline a disk within the vdev and replace it with a
larger one. After all disks have been replaced, do a scrub, and ZFS will
automagically enlarge the vdev.

If you're not using whole disks as ZFS, then s/replace with larger/enlarge
the partition/.
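
A sketch of the whole procedure on a hypothetical pool 'tank' (device
names are illustrative; repeat the offline/replace pair for every disk
in the vdev):

zpool set autoexpand=on tank
zpool offline tank sda
# physically swap in the larger disk, then:
zpool replace tank sda
# once the last replacement has resilvered:
zpool scrub tank
zpool list tank    # SIZE should now show the enlarged vdev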

Rgds,
--


Re: [gentoo-user] Safeguarding strategies against SSD data loss

2014-10-27 Thread Pandu Poluan
On Oct 28, 2014 12:38 AM, Rich Freeman ri...@gentoo.org wrote:

 On Mon, Oct 27, 2014 at 1:23 PM, Volker Armin Hemmann
 volkerar...@googlemail.com wrote:
   On 27.10.2014 at 16:36, Rich Freeman wrote:
   and a boot
  partition as I don't think grub supports it - it could be a bit of a
  PITA for a single-drive system.
 
  nope. But I don't see any reason to use zfs with a single drive either.

 True, not needing to use FUSE does simplify things, but I don't
 believe that grub supports zfs, so you would need a boot partition.
 Granted, a newer laptop would need that for EFI anyway.

 
   However, it is probably more mature
  than btrfs overall, and it certainly supports send.
 
  and if your send stream is corrupted, your data is gone. That is why I
  prefer cp & tar to back up my zfs data tank.
 

 If you ONLY save the send stream without checking it, then you're
 right that you're depending on its integrity.  I'd certainly be
 nervous about doing that with btrfs, probably less so with zfs but I
 can't really vouch for it.  I don't know what ability either
 filesystem gives you to verify a send stream in isolation.

 Now, what you could do is receive the send stream into a replica
 filesystem on the far end, and not consider the backup successful
 until this is done.  That would look like a btrfs-to-btrfs rsync
 operation, but it would be much more efficient in terms of IO.  It
 would require a daemon on the far end to run the receive operation and
 report back status, vs just dumping the files via scp, etc.

 Does anybody know if either btrfs or zfs send includes checksums?  I
 know the data is checksummed on disk, but I have no idea if it is
 protected in this way while serialized.


zfs has checksums for the send stream. That's why you can send the stream
to a file, and the receive will fail sometime later if anything in that
file has changed.

So, always do a filesystem replication. Don't just save the send stream.
Have the replica make the snapshots visible in the pool root's .zfs
directory, and back up the whole filesystem using a deduping backup system.
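
A sketch of that replication flow (pool, dataset, and host names are
hypothetical):

# ship an incremental snapshot to the replica host
zfs snapshot tank/mail@2014-10-27
zfs send -i tank/mail@2014-10-26 tank/mail@2014-10-27 | \
    ssh backuphost zfs receive backup/mail

# on backuphost: expose snapshots at /backup/mail/.zfs/snapshot
zfs set snapdir=visible backup/mail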

Rgds,
--


Re: [gentoo-user] What happened to Qt 5?

2014-05-27 Thread Pandu Poluan
On Tue, May 27, 2014 at 12:32 PM, Nikos Chantziaras rea...@gmail.com wrote:
 Wasn't it supposed to hit portage a long time ago? Any news? There's zero
 information on the http://www.gentoo.org/proj/en/desktop/qt site.



I see many blockers in the "[TRACKER] Qt5 in portage" bug [0].

I do believe the KDE project team monitors Qt5-in-tree closely [1].


[0] https://bugs.gentoo.org/show_bug.cgi?id=454132
[1] http://wiki.gentoo.org/wiki/Project:KDE/Frameworks


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pandu.poluan.info/blog/
 • Linked-In : http://id.linkedin.com/in/pepoluan



[gentoo-user] Honeypot distro?

2014-04-03 Thread Pandu Poluan
My company ended up with several 'ancient' HP ProLiant G4 servers.

We're thinking of setting up honeypots there.

Although I know Gentoo is perfectly capable of becoming a honeypot, we
currently prefer something... less involved to deploy :-D

Now, since this mailing list unarguably contains the 'creme de la
creme' of Linux users in the world... maybe you can help me in
choosing a honeypot distro?

I've been looking at several, such as ADHD, Stratagem, and Honeydrive,
plus stalwarts such as BackTrack... but I still can't make up my mind.

TIA!

Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pandu.poluan.info/blog/
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] Gentoo as Firewall on HP ProLiant DL360 G5

2014-03-14 Thread Pandu Poluan
On Mar 14, 2014 2:42 PM, Edward M edwardm.gentoo.j...@live.com wrote:

 On Thu, 13 Mar 2014 03:26:27 +0700
 Pandu Poluan pa...@poluan.info wrote:

  Pointers are very welcome!

   May not apply now, but somebody was having kernel panics and
   network problems, etc. last year.


http://forums.gentoo.org/viewtopic-t-960140-start-0-postdays-0-postorder-asc-highlight-.html


Aaahhh... thanks!

Very helpful information, indeed.

The '!' trick and 'emerge firmware' trick would save my hair from being
torn out :-)

Rgds,
--


[gentoo-user] Gentoo as Firewall on HP ProLiant DL360 G5

2014-03-12 Thread Pandu Poluan
Hello list!

I want to install Gentoo as headless firewalls on a pair of HP ProLiant
DL360 G5 servers we happen to have lying around.

Are there special issues I need to be aware of before embarking on this
endeavor?

Specifically, are there special steps to take w.r.t.:
* The RAID controller (AFAIK will result in 'strange' device names such as
/dev/cciss/c0d0, or somesuch)
* NIC controller chips
* SNMP stuff, e.g., Insight Management Agent
* Other stuff

Pointers are very welcome!

Rgds,
--


Re: [gentoo-user] LDAP server questions

2014-02-18 Thread Pandu Poluan
On Feb 18, 2014 1:13 PM, J. Roeleveld jo...@antarean.org wrote:

 On 18 February 2014 06:03:02 CET, Pandu Poluan pa...@poluan.info wrote:
 Hello list!
 
 I'm planning to replace an Active Directory server currently
 functioning
 *only* as an LDAP server, with a dedicated Linux-based LDAP server.
 
 Now, the function of the LDAP server is at the moment:
 * Provide the settings database for Axigen email server
 * Provide group membership for BlueCoat proxy (who allowed to access
 what)
 * Provide group membership for FreeRADIUS
 * Provide group membership for Fortinet VPN
 
  The day-to-day management will be handled by another division, and I'm
  quite sure that they prefer a GUI, so the solution really should have
  GUI support (either a Windows-based 'client' or a web-based admin
  console).
 
 Apparently, there are now many implementations of LDAP in the *nix
 world,
 such as OpenLDAP, OpenDS, ApacheDS, and 389DS.
 
  Do any of you have experience with them? Which one do you think is the
  most mature and supported? And, quite importantly, which one has a GUI
  front-end?
 
 Rgds,
 --

 OpenLDAP has a web-based GUI: phpldapadmin.

 Both are in the tree.

 I use this myself for all the user accounts, allowing me to maintain only
a single repository for all the services and desktops.

 Not been able to get ms windows to authenticate against it though. But
that requires further tools to be properly configured. (Think samba as a DC)


Interesting... thanks for the heads up!

MS Windows authentication is not necessary, since this AD server is not
used for that purpose...

Rgds,
--


[gentoo-user] LDAP server questions

2014-02-17 Thread Pandu Poluan
Hello list!

I'm planning to replace an Active Directory server currently functioning
*only* as an LDAP server, with a dedicated Linux-based LDAP server.

Now, the function of the LDAP server is at the moment:
* Provide the settings database for Axigen email server
* Provide group membership for BlueCoat proxy (who allowed to access what)
* Provide group membership for FreeRADIUS
* Provide group membership for Fortinet VPN

The day-to-day management will be handled by another division, and I'm
quite sure that they prefer a GUI, so the solution really should have GUI
support (either a Windows-based 'client' or a web-based admin console).

Apparently, there are now many implementations of LDAP in the *nix world,
such as OpenLDAP, OpenDS, ApacheDS, and 389DS.

Do any of you have experience with them? Which one do you think is the most
mature and supported? And, quite importantly, which one has a GUI front-end?

Rgds,
--


Re: [gentoo-user] Re: Portage performance dropped considerably

2014-02-03 Thread Pandu Poluan
On Jan 28, 2014 5:57 AM, Neil Bothwick n...@digimed.co.uk wrote:

 On Mon, 27 Jan 2014 22:54:28 +0100, hasufell wrote:

   If it's about performance (in the sense of speed), then paludis
   is worse, because dependency calculation is more complex/complete
   there.
  
   That makes no sense at all. Paludis is written in a different
   language using different algorithms. It's not about the amount of
   work it does so much as how efficiently it does it.

  That's exactly what I was saying. I was talking about speed, not
  efficiency.

 But the efficiency of the algorithm, and the language, affects the speed.
 You can't presume it does more, therefore it takes longer if the two
 programs do things in very different ways.


I was thinking: is it feasible to precalculate the dependency tree? Or
at least preprocess all the sane (and insane) dependencies to help
portage?

Rgds,
--


Re: [gentoo-user] Re: Portage performance dropped considerably

2014-02-03 Thread Pandu Poluan
On Feb 3, 2014 9:17 PM, Alan McKinnon alan.mckin...@gmail.com wrote:

 On 03/02/2014 16:04, Pandu Poluan wrote:
 
  On Jan 28, 2014 5:57 AM, Neil Bothwick n...@digimed.co.uk wrote:
 
  On Mon, 27 Jan 2014 22:54:28 +0100, hasufell wrote:
 
If it's about performance (in the sense of speed), then paludis
is worse, because dependency calculation is more complex/complete
there.
   
That makes no sense at all. Paludis is written in a different
language using different algorithms. It's not about the amount of
work it does so much as how efficiently it does it.
 
   That's exactly what I was saying. I was talking about speed, not
   efficiency.
 
  But the efficiency of the algorithm, and the language, affects the
speed.
  You can't presume it does more, therefore it takes longer if the two
  programs do things in very different ways.
 
 
  I was thinking: is it feasible, to precalculate the dependency tree?
  Or, at least preprocess all the sane (and insane) dependencies to help
  portage?


 I thought that's what the portage cache does, as far as it can.

 True, the cache reflects the state of the tree and not the parts of the
 tree a given machine is using, so how big a diff does that give? And
 don't forget overlays - they can slow things down immensely as more
 often than not there's no cache for them unless the user knows to do it
 manually.


Well, AFAIK, portage needs to more or less simulate everything going on in
an ebuild to get the list of dependencies/blockers... If this could be
'pre-simulated', resulting in a simpler-to-parse 'database' of
dependencies...

Rgds,
--


Re: [gentoo-user] IPTables question... simple as possible for starters

2013-12-31 Thread Pandu Poluan
On Dec 30, 2013 7:31 PM, shawn wilson ag4ve...@gmail.com wrote:

 Minor additions to what Pandu said...

 On Mon, Dec 30, 2013 at 7:02 AM, Pandu Poluan pa...@poluan.info wrote:
  On Mon, Dec 30, 2013 at 6:07 PM, Tanstaafl tansta...@libertytrek.org
wrote:

  The numbers within [brackets] are statistics/counters. Just replace
  them with [0:0], unless you really really really have a good reason to
  not start counting from 0...
 

 AFAIK, there's no reason this shouldn't always be set to 0. If you want
 to keep your counters, use --noflush

  NOTE: In that ServerFault posting, I suggested using the anti-attack
  rules in -t raw -A PREROUTING. This saves a great deal of processing,
  because the raw table is just that: raw, unadulterated, unanalyzed
  packets. The CPU assumes nothing, it merely tries to match well-known
  fields' values.
 

 And because nothing is assumed, you can't prepend a conntrack rule. I
 can't think of why you'd ever want those packets (and I should
 probably move at least those 4 masks to raw) but just an FYI - no
 processing means no processing.

 Also see nftables: http://netfilter.org/projects/nftables/


Very interesting... were they aiming for something similar to *BSD's pf
firewall?

I personally prefer an iptables-style firewall; no guessing about how a
state machine will respond in strange situations. Especially since I greatly
leverage ipset and '-m condition' (part of xtables-addons), which might or
might not be fully supported by nftables.

Rgds,
--


Re: [gentoo-user] IPTables question... simple as possible for starters

2013-12-30 Thread Pandu Poluan
On Mon, Dec 30, 2013 at 6:07 PM, Tanstaafl tansta...@libertytrek.org wrote:


[-- LE SNIP --]

 Ok, well, maybe I should have posted my entire ruleset...

 I have this above where I define my chains:

 #
 *filter
 :INPUT DROP [0:0]
 :FORWARD DROP [0:0]
 :OUTPUT DROP [0:0]
 #

 Does it matter where this goes?


Yes. Chain declarations must come before the rules themselves.

 And then above that, I have something else that I've never understood:

 *mangle

Begin declaration of the mangle table.

 :PREROUTING ACCEPT [1378800222:449528056411]
 :INPUT ACCEPT [1363738727:447358082301]
 :FORWARD ACCEPT [0:0]
 :OUTPUT ACCEPT [1221121261:1103241097263]
 :POSTROUTING ACCEPT [1221116979:1103240864155]

The numbers within [brackets] are statistics/counters. Just replace
them with [0:0], unless you really really really have a good reason to
not start counting from 0...

The second word is the 'policy' of the chain, i.e., the default action
taken if no rule matches in the chain.

 -A PREROUTING -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG FIN,PSH,URG
 -j DROP
 -A PREROUTING -p tcp -m tcp --tcp-flags FIN,SYN,RST,PSH,ACK,URG NONE -j DROP
 -A PREROUTING -p tcp -m tcp --tcp-flags SYN,RST SYN,RST -j DROP
 -A PREROUTING -p tcp -m tcp --tcp-flags FIN,SYN FIN,SYN -j DROP

Alright, the `--tcp-flags` option takes two parameters: the flags to
examine, and the flags that must be set. For example, `--tcp-flags
SYN,RST SYN,RST` examines the SYN and RST bits and matches only if both
are set (an illegal combination).

These 4 rules collectively block 'well-known TCP Attacks', which I've
listed here:

http://serverfault.com/a/245713/15440

NOTE: In that ServerFault posting, I suggested using the anti-attack
rules in -t raw -A PREROUTING. This saves a great deal of processing,
because the raw table is just that: raw, unadulterated, unanalyzed
packets. The CPU assumes nothing, it merely tries to match well-known
fields' values.
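
For instance, moving one of the rules above into the raw table is just
a matter of naming the table (a sketch using the SYN,RST rule):

iptables -t raw -A PREROUTING -p tcp -m tcp --tcp-flags SYN,RST SYN,RST -j DROP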

You *do* have to make sure that you don't forget to compile kernel
support for RAW tables ;-)

 COMMIT

End of mangle table declaration. Commit all chain definitions and
chain rules in one atomic operation.

 ipset create ssh_in iphash
 ipset add ssh_in 1.2.3.4

 and then this works:
 -A INPUT -m set --match-set ssh_in src -j ACCEPT

 ipset has the same save/load type things as ipt (minor differences
 with how you handle reload, but google or ask if you want to know).
 The set needs to be in place before the ipt rule is added, so ipset
 comes first in your boot sequence.


 Thanks, looks interesting and useful...

 So much to learn, so little time... ;)


iptables is a powerful beast; learn it well, and you'll prosper :-)


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pandu.poluan.info/blog/
 • Linked-In : http://id.linkedin.com/in/pepoluan



[gentoo-user] Fusion-IO Experience?

2013-11-26 Thread Pandu Poluan
Hello list!

My company is considering purchasing a couple of Fusion-IO [1]
devices, especially the ioDrive Octal model [2]. However, before we
actually commit to purchasing it, I'd like to gather some info first.

Have any of you had any experience with a Fusion-IO product? Not
necessarily the same model.

Do you have difficulties running Linux on top of it? (We don't
actually plan to *boot* from it; most likely the system will boot from
a RAID-1 array of SSDs).

Did anyone successfully run it with Gentoo?


Any inputs will be very appreciated.


[1] http://www.fusionio.com
[2] http://www.fusionio.com/products/iodrive-octal/

Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pandu.poluan.info/blog/
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] Re: New mobo change

2013-10-18 Thread Pandu Poluan
On Fri, Oct 18, 2013 at 10:52 PM, J. Roeleveld jo...@antarean.org wrote:
 On Fri, October 18, 2013 05:33, Dale wrote:
 Bruce Hill wrote:
 On Thu, Oct 17, 2013 at 04:40:52PM -0500, Dale wrote:
 Well, this is interesting.  I swapped out the mobo.  First, it has the
 UEFI BIOS thing.  That was interesting for sure.  I'm not complaining
 but not used to it and wasn't expecting it either.  Second, it works
  except for the third part.  Third thing is, no mouse worky.  It works in
  the BIOS but not in the OS.  I have gpm set to start and it doesn't work
  in a console or a GUI.  I tried everything I can think of, no mouse.  I
 had to swap again.  I'm back to my old mobo.  Here is the kicker.  I
 plugged the USB mouse into the old mobo, it works just fine.  It works
 in KDE, console etc.  It just works.  The only kernel change I made was
 for the chipset on the mobo.  I left the USB stuff alone.

  Instead of me posting a lot of worthless stuff, what do y'all need me to
  post?  Keep in mind, I'm on my old mobo and it works on here.  I got the
  kernel config tho.  It's a start, I hope?   I followed this wiki howto.

 http://wiki.gentoo.org/wiki/USB/HOWTO

 Thoughts?  What do you need to figure this out?

 Dale
 Obliviously something simple, but why don't you throw your BB gun in the
 truck
 and come over so we can sight in the .44 magnum?

  Put all the stuff on the new mobo, boot with SystemRescueCD, see that the
  mouse works, take 2 aspirins, and let's go from there.

 That was my plan B, Sysrescue.  It wouldn't boot either.  It gets to the
 point where it is trying to find the USB stick, then errors out because
 it can't find it which is odd since it is plugged into a USB port and
 the mousey don't work from there either.  Thing is, it works just fine
 with the old mobo.  I actually tested it, mounted some stuff and
 everything.  Works fine on old mobo, errors out on the new one.  I also
  switched back to the legacy and disabled the UEFI stuff.  It still
 didn't work.  It does try to boot from it tho so it does see the USB
  stick at first.  It just can't see it later on.  Hmmm.

 Is USB-support ENABLED in the BIOS-settings?


I suspect this.

Since the UEFI BIOS (or whatever it's supposed to be called) is
mouse-driven, USB is turned on _while_in_the_UEFI_BIOS_.

Outside of the UEFI BIOS, USB might be enabled or disabled depending on
the settings inside the UEFI BIOS.


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] New mobo change

2013-10-14 Thread Pandu Poluan
On Oct 15, 2013 10:51 AM, Dale rdalek1...@gmail.com wrote:

 Howdy,

 I ordered the new mobo as much as I needed to wait.  The mobo is the
 same brand but a different chipset and a couple other things are
 different.  I have already built a kernel for those changes.  I plan to
 put everything on the old mobo on the new mobo.  That includes the CPU.
 I'm pretty sure this will not be needed but want to ask to be sure.  Do
 I need to do a emerge -e world or should it just work like it is?
 Since the CPU is going to be the exact same CPU, I'm thinking it is not
 needed.  I do have march=native set in make.conf.

 Thoughts?  Thanks.


Personally, I think all you need to do is ensure that the kernel has all
the drivers it needs to speak to the new mobo. Other members of the @world
set rely on the drivers in the kernel.

But I don't use any GUI or audio; if you're using a GUI and/or audio, you
might also have to re-emerge the relevant bits.

BSTS: just re-emerge @world :-)

Rgds,
--


Re: OT - RAM disks - WAS Re: [gentoo-user] Network failed and weird error message

2013-10-14 Thread Pandu Poluan
On Oct 14, 2013 6:04 PM, Tanstaafl tansta...@libertytrek.org wrote:

 On 2013-10-13 5:49 PM, Dale rdalek1...@gmail.com wrote:

 Talk about putting some stuff on tmpfs.  O_O  I have always wanted to
 copy the tree to tmpfs and run time emerge -uvaDN world.  Just to see
 how fast it will go.  lol


 I remember once I worked for an Apple reseller that had this accounting
program that required them to do some kind of 'reconciliation' every month
that required a massive amount of processing - it took like 36 hours or
something ridiculous (literally almost took all weekend), and he had
implemented a rule that someone had to be there the entire time to baby sit
the process - apparently it wasn't uncommon for there to be an error that
would require them to restart it - and this was on a pretty powerful system
at the time.

 Well, one weekend, when we were building a system for a customer with
tons of RAM (for the time) I talked them into a little experiment. The boss
didn't believe me when I told him I could get the reconciliation processing
time down to less than a day (I told him probably just a few hours, but
wasn't sure)... so we made a bet.

 I took a Quadra 900 (or maybe it was a 950), and added a bunch of RAM - I
think we got it up to 128MB or something ridiculous (this was in about
1992). The accounting DB was about 40MB at the time, but hey, we had the
RAM, so I just loaded it up.

 I created a RAM disk, copied the entire Accounting DB into it, and
started running the reconciliation. The process finished after about 45
minutes (I was even surprised at that), and while there were no errors and
it said it had completed successfully, the boss was sure that something had
gone wrong. So, he re-ran it the old way on the old server, and almost 2
days later, when the numbers matched, he just shook his head and paid me
off, muttering about the lost weekends over the last 5 years he'd been
there. He kept that machine around for running the reconciliation for at
least a few months, but then I left, so no idea how long he kept it for...


Nice... a 48x performance improvement? I know of some DBAs who would gladly
pay an arm + a leg + their grandmothers for that kind of improvement :-)

Kind of tangential, but that's what Oracle is aiming with their TimesTen
product: give the server oodles of RAM, and load the database in memory.

Another similar performance-improving method is using Fusion-IO to
load the database into direct-memory-mapped SSDs. They claim that their
most high-end Fusion-IO devices can reach up to 9 million IOPS...

Rgds,
--


Re: [gentoo-user] scripted iptables-restore

2013-10-13 Thread Pandu Poluan
On Oct 13, 2013 9:15 PM, Michael Orlitzky mich...@orlitzky.com wrote:

 On 10/13/2013 06:08 AM, Martin Vaeth wrote:
  5. You can't script iptables-restore!
 
  Well, actually you can script iptables-restore.
 
  For those who are interested:
  net-firewall/firewall-mv from the mv overlay
  (available over layman) now provides a separate
  firewall-scripted.sh
  which can be conveniently used for such scripting.
 

 You snipped the rest of my point =)

  You can write a bash script that writes an iptables-restore script to
  accomplish the same thing, but how much complexity are you willing to
  add for next to no benefit?

 If you have a million rules and you need to wipe/reload them all
 frequently you're probably doing something wrong to begin with.

 With bash, you can leverage all of the features of bash that everybody
 already knows. You can read files, call shell commands, pipe between
 them, etc. You can write bash functions to avoid repetitive commands.
 You can write inline comments to explain what the rules do.

 Something like,

   # A function which sets up a static mapping between an external IP
   # address and an internal one.
   #
    # USAGE: static_nat <internal ip> <external ip>
   #
   function static_nat() {
   iptables -t nat -A PREROUTING  -d ${2} -j DNAT --to ${1}
   iptables -t nat -A POSTROUTING -s ${1} -j SNAT --to ${2}
   }

 can make your iptables script a lot cleaner, and it conveys your intent
 better when the rule is created:

   # Danny likes to torrent linux isos at work so he needs a public ip
   static_nat 192.168.1.x 1.2.3.x

 I'm not saying you can't do all of this with iptables-restore, just that
 you're punishing yourself for little benefit if you do.


One benefit of being familiar with iptables-save and iptables-restore : you
can use iptables-apply.

Might save your sanity if you fat-fingered your iptables rule.

Just do `iptables-apply -t 180 <(preprocessor.sh new-rules.conf)`. Changes
are applied atomically. After 180 seconds, if you don't indicate to
iptables-apply that the changes are proper, it atomically reverts all the
netfilter tables.

bash scripts are powerful, but there might be unexpected cases that render
the netfilter tables to be wildly different from what you actually want.

The file format used by iptables-{save,restore,apply} is more like a
domain-specific language; less chance of partial mistakes. And it's atomic:
either everything gets applied, or nothing gets applied (without clobbering
the existing in-effect rules).
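
A minimal example of that format (contents hypothetical, in the same
shape as the *filter block quoted earlier in this thread):

*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT

Everything between the table declaration and COMMIT is applied as one
atomic unit.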

Rgds,
--


Re: [gentoo-user] scripted iptables-restore (was: Where to put advanced routing configuration?)

2013-10-13 Thread Pandu Poluan
On Oct 13, 2013 5:09 PM, Martin Vaeth va...@mathematik.uni-wuerzburg.de
wrote:

  5. You can't script iptables-restore!
 
  Well, actually you can script iptables-restore.

 For those who are interested:
 net-firewall/firewall-mv from the mv overlay
 (available over layman) now provides a separate
 firewall-scripted.sh
 which can be conveniently used for such scripting.


Thanks, Martin! I was about to create my own preprocessor, but I'll check
out yours first. If it's what I had planned, may I contribute, too?

Rgds,
--


Re: [gentoo-user] which filesystem is more suitable for /var/tmp/portage?

2013-10-03 Thread Pandu Poluan
On Thu, Oct 3, 2013 at 4:55 PM, Kerin Millar kerfra...@fastmail.co.uk wrote:
 On 18/09/2013 16:09, Alan McKinnon wrote:

 On 18/09/2013 16:05, Peter Humphrey wrote:

 On Wednesday 18 Sep 2013 14:52:30 Ralf Ramsauer wrote:

 In my opinion, reiser is a bit outdated ...


  What is the significance of its date? I use reiserfs on my Atom box for
  /var, /var/cache/squid and /usr/portage, and on my workstation for
  /usr/portage and /home/prh/.VirtualBox. It's never given me any trouble
  at all.



 Sooner or later, reiser is going to bitrot. The ReiserFS code itself
 will not change, but everything around it and what it plugs into will
 change. When that happens (not if - when), there is no-one to fix the
 bug and you will find yourself up the creek sans paddle

 An FS is not like a widget set; you can't really live with and
 work around any defects that develop. When an FS needs patching, it needs
 patching, no ifs and buts. Reiser may nominally have a maintainer but in
 real terms there is effectively no-one

 Circumstances have caused ReiserFS to become a high-risk scenario and
 even though it might perform faultlessly right now, continued use should
 be evaluated in terms of that very real risk.


 Another problem with ReiserFS is its intrinsic dependency on the BKL (big
 kernel lock). Aside from hampering scalability, it necessitated compromise
 when the time came to eliminate the BKL:

 https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=8ebc423

 Note the performance loss introduced by the patch; whether that was
 addressed I do not know.

 In my view, ReiserFS is only useful for saving space through tail packing.
 Unfortunately, tail packing makes it slower still (an issue that was
 supposed to be resolved for good in Reiser4).

 In general, I would recommend ext4 or xfs as the go-to filesystems these
 days.

 --Kerin


XFS is honestly looking mighty good if your host has 8 cores or more:

http://lwn.net/Articles/476263/

If data corruption is *totally* not acceptable, and if you have more
than one disk of similar size, ZFS might be even more suitable.

Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] s6 et al

2013-10-03 Thread Pandu Poluan
On Oct 3, 2013 9:26 PM, William Hubbs willi...@gentoo.org wrote:

 On Wed, Oct 02, 2013 at 09:52:36AM -0500, Bruce Hill wrote:
  On Wed, Oct 02, 2013 at 09:44:35AM -0500, William Hubbs wrote:
   On Wed, Oct 02, 2013 at 12:04:24AM -0500, Bruce Hill wrote:
Just stumbled across some very interesting software/ideas:
   
http://skarnet.org/poweredby.html
  
   Yes, I have been looking at this for a few days, and some of the other
   members of the OpenRC team are interested in it as well.
  
   I am waiting for a bug to be fixed before I can put execline and s6 in
   the tree [1].
  
   William
  
   [1] https://bugs.gentoo.org/show_bug.cgi?id=486744
 
  Found that bug report earlier and CCed myself. Will track the progress,
maybe
  use it on a test box I plan to build soon.

 That bug report is now closed, and skalibs, execline and s6 are now in
 the tree. If you emerge s6 you get all of them since s6 depends on them.

 For everyone else who is following this thread: The web page originally
 referred to is just discussing what his server is running. The more
 interesting page is his software page:

 http://www.skarnet.org/software/

 William


I chuckle at what s6's author wrote about systemd:

http://skarnet.org/software/s6/s6-svscan-not-1.html

Rgds,
--


Re: [gentoo-user] systemd installation location

2013-09-29 Thread Pandu Poluan
On Sep 30, 2013 9:31 AM, Daniel Campbell li...@sporkbox.us wrote:


--- le snip ---

 If the proposed solution is all binaries and libraries in the same
 root/prefix directory, then why call it /usr?

My question exactly.

Why install to /usr at all, leaving /bin and /sbin practically empty
directories containing only symlinks?

I mean, I have no quarrel with / and /usr separation, having had them in
the same partition for ages... but why not do it the other way around,
i.e., put everything in / and have /usr be a container for symlinks?

 It has little to do with
 users if it's nothing but binaries, libraries, etc. In addition, would a
 local directory still be under this, for user-compiled programs not
 maintained by the PM? Or does that deserve a different top level
directory?

 Then there's /opt, whose purpose I'm still not sure of. This is
 strengthening the idea that something new should be thought up and
 drafted.

IIRC, it was supposed to contain third-party binaries, i.e., things not
available in a distro's package repo. Thus, when one's tired of a
third-party binary package, he/she can just delete the relevant directory
under /opt, because the third-party package might not be uninstallable
using the distro's package management system (if any).

Of course, he/she might have to clean up the leftover crud in /etc, but
those are usually small and can safely be ignored. Except perhaps startup
initscripts.

 Not necessarily by us at Gentoo, but *somebody*. If I was crazy
 and knowledgeable enough I'd volunteer myself.


Rgds,
--


Re: [gentoo-user] ZFS

2013-09-21 Thread Pandu Poluan
On Sep 21, 2013 7:54 PM, thegeezer thegee...@thegeezer.net wrote:

 On 09/17/2013 08:20 AM, Grant wrote:
  I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
  running.  I'd also like to stripe for performance, resulting in
  RAID10.  It sounds like most hardware controllers do not support
  6-disk RAID10 so ZFS looks very interesting.
 
  Can I operate ZFS RAID without a hardware RAID controller?
 
  From a RAID perspective only, is ZFS a better choice than conventional
  software RAID?
 
  ZFS seems to have many excellent features and I'd like to ease into
  them slowly (like an old man into a nice warm bath).  Does ZFS allow
  you to set up additional features later (e.g. snapshots, encryption,
  deduplication, compression) or is some forethought required when first
  making the filesystem?
 
  It looks like there are comprehensive ZFS Gentoo docs
  (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
  world about how much extra difficulty/complexity is added to
  installation and ongoing administration when choosing ZFS over ext4?
 
  Performance doesn't seem to be one of ZFS's strong points.  Is it
  considered suitable for a high-performance server?
 
  http://www.phoronix.com/scan.php?page=news_itempx=MTM1NTA
 
  Besides performance, are there any drawbacks to ZFS compared to ext4?
 
  - Grant
 
 Howdy,
 been reading this thread and am pretty intrigued, ZFS is much more than
 i thought it was.
 I was wondering though does ZFS work as a multiple client single storage
 cluster such as GFS/OCFS/VMFS/OrangeFS ?

Well... not really.

Of course you could run ZFS over DRBD, or run any of those filesystems on
top of a zvol...

But I'll say, ZFS is not (yet?) a clustered filesystem.

 I was also wondering if anyone could share their experience with ZFS on
 iscsi - especially considering the readahead /proc changes required on
 same system ?
 thanks!


Although I have no experience of ZFS over iSCSI, I don't think that's any
problem.

As long as ZFS can 'see' the block device when the time comes for it to
mount the pool and all 'child' datasets (or zvols), all should be well.

In this case, however, you would want the iSCSI target to not perform
readahead. Let ZFS 'instruct' the iSCSI target on which sectors to read.

Rgds,
--


[gentoo-user] The meaning of number in brackets in /proc/cpuinfo power management?

2013-09-20 Thread Pandu Poluan
Hello list!

Does anyone know the meaning of the 'number between brackets' in the
power management line of /proc/cpuinfo?

For instance (I snipped the flags line to not clutter the email):

processor   : 0
vendor_id   : AuthenticAMD
cpu family  : 21
model   : 2
model name  : AMD Opteron(tm) Processor 6386 SE
stepping: 0
cpu MHz : 2800.110
cache size  : 2048 KB
fdiv_bug: no
hlt_bug : no
f00f_bug: no
coma_bug: no
fpu : yes
fpu_exception   : yes
cpuid level : 13
wp  : yes
flags   : --snip--
bogomips: 5631.71
clflush size: 64
cache_alignment : 64
address sizes   : 48 bits physical, 48 bits virtual
power management: ts ttp tm 100mhzsteps hwpstate [9] [10]

What are [9] and [10] supposed to mean?

(Note: The OS is not actually Gentoo, but this list is sooo
knowledgeable, and methinks the output of /proc/cpuinfo is quite
universal...)


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] Re: ZFS

2013-09-19 Thread Pandu Poluan
On Thu, Sep 19, 2013 at 11:49 AM, Grant emailgr...@gmail.com wrote:
 I think many folks are interested in upgrading to EXT4 with RAID  from
 an ordinary JBOD workstation(server); or better yet to ZFS on RAID. I wish
 one of the brighter minds amongst us would put out a skeleton
 (wiki) information page as such:

 http://wiki.gentoo.org/wiki/ZFS+RAID

 I know I have struggled with completing this sort of installation
  several times in the last 6 months. I'm sure this (proposed) wiki page
 would get lots of updates from the Gentoo user community. Surely,
 I'm not qualified to do this, or it would have already been on the
 gentoo wiki

 Much of the older X + RAID pages are deprecated, when one considers
 the changes that accompany such an installation ( Grub2, UUID, fstab,
 partitioning of drives, Kernel options, just to name a few). We're
 talking about quite a bit of deviation from the standard handbook
 installation, fraught with hidden, fatal mis-steps.

 Any important points or key concepts a ZFS newbie should remember when
 installing with it for the first time?

 - Grant


Plan carefully how you are going to create the vdevs before you add
them to a pool.

Once a vdev has been created and added to a pool, you can't ever
un-add or replace it.

(You can always replace a component of a vdev -- e.g., if one physical
drive fails -- but you can't remove a vdev in its entirety.)


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] ZFS

2013-09-19 Thread Pandu Poluan
On Thu, Sep 19, 2013 at 2:40 PM, Dale rdalek1...@gmail.com wrote:
 Grant wrote:
 Interesting news related to ZFS:

 http://open-zfs.org/wiki/Main_Page
 I wonder if this will be added to the kernel at some point in the
 future?  May even be their intention?
 I think the CDDL license is what's keeping ZFS out of the kernel,
 although some argue that it should be integrated anyway.  OpenZFS
 retains the same license.

 - Grant

 .


 Then I wonder why it seems to have forked?  *scratches head*


At the moment, only to 'decouple' ZFS development from Illumos development.

Changing a license requires the approval of all rightsholders, and that
takes time.

At least, with a decoupling, ZFS can quickly improve to fulfill the
needs of its users, no longer depending on Illumos' dev cycle.


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] Re: ZFS

2013-09-19 Thread Pandu Poluan
On Thu, Sep 19, 2013 at 2:44 PM, Hinnerk van Bruinehsen
h.v.bruineh...@fu-berlin.de wrote:
 On Wed, Sep 18, 2013 at 09:49:40PM -0700, Grant wrote:
  I think many folks are interested in upgrading to EXT4 with RAID  from
  an ordinary JBOD workstation(server); or better yet to ZFS on RAID. I wish
  one of the brighter minds amongst us would put out a skeleton
  (wiki) information page as such:
 
  http://wiki.gentoo.org/wiki/ZFS+RAID
 
  I know I have struggled with completing this sort of installation
   several times in the last 6 months. I'm sure this (proposed) wiki page
  would get lots of updates from the Gentoo user community. Surely,
  I'm not qualified to do this, or it would have already been on the
  gentoo wiki
 
  Much of the older X + RAID pages are deprecated, when one considers
  the changes that accompany such an installation ( Grub2, UUID, fstab,
  partitioning of drives, Kernel options, just to name a few). We're
  talking about quite a bit of deviation from the standard handbook
  installation, fraught with hidden, fatal mis-steps.

 Any important points or key concepts a ZFS newbie should remember when
 installing with it for the first time?

 - Grant


  You should definitely determine the right value for ashift on pool
  creation (it controls the alignment on the medium). It's an option that
  you afaik can only set on filesystem creation and therefore needs a
  restart from scratch if you get it wrong.
 According to the illumos wiki it's possible to run a mixed pool (if you have
 drives requiring different alignments[1])
 If in doubt: ask ryao (iirc given the right information he can tell you which
 are the right options for you if you can't deduce it yourself).
 Choosing the wrong alignment can cause severe performance loss (that's not
 a ZFS issue but happened when 4k sector drives appeared and tools like fdisk
 weren't aware of this).

 WKR
 Hinnerk


Especially with SSDs. One must find out the block size used by one's SSDs.

With spinning disks, setting ashift=12 is enough, since no spinning
disk has sectors larger than 2^12 bytes.

With SSDs, one might have to set ashift=13 or even ashift=14.
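
For reference, ashift is set per vdev at creation time, e.g. (a
sketch; the pool and device names are hypothetical):

# 2^12 = 4096-byte alignment, matching 4 KiB sectors
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb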

Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] Re: ZFS

2013-09-19 Thread Pandu Poluan
On Thu, Sep 19, 2013 at 5:37 PM, Tanstaafl tansta...@libertytrek.org wrote:
 On 2013-09-19 3:44 AM, Hinnerk van Bruinehsen h.v.bruinehsen@fu-

 You should definitely determine the right value for ashift on pool
 creation
 (it controls the alignment on the medium). It's an option that you afaik
 can only set
 on filesystem creation and therefore needs a restart from scratch if you
 get it
 wrong.
 According to the illumos wiki it's possible to run a mixed pool (if you
 have
 drives requiring different alignments[1])
 If in doubt: ask ryao (iirc given the right information he can tell you
 which
 are the right options for you if you can't deduce it yourself).
 Choosing the wrong alignment can cause severe performance loss (that's not
 a ZFS issue but happened when 4k sector drives appeared and tools like
 fdisk
 weren't aware of this).


 Yikes...

 Ok, shouldn't there be a tool or tools to help with this? Ie, boot up on a
 bootable tools disk on the system with all drives connected, then let it
 'analyze' your system, maybe ask you some questions (ie, how you will be
 configuring the drives/RAID, etc), then spit out an optimized config for
 you?

 It is starting to sound like you need to be a dang engineer just to use
 ZFS...


Just do ashift=12 and you're good to go. No need to analyze further.

The reason I say that is that in the future, *all* drives will have 4
KiB sectors. Currently, many drives still have 512 B sectors. But when
one day your drive dies and you need to replace it, will you be able
to find a drive with 512 B sectors?

Unlikely.

That's why, even if your drives are currently of the 'classic' 512 B
ones, go with ashift=12 anyway.

For SSDs, the situation is murkier. Many SSDs 'lie' about their actual
sector size, reporting to the OS that their sector size is 512 B (or 4
KiB). No tool can pierce this smokescreen. The only way is to
do research on the Internet.

IIRC, a ZFS developer has embedded -- or planned to embed -- a small
database into the ZFS utilities to conclusively determine what
settings will be optimal. I forgot who exactly. Maybe @ryao can pipe
in (hello Richard! If you're watching this thread, feel free to add
more info).


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] Re: ZFS

2013-09-19 Thread Pandu Poluan
On Thu, Sep 19, 2013 at 8:01 PM, Grant emailgr...@gmail.com wrote:
 You should definitely determine the right value for ashift on pool
 creation
 (it controls the alignment on the medium). It's an option that you afaik
 can only set
 on filesystem creation and therefore needs a restart from scratch if you
 get it
 wrong.
 According to the illumos wiki it's possible to run a mixed pool (if you
 have
 drives requiring different alignments[1])
 If in doubt: ask ryao (iirc given the right information he can tell you
 which
 are the right options for you if you can't deduce it yourself).
 Choosing the wrong alignment can cause severe performance loss (that's not
 a ZFS issue but happened when 4k sector drives appeared and tools like
 fdisk
 weren't aware of this).

 Yikes...

 Ok, shouldn't there be a tool or tools to help with this? Ie, boot up on a
 bootable tools disk on the system with all drives connected, then let it
 'analyze' your system, maybe ask you some questions (ie, how you will be
 configuring the drives/RAID, etc), then spit out an optimized config for
 you?

 It is starting to sound like you need to be a dang engineer just to use
 ZFS...


 Just do ashift=12 and you're good to go. No need to analyze further.

 The reason I said that because in the future, *all* drives will have 4
 KiB sectors. Currently, many drives still have 512 B sectors. But when
 one day your drive dies and you need to replace it, will you be able
 to find a drive with 512 B sectors?

 Unlikely.

 That's why, even if your drives are currently of the 'classic' 512 B
 ones, go with ashift=12 anyway.

 For SSDs, the situation is murkier. Many SSDs 'lie' about their actual
 sector size, reporting to the OS that their sector size is 512 B (or 4
 KiB). No tool can pierce this veil of smokescreen. The only way is to
 do research on the Internet.

 OK, so figure out what SSD you're using and Google to find the correct ashift?

 - Grant


Kind of like that, yes. Find out exactly the size of the SSD's
internal sectors (for lack of a better term), and take the log2 of it:
e.g., 8 KiB pages give ashift=13, since 2^13 = 8192.

But don't go higher than ashift=14.

Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] {OT} Need a new server

2013-09-17 Thread Pandu Poluan
On Tue, Sep 17, 2013 at 2:28 PM, Grant emailgr...@gmail.com wrote:
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 Can't you just emerge zfs-kmod?  Or maybe you're trying to do it
 without module support?


@tanstaafl's kernels have no module support.

Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] ZFS

2013-09-17 Thread Pandu Poluan
On Tue, Sep 17, 2013 at 2:20 PM, Grant emailgr...@gmail.com wrote:
 I'm convinced I need 3-disk RAID1 so I can lose 2 drives and keep
 running.  I'd also like to stripe for performance, resulting in
 RAID10.  It sounds like most hardware controllers do not support
 6-disk RAID10 so ZFS looks very interesting.

 Can I operate ZFS RAID without a hardware RAID controller?


Yes. In fact, that's ZFS' preferred mode of operation (i.e., it
handles all redundancy by itself).

 From a RAID perspective only, is ZFS a better choice than conventional
 software RAID?


Yes.

ZFS checksums all blocks during writes, and verifies those checksums
during reads.

It is possible to have 2 bits flipped at the same time across 2 hard
disks. In such a case, the RAID controller will never see the bitflips.
But ZFS will.

 ZFS seems to have many excellent features and I'd like to ease into
 them slowly (like an old man into a nice warm bath).  Does ZFS allow
 you to set up additional features later (e.g. snapshots, encryption,
 deduplication, compression) or is some forethought required when first
 making the filesystem?


Snapshots are built in from the beginning. All you have to do is create
one when you want it.

Deduplication can be turned on and off at will -- but be warned: you
need a HUGE amount of RAM.

Compression can be turned on and off at will. Previously-compressed
data won't become uncompressed unless you modify it.
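
All three are one-liners. A sketch, assuming a pool named tank with a
dataset tank/data (names are placeholders):

zfs snapshot tank/data@before-upgrade   # instant; takes no extra space at first
zfs set compression=on tank/data        # affects newly written blocks only
zfs set dedup=on tank/data              # only with plenty of RAM!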

 It looks like there are comprehensive ZFS Gentoo docs
 (http://wiki.gentoo.org/wiki/ZFS) but can anyone tell me from the real
 world about how much extra difficulty/complexity is added to
 installation and ongoing administration when choosing ZFS over ext4?


Very very minimal. So minimal, in fact, that if you don't plan to use
ZFS as a root filesystem, it's laughably simple. You don't even have
to edit /etc/fstab.
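
That's because each dataset carries its mountpoint as a property and
mounts itself at pool import. For example (dataset name is a placeholder):

zfs create -o mountpoint=/var/mail tank/mail   # created *and* mounted in one step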

 Performance doesn't seem to be one of ZFS's strong points.  Is it
 considered suitable for a high-performance server?

 http://www.phoronix.com/scan.php?page=news_itempx=MTM1NTA


Several points:

1. The added steps of checksumming (and verifying the checksums)
*will* give a performance penalty.

2. When comparing performance of 1 (one) drive, of course ZFS will
lose. But when you build a ZFS pool out of 3 pairs of mirrored drives,
throughput will increase significantly as ZFS has the ability to do
'load-balancing' among mirror-pairs (or, in ZFS parlance, mirrored
vdevs).
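
Such a pool -- the moral equivalent of RAID1+0 -- is one command
(device names are placeholders):

zpool create tank mirror sda sdb mirror sdc sdd mirror sde sdf

Reads and writes then get striped across the three mirrored vdevs.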

Go directly to this post:
http://phoronix.com/forums/showthread.php?79922-Benchmarks-Of-The-New-ZFS-On-Linux-EXT4-Winsp=326838#post326838

Notice how ZFS won against ext4 in 8 scenarios out of 9. (The only
scenario where ZFS lost is in the single-client RAID-1 scenario)

 Besides performance, are there any drawbacks to ZFS compared to ext4?


1. You need a huge amount of RAM to let ZFS do its magic. But RAM is
cheap nowadays. Data... possibly priceless.

2. Be careful when using ZFS on a server on which processes rapidly
spawn and terminate. ZFS doesn't like memory fragmentation.

For point #2, I can give you a real-life example:

My mail server, for some reason, chokes if too many TLS errors happen.
So, I placed Perdition in front to capture all POP3 connections and
'un-TLS' them. Perdition spawns a new process for *every* connection.
My mail server has 2000 users; I regularly see more than 100 Perdition
child processes, many of them very ephemeral (i.e., existing for less
than 5 seconds). The RAM is undoubtedly *extremely* fragmented. ZFS
cries murder when it cannot allocate a contiguous SLAB of memory to
grow its ARC cache.

OTOH, on another very busy server (a mail archiving server using
MailArchiva, handling 2000+ emails per hour), ZFS runs flawlessly. No
incident _at_all_. Undoubtedly because MailArchiva uses one single huge
process (Java-based) to handle all transactions, so there is no RAM
fragmentation.


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] {OT} Need a new server

2013-09-16 Thread Pandu Poluan
On Sun, Sep 15, 2013 at 6:10 PM, Grant emailgr...@gmail.com wrote:
 Is the Gentoo Software RAID + LVM guide the best place for RAID
 install info if I'm not using LVM and I'll have a hardware RAID
 controller?

 Not ready to take the ZFS plunge? That would greatly reduce the complexity
 of RAID+LVM, since ZFS best practice is to set your hardware raid controller
 to JBOD mode and let ZFS take care of the RAID - and no LVM required (ZFS
 has mucho better tools). That is my next big project for when I switch to my
 next new server.

 I'm just hoping I can get comfortable with a process for getting ZFS
 compiled into the kernel that is workable/tenable for ongoing kernel updates
 (with minimal fear of breaking things due to a complex/fragile
 methodology)...

 That sounds interesting.  I don't think I'm up to it this time around,
 but ZFS manages a RAID array better than a good hardware card?


Yes. If you use ZFS to wrestle a JBOD array into its version of
RAID1+0, when it comes time for resilvering (i.e., rebuilding a failed
drive), ZFS smartly only copies the used blocks and skips over unused
blocks.

 It sounds like ZFS isn't included in the mainline kernel.  Is it on its way 
 in?


Unlikely. There has been a discussion on that in this list, and there
is some concern that ZFS' license (CDDL) is not compatible with the
Linux kernel license (GPL), so never the twain shall be integrated.

That said, if your kernel supports modules, it's a piece of cake to
compile the ZFS modules on your own. @ryao has a zfs-overlay you can
use to emerge ZFS as a module.

If you have configured your kernel to not support modules, it's a bit
more work, but ZFS can still be integrated statically into the kernel.

But the onus is on us ZFS users to take the necessary steps.


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] Need help: Filesystem (ext4) corrupted!

2013-09-02 Thread Pandu Poluan
On Sep 2, 2013 11:16 PM, meino.cra...@gmx.de wrote:

 Hi,



 I need some urgent help...



 The rootfs and $HOME of my embedded system are stored
 on a 16GB SD-card (about 5GB used, rest free). The FS
 is ext4.

 Since the system hangs for unknown reasons several times I
 removed the sdcard, put it in a card reader and did an
 ext4.fsck on it.

 Clean was the result.

 Then I forced a check with -f -p.

 The result was:

 solfire:/rootfsck.ext4 -f -p /dev/sdb2
 rootfs: Inodes that were part of a corrupted orphan linked list found.

 rootfs: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
 (i.e., without -a or -p options)
 [1]18644 exit 4 fsck.ext4 -f -p /dev/sdb2


 Return code 4 means

 4- File system errors left uncorrected

 which indicates nothing and all at the same time.


 At this point I started to write this mail.

 Before I fscked the sdcard I mounted the FS and tar'ed everything on
 it into a backup file.

 The tar process did not return an error.

 Since it costs A LOT OF TIME to compile everything from source
 natively on a 1GHz-CPU embedded system - and for obvious other
 reasons - I am very interested in doing the next steps correctly.

 What can I do to eliminate the problem without data loss (best
 case) or to save the most while knowing what and where the corrupted
 data are located on the system?

 Thank you very much in advance for any help!


I'm not really sure how to fix the corrupt fs, but don't forget to back up
the whole disk using dd first.
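
Something along these lines (device and image paths are placeholders;
conv=noerror,sync keeps dd going past any bad spots, padding them with
zeroes):

dd if=/dev/sdb of=/mnt/backup/sdcard.img bs=4M conv=noerror,sync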

Rgds,
--


Re: [gentoo-user] Re: Need help: Filesystem (ext4) corrupted!

2013-09-02 Thread Pandu Poluan
On Sep 3, 2013 10:51 AM, William Kenworthy bi...@iinet.net.au wrote:

 On 03/09/13 11:26, meino.cra...@gmx.de wrote:
  William Kenworthy bi...@iinet.net.au [13-09-03 05:08]:

--snip--

  Have you run out of inodes? - ext 4 has had very mixed success for me
on
  solid state.  Running out of inodes is a real problem for gentoo on
  smaller SD cards with standard settings.
 
  BillK
 
 
 
  Does this error message from fsck indicate that? I am really bad in
  guessing what fsck tries to cry at me ... ;)
 
 
  solfire:/rootfsck.ext4 -f -p /dev/sdb2
  rootfs: Inodes that were part of a corrupted orphan linked list
found.
 
  rootfs: UNEXPECTED INCONSISTENCY; RUN fsck MANUALLY.
  (i.e., without -a or -p options)
  [1]18644 exit 4 fsck.ext4 -f -p /dev/sdb2
 
 
  Is there any way to correct the settings from the default values to
  more advances ones, which respect the sdcard size of 16GB *without*
  blanking it...a correction on the fly so to say???
 
  And if not: Is there a way to backup the sdcard and playback the files
  after reformatting it by preserving all three time stamps of the
  files (atime is deactivated via fstab though) ?
 
  Best regards,
  mcc
 
 
 
 
 
 df -i - if you get 100% IUse% or near to it, that's your problem ... I have
 seen the error message you quoted as a result of running out of inodes
 corrupting the FS.

 No, your only way out is to copy (I use rsync) the files off, recreate
 the fs with max inodes (man mke2fs) and rsync the files back.  Once an
 ext* fs has been created with a certain number of inodes it's fixed until
 you re-format.

 I get it happening regularly on 4G cards when I forget and just emerge a
 couple of packages without cleaning up in between packages.  On 16G
 cards, it's compiling something like glibc or gcc that uses huge numbers
 of inodes at times.  On a single 32G card I have, the standard settings
 have been fine ... so far :)

 Billk



While you're considering reformatting the flash disk, consider also
whether ext3/4 is suitable at all.

When I first used Gentoo, I got bitten by inode exhaustion several times,
so I switched to an inode-less fs (reiserfs, to be precise).

I have no idea if reiserfs is suitable for a flash disk, though.
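
If you stay with ext4, the inode density is set at mkfs time via the
bytes-per-inode ratio. A sketch (device is a placeholder; -i 8192
roughly doubles the inode count versus the common 16 KiB default):

mkfs.ext4 -i 8192 /dev/sdb2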

Rgds,
--


Re: Integrated ZFS for Gentoo - WAS Re: [gentoo-user] Optional /usr merge in Gentoo

2013-08-31 Thread Pandu Poluan
On Sat, Aug 31, 2013 at 12:10 PM, Mark David Dumlao madum...@gmail.com wrote:
 On Sat, Aug 31, 2013 at 4:16 AM, Mick michaelkintz...@gmail.com wrote:
 On Friday 30 Aug 2013 15:44:35 Tanstaafl wrote:
 On 2013-08-30 10:34 AM, Alan McKinnon alan.mckin...@gmail.com wrote:
  On 30/08/2013 16:29, Tanstaafl wrote:
  Why would there be a problem if someone decided to create a 3rd party
  overlay *not* part of the official gentoo portage tree that contained
  *only* the zfs stuff, and when this overlay was installed combined with
  a zfs keyword for the kernel, portage would then pull in the required
  files, and automagically build a kernel with an up to date version of
  zfs properly and fully integrated?
 
  Would this not work, *and* have no problems with licensing?
 
  there is no problem with licensing in that case.
  The ebuild could even go in the portage tree, as Gentoo is not
  redistributing sources when it publishes an ebuild.

 Thanks Alan! Just the answer I wanted.

 Ok, so... how hard would this be then? What would the chances be that
 this could actually happen? I'll happily go open a bug for it if you
 think the work would be minimal...

 It seems to me that I can't be the only one who would like to see this
 happen?

 Nope! I will vote for you.  ;-)

 --
 Regards,
 Mick

 Sounds like an awful lot of trouble for a problem that's already solved by
 installing sys-kernel/module-rebuild and running module-rebuild rebuild
 after every kernel update, which is how nvidia, broadcom, and other
 kernel modules are dealt with painlessly anyway...


Well, if you follow Tanstaafl in the other thread, you'll see that he
wants ZFS to be integrated into the kernel, not existing as a kernel
module.


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: Integrated ZFS for Gentoo - WAS Re: [gentoo-user] Optional /usr merge in Gentoo

2013-08-31 Thread Pandu Poluan
On Sep 1, 2013 7:51 AM, Mark David Dumlao madum...@gmail.com wrote:

 On Sun, Sep 1, 2013 at 8:13 AM, Walter Dnes waltd...@waltdnes.org wrote:
  On Sat, Aug 31, 2013 at 02:19:56PM +0200, Joerg Schilling wrote
 
  So there seems to be no real need to create a static linux kernel
  with ZFS inside.
 
  See
http://www.gentoo.org/doc/en/handbook/handbook-amd64.xml?full=1#book_part1_chap7
 
  Now go to File Systems and select support for the filesystems you use.
  Don't compile the file system you use for the root filesystem as
  module, otherwise your Gentoo system will not be able to mount
  your partition.
 
You can get away with most stuff as modules; ***BUT NOT THE ROOT
  FILESYSTEM***.  Think about it for a minute.  Gentoo reads modules off
  the disk.  If the code for the root filesystem is a module, Gentoo would
  have to read the module off the disk to enable it to read the module off
  the disk... OOPS.  This is a classic chicken and egg situation.

 And this is why the initrd was actually invented.
 http://en.wikipedia.org/wiki/Initrd

 It's a means of loading kernel modules early, so that the root filesystem
 can be mounted even when its driver is built as a module.

Not everyone is willing to use an initr* thingy. It's another potential
breaking point.

I have no problem with /usr being 'merged' with /, in fact I have been
doing that for a couple of years now.

But I will keep myself a mile away from an initr* thingy.

Rgds,
--


Re: [gentoo-user] crond: time disparity detected (on hibernate and wake-up)

2013-08-31 Thread Pandu Poluan
On Aug 31, 2013 8:47 AM, Walter Dnes waltd...@waltdnes.org wrote:

   Whenever I hibernate or do a wake-up from hibernation, I always get
 messages about a crond time disparity detected.  It's usually 580 or 602
 minutes.  But the clock appears to be correct within a few seconds after
 wake-up.  I'm in Eastern time, running local time...

 [i660][waltdnes][~] cat /etc/timezone
 Canada/Eastern


Maybe the mobo battery is dying?

Rgds,
--


[gentoo-user] HA-Proxy or iptables?

2013-08-29 Thread Pandu Poluan
Hello list!

Here's my scenario:

Currently there is a server performing 2 functions; one runs on, let's
say, port 2000, and another one runs on port 3000.

Due to some necessary changes, especially the need to (1) provide more
resource for a function, and (2) delegate management of the functions
to different teams, we are going to split the server into two.

The problem is: Many users -- spread among 80+ branches throughout the
country -- access the server using IP Address instead of DNS name.

So, my plan was to leave port 2000's application on the original
server, implement port 3000's application on a new server, and have
all access to port 3000 of the original server to be redirected to
same port on the new server.

I can implement this using iptables SNAT & DNAT ... or I can use HA-Proxy.

Can anyone provide some benefit / drawback analysis on either solution?

Thank you very much!


-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] HA-Proxy or iptables?

2013-08-29 Thread Pandu Poluan
On Aug 29, 2013 7:46 PM, thegeezer thegee...@thegeezer.net wrote:

 On 08/29/2013 01:12 PM, Randy Barlow wrote:
  Honestly, I think the best solution is to switch the company to using
domain names to access these resources. This makes it much easier to
silently introduce things like load balancers later on if you ever need to
scale. It's also much easier to communicate to new users how to find this
resource. Once you migrate to IPv6 it becomes a very long address to tell
people as well.
 
  To answer your specific question, I would just do it with iptables if
you must continue accessing it by IP address. I will point out that the
service on the new IP address now has doubled its chances of going out of
service, because it depends on both machines running, even though the first
has nothing to do with it. Also, doing this with firewall rules isn't very
nice from a systems management perspective for the future, as it's not very
obvious what's going on with some server rewriting packets for another one.
If someone sees that in two years, are they going to know what to do? What
if they want to take server 1 down, and forget that it also disrupts 2?
Using DNS is much cleaner for these reasons.
 With iptables this could be tricky if everything is in the same LAN
 subnet; you will need to ensure you have both DNAT and SNAT, otherwise
 you will have:
 PC --- serverA:3000 ---DNAT--- serverB
 serverB ---replies--- PC
 PC ignores the packet: I wasn't talking to you, I was talking to serverA


I do have some experience with double NAT-ting, but thanks for the reminder
anyways :-)
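
For the archives, the double-NAT pair on serverA would look roughly like
this (addresses are placeholders; 10.0.0.1 = serverA, 10.0.0.2 = serverB):

iptables -t nat -A PREROUTING  -d 10.0.0.1 -p tcp --dport 3000 -j DNAT --to-destination 10.0.0.2:3000
iptables -t nat -A POSTROUTING -d 10.0.0.2 -p tcp --dport 3000 -j SNAT --to-source 10.0.0.1

The SNAT forces serverB's replies back through serverA, avoiding the
you're-not-who-I-was-talking-to problem described above.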

 Also bear in mind that from serverB's perspective, all connections on
 port 3000 will appear to come from serverA.  I know that a VT-based
 terminal server can set up users based on their originating IP, which
 would previously have been a good detector of which terminal they are
 connecting from.


Luckily, to the best of my knowledge, the apps do not make such
distinction, so I can get away with such sleight of hand...

 Rather than using iptables on serverA, you may like to consider EBtables
 or IPtables on a server that sits in front of both serverA and serverB.
 this would act as a bridge, and rewrite packets for serverA on port 3000
 to go to serverB on port 3000
 or
 it could act as a router for NAT (iptables) if you change the ip subnet
 of serverA and serverB, and make the NAT box have the original IP of
serverA
 this would allow connections by IP to be tracked


Interesting... I'll consider that. Although not strictly needed, tracking
by IP will certainly be helpful.

Thank you for the tip!


Re: [gentoo-user] HA-Proxy or iptables?

2013-08-29 Thread Pandu Poluan
On Aug 29, 2013 7:13 PM, Randy Barlow ra...@electronsweatshop.com wrote:

 Honestly, I think the best solution is to switch the company to using
domain names to access these resources. This makes it much easier to
silently introduce things like load balancers later on if you ever need to
scale. It's also much easier to communicate to new users how to find this
resource. Once you migrate to IPv6 it becomes a very long address to tell
people as well.


I agree, but considering that the split is Really Urgent™, I'll just have
to make do with redirection for the time being.

 To answer your specific question, I would just do it with iptables if you
must continue accessing it by IP address. I will point out that the service
on the new IP address now has doubled its chances of going out of service,
because it depends on both machines running, even though the first has
nothing to do with it. Also, doing this with firewall rules isn't very nice
from a systems management perspective for the future, as it's not very
obvious what's going on with some server rewriting packets for another one.
If someone sees that in two years, are they going to know what to do? What
if they want to take server 1 down, and forget that it also disrupts 2?
Using DNS is much cleaner for these reasons.

Again , I agree 100%.

Fortunately, nobody is allowed to bring down a server without my team's
blessing, so if they ever need to bring the server down, we will force them
to arrange a schedule with the other team.

Rgds,
--


Re: [gentoo-user] Optional /usr merge in Gentoo

2013-08-28 Thread Pandu Poluan
On Tue, Aug 27, 2013 at 8:03 PM, Alan McKinnon alan.mckin...@gmail.com wrote:

 On 27/08/2013 14:05, Tanstaafl wrote:

[-- snippy --]

  Thanks Alan, starting to get excited about playing with ZFS.
 
  How would you rate their docs and support community (for the free version)?

 Support is top-notch, on par with what you find around here if that
 helps ;-)

 Each major.minor version has a .pdf manual published, while the next
 version is in development, the docs get updated on a wiki and the final
 version is an export of that. There's a forum with knowledgeable users
 and the devs hang around just in case regular users can't help with a
 question.

 No mailing list though :-(
 And the forum does have a lot of noise from n00bs, but that's common
 with web forums. Like on Gentoo, you quickly learn to spot those posts
 and scan over them.


Actually, there *is* a mailing list. I happened upon it accidentally
several minutes ago.

Two of them in fact.

https://groups.google.com/a/zfsonlinux.org/forum/#!forum/zfs-discuss

... and if you want to partake in development of ZFS-on-Linux:

https://groups.google.com/a/zfsonlinux.org/forum/#!forum/zfs-devel


(I've just subscribed to the first list)


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] Re: The NVIDIA/Kernel fiasco -- is it safe to sync yet?

2013-08-27 Thread Pandu Poluan
On Tue, Aug 27, 2013 at 12:57 PM, Alan McKinnon alan.mckin...@gmail.com wrote:

 On 27/08/2013 04:06, »Q« wrote:
  On Mon, 26 Aug 2013 08:02:32 +0200
  Alan McKinnon alan.mckin...@gmail.com wrote:
 
  On 26/08/2013 03:52, »Q« wrote:
  I doubt your wiki page idea will work, it will be just accurate
  enough
  to look like it might work and just inaccurate enough to be
  useless. Which brings you back to the previous paragraph - try
  emerge nvidia-drivers and if it fails then don't use that kernel.
  I was unclear to the point of being misleading.  I'm sorry.
 
  The wiki idea is only for a page which tells which
  kernel/nvidia-drivers combinations the Gentoo nvidia-drivers
  maintainers support.  And by support, I mean they'll look into
  bugs and fix build problems if they're able to.  This is exactly
  the info I'm grepping out of ewarn messages in their ebuilds now.
 
 
  That list is the list of kernels that nVidia supports, which is easy
  to find.
 
  Where?  AIUI from reading various threads about this, sometimes that
  info can be found in nVidia's developer web forum, but I've never been
  able to find it there.  nVidia's READMEs give a minimum kernel version,
  but no max.


 So ask nVidia to clearly and unambiguously state in an easily found
 place what kernels *they* support.

 Look, all issues with building the driver shim are directly the
 responsibility of nVidia themselves, a result of *their* business
 decisions. The correct thing to do is to make it nVidia's problem and
 not force the community to jump through hoops trying to track down what
 does and does not work today.

 Or, you could do the heavy lifting yourself. You test all current
 drivers with all recent kernels and maintain a gentoo wiki page that
 lists the info you want.


Hmm... reading this thread makes me understand why Linus gave nVidia 'the bird'.


Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~



Re: [gentoo-user] SLES or gentoo ... ?

2013-08-27 Thread Pandu Poluan
On Tue, Aug 27, 2013 at 8:53 PM, Neil Bothwick n...@digimed.co.uk wrote:
 On Tue, 27 Aug 2013 08:34:38 -0400, Tanstaafl wrote:

 Where is the best docs for understanding and working with ZFS?

 There are plenty of links on the zfsforlinux site.


on Linux, not for Linux ... http://zfsonlinux.org/

:-)

That said, I just discovered the mbuffer trick [1] to speed up ZFS
snapshot shipping (my self-made term) from one server to another.

Never thought that I'd see the GbE saturated...

Of course the Network Admin gave me the evil eye :-P


[1] 
http://blogs.everycity.co.uk/alasdair/2010/07/using-mbuffer-to-speed-up-slow-zfs-send-zfs-receive/
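
The trick boils down to putting a big RAM buffer on both ends of the
pipe so zfs send/receive never stalls. Roughly (pool, snapshot, host
names and port are placeholders):

# receiving side: listen on TCP port 9090, buffer 1 GB
mbuffer -s 128k -m 1G -I 9090 | zfs receive tank/backup

# sending side: push the snapshot into the buffer
zfs send tank/data@snap | mbuffer -s 128k -m 1G -O desthost:9090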



Rgds,
-- 
FdS Pandu E Poluan
~ IT Optimizer ~

 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan



Re: [gentoo-user] Makeing /dev/rtc1 accessible as soon as possible - how?

2013-08-26 Thread Pandu Poluan
On Aug 26, 2013 8:41 AM, Mark David Dumlao madum...@gmail.com wrote:

 On Sun, Aug 25, 2013 at 12:54 PM,  meino.cra...@gmx.de wrote:
  Hi Mark, hi William,
 
  the script ds3231 in /etc/init.d is -- according to rc-update --
  set as folows:
 
 ds3231 | boot

 Long and short of it, here's the boot order:
 sysinit - boot - (single) - default

 rc(8) tells me that sysinit is for bringing up system specific stuff
 such as /dev, /proc, /sys. So it's appropriate for a special device
 file such as yours, with the caveat that you want it up AFTER any
 dependencies such as sysfs.

 Now how to do that is to make your script openrc compliant, so...

 
  There is no corresponding file in /etc/conf.d since the script
  onlu consist of two commands (see previous posting). There is no
  initramfs.

 Since openrc is running your script, it will source /etc/conf.d/same-name
 (a plain file of environment variables) if it exists. Or you can put them
 in the init script itself. Mind you, I don't know where to find
 documentation on how openrc implements this, unlike, say, some
 controversial init system on this list...


Just to add some info:

To the best of my knowledge, scripts in init.d will source the relevant
file in conf.d. So, whether or not an initscript requires a conf.d file
totally depends on the initscript in question.
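
A minimal sketch of the pairing (file contents are hypothetical):

# /etc/conf.d/ds3231
RTC_DEVICE="/dev/rtc1"

# /etc/init.d/ds3231 -- $RTC_DEVICE is already in scope here
start() {
    ebegin "Setting up ${RTC_DEVICE}"
    # ... the two commands from the earlier posting ...
    eend $?
}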

Rgds,
--


Re: [gentoo-user] Optional /usr merge in Gentoo

2013-08-26 Thread Pandu Poluan
On Aug 26, 2013 5:06 AM, Alan McKinnon alan.mckin...@gmail.com wrote:

 On 18/08/2013 21:38, Tanstaafl wrote:
  On 2013-08-18 5:16 AM, Alan McKinnon alan.mckin...@gmail.com wrote:
  While we're on the topic, what's the obsession with having different
  bits of the file hierarchy as different*mount points*? That harks back
  to the days when the only way to have a chunk of fs space be different
  was to have it as a separate physical thing and mount it. Nowadays we
  have something better - ZFS. To me this makes so much more sense. I
have
  a large amount of storage called a pool, and set size limits and
  characteristics for various directories without having to deal with
  fixed size volumes.
 
  Eh? *Who* has ZFS? Certainly not the linux kernel.
 

 FreeBSD

 You can get ZFS on Linux with relative ease, you just have to build it
 yourself. Distros feel they can't redistribute that code.



 The bit you quoted shouldn't be read to mean that we have ZFS, it works
 on Linux and everyone should activate it and use it and chuck ext* out
 the window.

 I meant that we've been chugging along since 1982 or so with ancient
 disk concepts that come mostly from MS_DOS and limited by that hardware
 of that day.

 And here we are in 2013 *still* fiddling with partition tables, fixed
 file systems, fixed mountpoints and we still bang our heads weekly
 because sda3 has proven to be too small, and it's a *huge* mission to
 change it. Yes, LVM has made this so much easier (kudos to Sistina
 for that) but I believe the entire approach is wrong.

 The ZFS approach is better - here's the storage, now do with it what I
 want but don't employ arbitrary fixed limits and structures to do it.


+1 on ZFS. It's honestly a truly *modern* filesystem.

Been using it as the storage back-end of my company's email server.

The zpool and zfs commands may take some time to get familiar with, but
the self-mounting, self-sharing ability of zfs (i.e., no need to muck with
fstab and exports files) is really sweet.

I really leveraged its ability to do what I call delta snapshot shipping
(i.e., send only the differences between two snapshots to another place).
It's almost like an asynchronous DRBD, but with the added peace of mind
that if the files become corrupted (due to a buggy app; ZFS itself will
almost never let corrupt data through), I can easily 'roll back' to a
point where the files were still uncorrupted.
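
Delta shipping is just an incremental send. A sketch (dataset, snapshot
and host names are placeholders):

zfs snapshot tank/mail@today
zfs send -i tank/mail@yesterday tank/mail@today | \
    ssh backuphost zfs receive backup/mail

Only the blocks that changed between the two snapshots cross the wire.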

Rgds,
--


Re: [gentoo-user] Optional /usr merge in Gentoo

2013-08-26 Thread Pandu Poluan
On Mon, Aug 26, 2013 at 4:56 PM, Neil Bothwick n...@digimed.co.uk wrote:

 On Mon, 26 Aug 2013 09:45:15 +0100, Mick wrote:

   emerge zfs works too :)
  
   I really like the way ZFS just lets you get on with things.
 
  Does anyone run it on a desktop/laptop as their day to day fs?

 Yes.

  Any
  drawbacks or gotchas?  Other than reliability, how does it perform
  compared say to ext4?

 I haven't benchmarked it. It feels as if it may be a little slower on my
 desktop with spinning disks, but that may be down to other factors, like
 impatience. It flies on my laptop's SSD.


Additional note:

*Of course* it will be slower than ext*, because during every read it
ensures that the block being read has a proper checksum.

Likewise on writes.

But that IMO is very worth it just for the additional peace-of-mind,
knowing you will never ever suffer silent corruption.
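
You can also make it verify every block in the pool on demand, e.g.:

zpool scrub tank     # walk the whole pool, checking all checksums
zpool status tank    # shows scrub progress and any errors found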


-- 
FdS Pandu E Poluan
* ~ IT Optimizer ~**
*
 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan


Re: [gentoo-user] SLES or gentoo ... ?

2013-08-26 Thread Pandu Poluan
On Aug 26, 2013 9:02 PM, Stefan G. Weichinger li...@xunil.at wrote:


 We now get into choosing hardware for that shiny new gentoo-server ...

 as mentioned this will be a KVM-host and we look at a HP Proliant DL385p
 ... 2 AMD Opteron 6344 ... 32 GB RAM, 8 disks (the customer has some
 bigger contract with hp, so we are a bit biased to hp here).

 For the server that runs the VMs right now we back then chose 2 smaller
 SAS-disks for the OS (speed, expensive) and installed SLES on a mirror
 built on those 2 disks. The other 8 (?) SATA-disks run in a RAID-6 ...

 Now I ask myself if I want to do that again:

 2 smaller disks, fast, RAID1 - OS/ Gentoo (SSD? I think, no)

 6 bigger disks, maybe 7200rpm, SATA - RAID6 - LVM - space for VMs
 (maybe straight LVs for the VM-disks ... performs well, nice backups)

 Or should I go for all 8 disks in a RAID6 ... ?

 I'd like to discuss that and hear your opinions and experiences.

 Thanks!



The 2 smallest disks in a RAID1 configuration for booting.

Then the 6 remaining disks as JBOD, used within a RAIDZ2 array.

Or, the 6 disks in JBOD, added as 3 mirrored ZFS vdevs.
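
In zpool terms, the two layouts would be roughly (device names are
placeholders):

zpool create tank raidz2 sdc sdd sde sdf sdg sdh                 # RAIDZ2: any 2 disks may fail
zpool create tank mirror sdc sdd mirror sde sdf mirror sdg sdh   # 3 mirrors: better IOPS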


Rgds,
--


Re: [gentoo-user] SLES or gentoo ... ?

2013-08-26 Thread Pandu Poluan
On Mon, Aug 26, 2013 at 11:11 PM, Stefan G. Weichinger li...@xunil.at wrote:

 Am 26.08.2013 17:51, schrieb Pandu Poluan:

  2 smallest disks in RAID1 configuration for booting.
 
  Then the 6 disks as JBOD and use it within a RAIDZ2 array.
 
  Or, 6 disks in JBOD, add them a 3 ZFS mirror vdev's.

 Interesting suggestion.

 I don't know if I dare ... I wonder how VMs would perform on
 ZFS-filesystems ... I should test that before actually setting that up.
 The system will run ~800km away from me so it should be rather
 bullet-proof ;-)

 I trust ZFS, sure ... but I don't have experience with it as a storage
 for KVM-virtualization.

 But I consider it, thanks!

 Stefan



Well... with a ZFS-backed storage, when you want to create a new VM, all
you have to do is `zfs clone` ;-)
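
That is (names are placeholders):

zfs snapshot tank/vm-template@gold
zfs clone tank/vm-template@gold tank/vm-new01   # instant, initially takes no extra space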

Rgds,
-- 
FdS Pandu E Poluan
* ~ IT Optimizer ~**
*
 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan


Re: [gentoo-user] SLES or gentoo ... ?

2013-08-26 Thread Pandu Poluan
On Mon, Aug 26, 2013 at 11:19 PM, Stefan G. Weichinger li...@xunil.at wrote:

 Am 26.08.2013 18:14, schrieb Pandu Poluan:

  Well... with a ZFS-backed storage, when you want to create a new VM, all
  you have to do is `zfs clone` ;-)

 Yes, I see those possibilities ;-)

 That system will run rather important stuff ... so I have to be rather
 careful. But the bit-rot-prevention/checksumming-features are a plus in
 this perspective.

 Do you run such a system? How big?


I have to admit that my virtualized servers are not running on top of
ZFS-backed storage... because the company already has a quite sizable EMC
VNX array, and it would be a shame not to use that behemoth... ;-)

At the moment, the ZFS-backed storage is used to store our email archive
system (using MailArchiva), and with more than 15'000 emails per day, it
has been performing flawlessly.

That said, when I was hunting around for information on ZFS before taking
the plunge, a *lot* of discussion revolves around using ZFS as the storage
back-end for virtualization, be it using KVM or VMware.


Rgds,
-- 
FdS Pandu E Poluan
* ~ IT Optimizer ~**
*
 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan


Re: [gentoo-user] Proxy server problem

2013-08-25 Thread Pandu Poluan
On Aug 25, 2013 11:38 PM, Grant emailgr...@gmail.com wrote:

 I set up squid on a remote system so I can browse the internet from
 that IP address.  It works but it stalls frequently.  I had similar
 results with ziproxy.  I went over this with the squid list but we
 got nowhere as it seems to be some kind of a system or network
 problem.

 http://squid-web-proxy-cache.1019090.n4.nabble.com/squid-3-3-5-hangs-the-entire-system-td4660893.html

 Can anyone here help me figure out what is wrong?  I'm not sure where
 to start.
 
  Is this stalling problem happening when you just browse the internet,
visiting
  websites, or do you get it when you are downloading large files such as
  videos, or music?  If it is the former, then I am not sure what causes
it.  If
  it is the latter, then this may be relevant to http timeout settings.

 Actually it happens when I'm just browsing the internet.  How can a
 problem of this nature be debugged?

 - Grant


After reading your description on the squid mailing list, I'm suspicious
that you might've run out of TCP buffers.

Please post the results of:

sysctl -A | egrep '(mem)|(tcp)'

Rgds,
--


Re: [gentoo-user] How to determine 'Least Common Denominator' between Xen(Server) Hosts

2013-08-17 Thread Pandu Poluan
On Aug 16, 2013 12:26 AM, Kerin Millar kerfra...@fastmail.co.uk wrote:

 On 14/08/2013 13:15, Bruce Hill wrote:

 On Wed, Aug 14, 2013 at 12:18:41PM +0700, Pandu Poluan wrote:

 Hello list!

 My company has 2 HP DL585 G5 servers and 5 Dell R... something servers.
 All using AMD processors. They currently are acting as XenServer hosts.

 How do I determine the 'least common denominator' for Gentoo VMs
 (running as XenServer guests), especially for gcc flags?

 I know that the (theoretical) best performance is to use -march=native,
 but since the processors of the HP servers are not exactly the same as
 the Dell's, I'm concerned that compiling with -march=native will render
 the VMs unable to migrate between the different hosts.


 A couple of points:

 * The effect of setting -march=native depends on the characteristics of
   the CPU (be it virtual or otherwise)
 * The characteristics of the vCPU are defined by qemu's -cpu parameter
 * qemu can emulate features not implemented by the host CPU (at a cost)

 One way to go about it is to start qemu with a -cpu model that exposes
features that all of your host CPUs have in common (or a subset thereof).
In that case, -march=native is fine because all of the features that it
detects as being available will be supported in hardware on the host side.

 Another way is to expose the host CPU fully with -cpu host and to
define your guest CFLAGS according to the most optimal subset. If you are
looking for a 'perfect' configuration then this would be the most
effective method, if applied correctly.


AFAIK, that's how XenServer configures its hosts. There's a CPU Masking
option when a heterogeneous pool of hosts is created, but I have the
impression that CPU Masking is only employed by the 'xe toolstack'
(CloudStack) layer to determine to which host a VM can be migrated.

 Irrespective of the method, by examining /proc/cpuinfo and using the diff
technique mentioned by Bruce, you should be able to determine the optimal
configuration.

 Finally, in cases where the host CPUs differ significantly - in that
native would imply a different -march value - you may choose to augment
your CFLAGS with -mtune=generic to even out performance across the board. I
don't think this would apply to you though.


Certainly doesn't apply to me. Based on tech spec I have on the servers,
the processors are very similar... I just want to be doubly sure :-)

Thanks for the explanation (including the difference between 'march' and
'mtune' in your other email)!

Rgds,
--


Re: [gentoo-user] How to determine 'Least Common Denominator' between Xen(Server) Hosts

2013-08-15 Thread Pandu Poluan
Thanks, Wang Xuerui and Bruce!

That's exactly the help I needed.

I'm going to do some testing Real Soon.


Rgds,
--



On Wed, Aug 14, 2013 at 1:28 PM, Wang Xuerui idontknw.w...@gmail.com wrote:

 2013/8/14 Helmut Jarausch jarau...@igpm.rwth-aachen.de:
  Why not compute it yourself?
 
  Do   cat /proc/cpuinfo on all machines and compute the intersection of
 all
  flags
  Then enter something like the following into /etc/portage/make.conf
 
  CFLAGS="-O3 -pipe -msse -msse2 -msse3 -msse4a -m3dnow"


 Or even better, directly intersect the GCC flags which can be obtained
 in this way:

 echo | gcc -O2 -pipe -march=native -E -v - 2>&1 | grep cc1

 Also, according to the Gentoo Handbook going -O3 for the entire system
 may cause instability and other problems. Has the situation changed
 over the years?




-- 
FdS Pandu E Poluan
* ~ IT Optimizer ~**
*
 • LOPSA Member #15248
 • Blog : http://pepoluan.tumblr.com
 • Linked-In : http://id.linkedin.com/in/pepoluan


[gentoo-user] How to determine 'Least Common Denominator' between Xen(Server) Hosts

2013-08-13 Thread Pandu Poluan
Hello list!

My company has 2 HP DL585 G5 servers and 5 Dell R... something servers. All
using AMD processors. They currently are acting as XenServer hosts.

How do I determine the 'least common denominator' for Gentoo VMs (running
as XenServer guests), especially for gcc flags?

I know that the (theoretical) best performance is to use -march=native ,
but since the processors of the HP servers are not exactly the same as the
Dell's, I'm concerned that compiling with -march=native will render the VMs
unable to migrate between the different hosts.

Note: Yes I know the HP servers are much older than the Dell ones, but if I
go -march=native then perform an emerge when the guest is on the Dell host,
the guest VM might not be able to migrate to the older HPs.

Rgds,
--


Re: [gentoo-user] Re: Any .config for vbox gentoo guest

2013-08-04 Thread Pandu Poluan
On Aug 4, 2013 5:48 PM, Alan McKinnon alan.mckin...@gmail.com wrote:

 On 02/08/2013 23:08, Kerin Millar wrote:
  Regarding VirtualBox, it does support a virtio-net type ethernet adapter
  so you would certainly benefit from enabling CONFIG_VIRTIO_NET in a
guest.
 
  I'm not entirely certain as to where VirtualBox stands with regard to
  PVOPS support [1] but it would probably also help to enable
  CONFIG_PARAVIRT_GUEST (even though there are no directly applicable
  sub-options).


 How well or otherwise does it perform?

 I have a bunch of minimal VMs in a dev environment using Intel Pro1000,
 and have considered changing to virtio. But it's a lot of work to do[1]
 so someone else's opinion first would be nice.

 [1] They are Gentoo VMs but don't have sources installed and can't
 compile a kernel with the mere 256G RAM I give them. So the entire
 kernel+modules has to be built somewhere else and scp'ed to the VM.
 hence lot of work

 --
 Alan McKinnon
 alan.mckin...@gmail.com



Good Lord! Your VM has 256 GB of RAM??

That's even larger than some virtualization hosts we have in the company...

Rgds,
--


[gentoo-user] Complete list of USE flags?

2013-08-04 Thread Pandu Poluan
Hello guys,

I'm a bit ashamed to ask this question, as it betrays how long it has
been since I actually installed a 'lightweight' Gentoo system...

But I digress. On to my question:

Does anyone know of an exhaustive list of USE flags?

And a related subquestion:

Is the USE flags list at znurt.org up-to-date?

The reason I'm asking is that I'm planning on building *very*
lightweight systems with as small an attack surface as possible.

Rgds,
--


Re: [gentoo-user] systemd - are we forced to switch?

2013-07-22 Thread Pandu Poluan
On Jul 22, 2013 6:34 PM, Samuli Suominen ssuomi...@gentoo.org wrote:

 On 22/07/13 10:03, Helmut Jarausch wrote:

 Hi,

 How did you resolve this conflict?

 Many thanks for a hint,
 Helmut


 As a maintainer of sys-auth/consolekit, and xfce-base/xfce4-meta I can
assure you sys-auth/consolekit is not about to be removed and the support
for systemd-logind will be appended to XFCE instead of replaced like in
GNOME.

 I'm trying to say, either migrate to systemd or pick another desktop with
saner future plans like XFCE.

 - Samuli


+1 for XFCE.

It's more similar to Gnome2 than Gnome3 itself :-)

Rgds,
--


Re: SSDs, VM SANs RAID - WAS Re: [gentoo-user] SSD partitioning and migration

2013-07-20 Thread Pandu Poluan
On Jul 20, 2013 9:27 PM, Tanstaafl tansta...@libertytrek.org wrote:

 On 2013-07-19 3:02 PM, Paul Hartman paul.hartman+gen...@gmail.com wrote:

 I think you are. Unless you are moving massive terabytes of data
 across your drive on a constant basis I would not worry about regular
 everyday write activity being a problem.


 I have a question regarding the use of SSDs in a VM SAN...

 We are considering buying a lower-end SAN (two actually, one for each of
our locations), with lots of 2.5 bays, and using SSDs.

 The two questions that come to mind are:

 Is this a good use of SSDs? I honestly don't know if the running VMs
would benefit from the faster IO or not (I *think* the answer is a
resounding yes)?


Yes, the I/O would be faster, although how significant the gain is depends
totally on your workload pattern.

The bottleneck would be the LAN, though. SATA III peaks at 6 Gbps (~600
MB/s) per drive, so a shelf full of SSDs can easily outrun the network.
You'll need active/active multipathing and/or bonded interfaces to cater
for that firehose.

 Next is RAID...

 I've avoided RAID5 (and RAID6) like the plague ever since I almost got
bit really badly by a multiple drive failure... luckily, the RAID5 had just
finished rebuilding successfully after the first drive failed, before the
second drive failed. I can't tell you how many years I aged that day while
it was rebuilding after replacing the second failed drive.

 Ever since, I've always used RAID10.


Ahh, the Cadillac of RAID arrays :-)

 So... with SSDs, I think another advantage would be much faster rebuilds
after a failed drive? So I could maybe start using RAID6 (would survive two
simultaneous disk failures), and not lose so much available storage (50%
with RAID10)?


If you're using ZFS with spinning disks as its vdev 'elements', resilvering
(rebuilding the RAID array) would be somewhat faster because ZFS knows what
needs to be resilvered (i.e., used blocks) and skips over parts that don't
need to be resilvered (i.e., unused blocks).

 Last... while researching this, I ran across a very interesting article
that I'd appreciate hearing opinions on.

 The Benefits of a Flash Only, SAN-less Virtual Architecture:


http://www.storage-switzerland.com/Articles/Entries/2012/9/20_The_Benefits_of_a_Flash_Only,_SAN-less_Virtual_Architecture.html

 or

 http://tinyurl.com/khwuspo

 Anyway, I look forward to hearing thoughts on this...


Interesting...

Another alternative for performance is to buy a bunch of spinning disks
(let's say, 12 'enterprise'-grade disks), join them into a ZFS pool of 5
mirrored vdevs (that is, RAID10 a la ZFS) + 2 spares, then use 4 SSDs to
hold the ZFS cache and intent log.

The capital expenditure for the capacity gained should be lower, yet with
very acceptable performance.
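
A sketch of that layout (device names are placeholders; the intent log
is mirrored, the read cache need not be):

zpool create tank \
    mirror sda sdb mirror sdc sdd mirror sde sdf \
    mirror sdg sdh mirror sdi sdj \
    spare sdk sdl \
    log mirror ssd0 ssd1 \
    cache ssd2 ssd3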

Rgds,
--


Re: [gentoo-user] Linux viruses

2013-07-07 Thread Pandu Poluan
On Jul 8, 2013 1:05 AM, Bruce Hill da...@happypenguincomputers.com
wrote:

 On Sun, Jul 07, 2013 at 12:07:51PM -0400, Tanstaafl wrote:
  On 2013-07-06 8:12 AM, Bruce Hill da...@happypenguincomputers.com
wrote:
   NB: I only sell the best A/V software on the market, which hasn't
   missed a virus in the  wild since it's inception.
 
  Not to start a flamewar on which is the best AV, but I'm curious which
  one this is?

 http://www.eset.com/us/home/whyeset/compare/

Oh. My. Goodness! ESET! I *love* that piece of gold :-)

In fact, I'm still basking in the glow of having successfully convinced
the management to replace the previous p.o.s. that goes by the name of
SEP with this wonderful antivirus.

The first week, ESET unearthed more than 1'000 threats (throughout the
company) that SEP had turned a blind eye to. It's really a mystery how SEP
ever got crowned with any Good attributes.

Granted, the Business version has more options than one can shake a
stick at, but for control-happy BOFHs, ESET is a godsend, a breath of fresh
air compared to the CPU-guzzling ineffective p.o.s. called SEP.

(sorry for the tangential offtopicness, I'm just so very glad to see a
fellow ESET-believer ;-) ).

Rgds,
--


Re: [gentoo-user] Lightweight Simple Proxy that supports upstream authentication

2013-05-20 Thread Pandu Poluan
On May 20, 2013 12:04 PM, staticsafe m...@staticsafe.ca wrote:

 On Mon, May 20, 2013 at 11:31:31AM +0700, Pandu Poluan wrote:
  Hello,
 
  I'm looking for a simple HTTP+FTP proxy that supports upstream
  authentication.
 
  The reason is that we (that is, my employer) have a server that requires
  Internet access for its setup, but for some reason* my employer does not
  want to give the contractors a login for the corporate proxy.
 
  I'm planning on setting up a simple proxy to authenticate against the
  corporate proxy using one of my credentials, and have the contractor use
  this simple proxy instead of the corporate one.
 
  I think Squid can do that... but is there a simpler solution? I truly
don't
  need caching, inter-proxy coordination, or other exotic stuff. Just a
way
  to allow other people to authenticate against the corporate proxy using
my
  credentials, but without giving my credentials away.
 
  (Of course the simple proxy will be installed on a totally separate
system,
  one under my full control and nobody else's)
 
  Rgds,
  --

 Polipo perhaps?

 http://www.pps.univ-paris-diderot.fr/~jch/software/polipo/
 --
 staticsafe

Ahh, yes! I once used polipo... damn how could I possibly forget that
*bangs head against wall.

Thanks for reminding me :-)
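
For the archives, a minimal sketch of the relevant polipo settings
(host, port and credentials are placeholders):

# /etc/polipo/config
proxyPort = 8123
parentProxy = "corporate-proxy.example.com:8080"
parentAuthCredentials = "myuser:mypassword"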

Rgds,
--


[gentoo-user] Lightweight Simple Proxy that supports upstream authentication

2013-05-19 Thread Pandu Poluan
Hello,

I'm looking for a simple HTTP+FTP proxy that supports upstream
authentication.

The reason is that we (that is, my employer) have a server that requires
Internet access for its setup, but for some reason* my employer does not
want to give the contractors a login for the corporate proxy.

I'm planning on setting up a simple proxy to authenticate against the
corporate proxy using one of my credentials, and have the contractor use
this simple proxy instead of the corporate one.

I think Squid can do that... but is there a simpler solution? I truly don't
need caching, inter-proxy coordination, or other exotic stuff. Just a way
to allow other people to authenticate against the corporate proxy using my
credentials, but without giving my credentials away.

(Of course the simple proxy will be installed on a totally separate system,
one under my full control and nobody else's)

Rgds,
--


Re: [gentoo-user] Fine Tuning NTP Server

2013-05-10 Thread Pandu Poluan
On May 10, 2013 5:23 PM, Andrea Conti a...@alyf.net wrote:

 Hello,

  server  tick.nrc.ca minpoll 64 maxpoll 1024 iburst prefer

 Ouch! minpoll and maxpoll should be specified as the log2 of the actual
 value, i.e. 6 and 10. Those are the defaults anyway.

  disable auth
  broadcastclient
  server ntp.server.com prefer

 This looks fine to me; although configuring a broadcast association when
 your clients also have a unicast association to the same server seems a
 bit pointless, this should not cause any harm.

 I think you should first try to fix your server config and see if
 getting a proper sync on the server also solves the problem with the
 clients.

  As for /etc/conf.d/ntpd, we have set nothing. To be honest I did not
  even know the file
  existed till you mentioned it:
 
  NTPD_OPTS="-u ntp:ntp"

 That is where you put the commandline options you want ntpd to be
 started with.

  I would have liked to be better prepared for this but the gentoo wiki
  page has been down for a few weeks now. We are not looking for
  microsecond synchronization however, down to the second would be nice!

 I doubt you can consistently achieve microsecond-level synchronization
 with NTP ;)

 The official documentation of the ntp suite [1] is a good source of
 information; the man pages of ntpd and ntp.conf are also quite
 extensive, albeit a bit terse.

 andrea

 [1] http://www.eecis.udel.edu/~mills/ntp/html/index.html




Many thanks Andrea!

Although I'm not the original poster, within the next couple of months
my team and I will have to implement something similar. Your reply is a
good reference for us!
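
And for our notes: with the log2 convention, the corrected server line
from the quoted config would presumably read

server tick.nrc.ca minpoll 6 maxpoll 10 iburst prefer

(or simply drop minpoll/maxpoll, since 6 and 10 are the defaults anyway).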

Again, thank you!

Rgds,
--


Re: [gentoo-user] Using date/time variables in cronjobs?

2013-05-05 Thread Pandu Poluan
On May 6, 2013 4:57 AM, Tanstaafl tansta...@libertytrek.org wrote:

 On 2013-05-05 5:25 PM, Neil Bothwick n...@digimed.co.uk wrote:

 On Sun, 5 May 2013 16:06:45 -0400, Todd Goodman wrote:

 mkdir -p $BACKUP_DIR/$PGyy/$PGmm/$PGdd
 /usr/bin/pg_dumpall -U $PGUSER -o | \
 gzip > $BACKUP_DIR/$PGyy/$PGmm/$PGdd/pg_all-$PGtt.gz

 You could have it check first and only do the mkdir if the directory
 didn't already exist:

 [[ -d $BACKUP_DIR/$PGyy/$PGmm/$PGdd ]] || \
  mkdir -p $BACKUP_DIR/$PGyy/$PGmm/$PGdd


 You could, but it is redundant as mkdir already does that test when
 invoked with -p.


 Many thanks for the short noob bash scripting tutorial guys. Obviously
I'm very new to it.

 I did confirm that there was no need to test (thanks Neil, this will come
in handy I'm sure), but I also decided to uncomplicate it and just dump all
fo the backups in a single directory. I realized this isn't like my email
backups where I will be keeping years of them. I'll probably only keep 30
or so.

 So, my final script looks like this:


 #!/bin/bash
 BACKUP_DIR=/home/user/mypg_backups
 PGUSER=superuser
 PGyy=`date '+%Y'`
 PGmm=`date '+%m'`
 PGdd=`date '+%d_'`
 PGtt=`date '+%H:%M'`
 /usr/bin/pg_dumpall -U $PGUSER -o -f
$BACKUP_DIR/mypg-$PGyy-$PGmm-$PGdd$PGtt.gz

 For some reason, if I put the underscore in the filename itself, ie:

 $BACKUP_DIR/mypg-$PGyy-$PGmm-$PGdd_$PGtt.gz

 It omitted the $PGdd variable entirely.

 I had to add the underscore into the variable to get the output like I
wanted. Weird...

 Anyway, this is working perfectly, thanks guys.


Totally unweird.

In bash, underscores can be part of a variable name.

$PGdd_$PGtt involves two variables: $PGdd_ (note the trailing underscore)
and $PGtt

You should write it like this:

${PGdd}_$PGtt

The { } syntax is bash's way to indicate what exactly constitutes a
variable name. IOW, the above construct has three parts: the variable
$PGdd, an underscore, and the variable $PGtt

Whenever you want to run a variable name into other characters (i.e.,
combine them without intervening whitespace), you should always use braces
{ } around the variable name.

That said, since you're no longer creating a directory structure, why don't
you just build the whole filename with a single 'date' command? E.g.:

date +'mypg-%Y-%m-%d_%H:%M'
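
Which condenses the whole script to something like this (paths and user
are placeholders; note the pipe through gzip -- pg_dumpall's -f writes
plain SQL, so a .gz filename alone won't compress anything):

#!/bin/bash
BACKUP_DIR=/home/user/mypg_backups
PGUSER=superuser
STAMP=$(date +'%Y-%m-%d_%H:%M')
/usr/bin/pg_dumpall -U "$PGUSER" -o | gzip > "$BACKUP_DIR/mypg-$STAMP.gz"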

Rgds,
--


Re: [gentoo-user] {OT} laptops for a developing country (Vanuatu)

2013-04-27 Thread Pandu Poluan
On Apr 27, 2013 4:02 PM, Neil Bothwick n...@digimed.co.uk wrote:

 On Sat, 27 Apr 2013 12:05:06 +0530, Nilesh Govindrajan wrote:

   I think the problem there is a Chromebook needs to be online in order
   to do much of anything, and the connection needs to be fast in order
   to make them very functional.  Plus most people are paying by the MB
   in Vanuatu and a Chromebook must use a fair amount of data even on a
   fast connection.

  Well, any Chromebook can run a normal Linux distro. The chromebook team
  has put up a chroot helper on their github.

 But they are designed to be used with cloud services, and as such have
 very little storage.

 Have you considered the used market, especially companies replacing
 hardware at regular intervals. You may even get them fro free as a
 charitable donation, giving the company a tax write-off.


This.

Remember that 1- or 2-year-old laptops are more than powerful enough for
nearly everything except hi-def gaming.

OTOH, sometimes business laptops sacrifice battery life for processing
power. You will want to select the less power-hungry ones.

Rgds,
--


Re: [gentoo-user] mkfs.reiserfs hangs system?

2013-04-26 Thread Pandu Poluan
On Apr 26, 2013 10:31 AM, Pandu Poluan pa...@poluan.info wrote:


 On Apr 26, 2013 9:46 AM, Mark David Dumlao madum...@gmail.com wrote:
 
  On Fri, Apr 26, 2013 at 12:08 AM, Pandu Poluan pa...@poluan.info
wrote:
   Can't get to see dmesg, the system locked up tight.
  
   I can create an ext4 fs on a different partition, and since the
'disk' is
   actually a RAID array, if the array is going south, I should see the
same
   problem with ext4, right?
 
  I am guessing that mkreiserfs happens to touch parts of the disk that
  mke2fs doesn't, and that the system hangs because the disk becomes
  unresponsive. I will predict that mkntfs, which by default zeroes out
  the partition, will fail similarly?
  --
  This email is:[ ] actionable   [ ] fyi[x] social
  Response needed:  [ ] yes  [x] up to you  [ ] no
  Time-sensitive:   [ ] immediate[ ] soon   [x] none
 

 Okay, everybody, thanks for all the input!

 Since this is a server in my office, I couldn't test until I arrive in my
office.

 I'm now (just) arrived in my office, and I will try the following:

 1. Create Reiserfs on a different partition, and

 2. Create a different fs on the problematic partition.

 I'll report back with what happened.

 Rgds,
 --

A follow up:

A partition that had no problems at all with ext4 earlier...

... again got stuck during mkfs.reiserfs. The journal creation progress
reached 100%, then... nothing.

System stuck totally. After about 15 minutes, ssh session died. Console
session totally unresponsive.

I will retry using mkntfs.

Rgds,
--


Re: [gentoo-user] can't mount ext4 fs as est3 or ext3

2013-04-26 Thread Pandu Poluan
On Apr 26, 2013 3:09 PM, Andrea Conti a...@alyf.net wrote:

 Hi,

  EXT3-fs (sda5): error: couldn't mount because of unsupported optional
 features (240)

  /dev/sda5 /   ext4noatime,discard  0 1

 When first mounting the root filesystem the kernel has no access to
 /etc/fstab and therefore by default tries mounting it with all available
 FS drivers until one succeeds. ext3 (or ext4 in ext3 mode) is tried
 before ext4 and you get that error when it fails because the filesystem
 is using ext4-only features such as extents.

 You can avoid that by adding rootfstype=ext4 to the kernel command line.


Cool! I didn't know that before...

For a long time I just ignored the error messages, although yes they are
annoying ;-)
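
For grub-legacy, that's simply an extra parameter on the kernel line,
e.g. (device taken from the quoted fstab; kernel path is a placeholder):

kernel /boot/vmlinuz root=/dev/sda5 rootfstype=ext4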

Rgds,
--


Re: PVSCSI vs LSI Logic Parallel/SAS - WAS: Re: [gentoo-user] Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-25 Thread Pandu Poluan
On Apr 25, 2013 5:54 PM, Tanstaafl tansta...@libertytrek.org wrote:

 On 2013-04-24 10:23 PM, Pandu Poluan pa...@poluan.info wrote:

 My Gentoo VMs in the cloud (using VMware's vCloud) use PV-SCSI. It's
 stable... but kind of sensitive: every time the cloud provider does
 something with their storage, my VMs become read-only.


 Ouch... so, how do you fix it?


For the read-only problem, a simple reboot suffices.

But I made sure to fire off an angry email to Customer Support, telling
them in no uncertain terms that the next time they want to do something to
their storage, they must contact me first.

So, when the time comes for them to do some storage management, they tell
me exactly when. I schedule the VMs to shut down at the agreed time, and
turn them back on as soon as they text me that their maintenance has
finished. Only one VM stays on: the 'gatewall' VM. And every single time,
the gatewall VM has detected that something was being done at the storage
level and switched to read-only mode.

I consider myself lucky that when the 'problem' manifested itself, the VMs
are not yet in Production. And no corruption happened.

(PS: Strangely enough, the RO switch happens only on Gentoo VMs; the
FreeBSD VMs and Debian VMs are unaffected. Maybe because I've pared down
each and every Gentoo VM to its bare minimum, so they are much more
responsive.)

Rgds,
--


[gentoo-user] mkfs.reiserfs hangs system?

2013-04-25 Thread Pandu Poluan
Just wondering if any of you guys experienced this lately:

System hangs when creating a brand-new ReiserFS on a new partition.

I've tried using the latest gentoo minimal CD, or the latest
SystemRescueCD, both exhibited the same.

I'm on an HP DL585 G7 box, by the way, so it's using an AMD CPU.

I'd appreciate any suggestions. My Google-fu all expose only old threads.

Rgds,
--


Re: [gentoo-user] mkfs.reiserfs hangs system?

2013-04-25 Thread Pandu Poluan
On Apr 25, 2013 10:37 PM, Michael Hampicke m...@hadt.biz wrote:

 Am 25.04.2013 17:25, schrieb Pandu Poluan:

 Just wondering if any of you guys experienced this lately:

 System hangs when creating a brand-new ReiserFS on a new partition.

  I've tried using the latest Gentoo minimal CD and the latest
  SystemRescueCD; both exhibited the same behavior.

  I'm on an HP DL585 G7 box, by the way, so it's using an AMD CPU.

  I'd appreciate any suggestions. My Google-fu turns up only old threads.


  Are there any errors/warnings in dmesg or the logs?

 Maybe the disk is toast? Can you create other file systems?


I can't get to dmesg; the system locked up tight.

I can create an ext4 fs on a different partition, and since the 'disk' is
actually a RAID array, if the array is going south, I should see the same
problem with ext4, right?

Rgds,
--


Re: [gentoo-user] mkfs.reiserfs hangs system?

2013-04-25 Thread Pandu Poluan
On Apr 26, 2013 9:46 AM, Mark David Dumlao madum...@gmail.com wrote:

 On Fri, Apr 26, 2013 at 12:08 AM, Pandu Poluan pa...@poluan.info wrote:
  Can't get to see dmesg, the system locked up tight.
 
  I can create an ext4 fs on a different partition, and since the 'disk'
is
  actually a RAID array, if the array is going south, I should see the
same
  problem with ext4, right?

 I am guessing that mkreiserfs happens to touch parts of the disk that
 mke2fs doesn't, and that the system hangs because the disk becomes
 unresponsive. I will predict that mkntfs, which by default zeroes out
 the partition, will fail similarly?
 --
 This email is:[ ] actionable   [ ] fyi[x] social
 Response needed:  [ ] yes  [x] up to you  [ ] no
 Time-sensitive:   [ ] immediate[ ] soon   [x] none


Okay, everybody, thanks for all the input!

Since this is a server in my office, I couldn't test until I arrived at my
office.

I've now (just) arrived at my office, and I will try the following:

1. Create Reiserfs on a different partition, and

2. Create a different fs on the problematic partition.

I'll report back with what happened.

Rgds,
--


Re: PVSCSI vs LSI Logic Parallel/SAS - WAS: Re: [gentoo-user] Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-24 Thread Pandu Poluan
On Apr 24, 2013 2:29 AM, Tanstaafl tansta...@libertytrek.org wrote:

 On 2013-04-22 8:56 AM, Andre Lucas Falco alfa...@gmail.com wrote:

 2013/4/21 Tanstaafl tansta...@libertytrek.org wrote:

  Windows VMs get an 'LSI Logic SAS', and my gentoo VM gets an
  'LSI Logic Parallel' controller.


  Did you test using pvscsi? It improves performance with less cost in
  CPU usage.


 No, I didn't...

 It appears there is pvscsi support in the mainline linux kernel, but is
it rock-solid? Anyone else here running gentoo linux with this driver for
their primary/boot disk controller?

 Also, for my windows server 2008r2 vms, I used the default, which was the
LSI SAS... I did search and found the knowledgebase article describing how
to change them, but is the gain really worth the trouble (and more
importantly, the risk)?


My Gentoo VMs in the cloud (using VMware's vCloud) use PV-SCSI. It's
stable... but kind of sensitive: every time the cloud provider does
something with their storage, my VMs become read-only.

Other than that, performance is good, no fs corruption, etc.

Rgds,
--


Re: [gentoo-user] Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Pandu Poluan
On Apr 21, 2013 4:51 PM, J. Roeleveld jo...@antarean.org wrote:

 On Sat, April 20, 2013 18:22, Pandu Poluan wrote:
  On Apr 20, 2013 10:01 PM, Tanstaafl tansta...@libertytrek.org wrote:
 
  Thanks for the responses so far...
 
  Another question - are there any caveats as to which filesystem to use
  for a mail server, for virtualized systems? Ir do the same
  issues/questions
  apply (ie, does the fact that it is virtualized not change anything)?
 
  If there are none, I'm curious what others prefer.
 
  I've been using reiserfs on my old mail server since it was first set
up
  (over 8 years ago). I have had no issues with it whatsoever, and even
had
  one scare with a bad UPS causing the system to experience an unclean
  shutdown - but it came back up, auto fsck'd, and there was no 'apparent'
  data loss (this was a very long time ago, so if there had been any
serious
  problems, I'd have known about it long ago).
 
  I've been considering using XFS, but have never used it before.
 
  So, anyway, opinions are welcome...
 
  Thanks again
 
  Charles
 
 
  Reiterating what others have said, in a virtualized environment, it's
how
  you build the underlying storage that will have the greatest effect on
  performance.
 
  Just an illustration: in my current employment, we have a very heavily
  used
  database (SQL Server). To ensure good performance, I dedicated a RAID
  array
  of 8 drives (15k RPM each), ensured that the space allocation was 'thick'
  not
  'thin', and dedicated the whole RAID array to just that one VM.
Performance
  went through the roof with that one... especially since it was
originally
  a
  physical server running on top of 4 x 7200 RPM drives ;-)
 
  If you have the budget, you really should invest in a SAN Storage
solution
  that can provide tiered storage, in which frequently used blocks will
be
  'cached' in SSD, while less frequently used blocks are migrated first to
  slower SAS drives, and later on (if 'cold') to even slower SATA drives.

 4-tier sounds nicer: 1 TB in high speed RAM for the high-speed layer, with
 dedicated UPS to ensure this is backed up to disk on shutdown.


Indeed! But 1 TB is kind of overkill, if you ask me... :-D

VMware and XenServer can 'talk' with some Storage controllers, where they
conspire in the background to provide 'victim cache' on the virtualization
host. Not sure about Hyper-V.

I myself have had good experience relying on EMC VNX's internal 8 GB cache;
apparently the workload is not high enough to stress the system.

Rgds,
--


Re: [gentoo-user] LVM on VM or not? - WAS Re: Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-21 Thread Pandu Poluan
On Apr 22, 2013 2:05 AM, Tanstaafl tansta...@libertytrek.org wrote:

 On 2013-04-21 12:38 PM, Randy Barlow ra...@electronsweatshop.com wrote:

 I should mention one specific advantage to using LVM over file-based
 images: I believe you will find that LVM performs better. This is due to
 avoiding the duplicated filesystem overhead that would occur in the
 file-based image approach. If the guest wants to fsync(), for example,
 both filesystems need to be involved (the guest's, and the host's). With
 LVM, you still have the host processing the LVM bits of that process,


 ???

 This doesn't make sense to me.

 Unless you're talking about using LVM on the HOST.

 I'm not. I didn't specify this in this particular post, but I'm using
vmWare ESXi, and installing a gentoo VM to run on it.

 So, I'm asking about using the LVM2 installation manual from Gentoo and
using LVM2 for just my gentoo VM...

 So, in this case, is it still recommended/fully supported/safe?

 Thanks


Honestly, I don't see how LVM can interact with VMware's VMDK... unless one
uses VMware's thin provisioning on top of SAN storage thin provisioning, in
which case all hell will break loose once the actual disk size is reached...

Stick with VMware Thin Provisioning XOR SAN Storage Thin Provisioning.
Never both.

One thing you have to think about is whether to implement LVM
partition-less, or LVM on partitions.
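
A partition-less setup just hands the whole virtual disk to LVM. A minimal
sketch, assuming the extra virtual disk shows up as /dev/sdb:

pvcreate /dev/sdb              # whole disk as a PV, no partition table
vgcreate vg0 /dev/sdb
lvcreate -L 10G -n data vg0
mkfs.ext4 /dev/vg0/data

The alternative is to partition first and run pvcreate on /dev/sdb1; one
extra step, but other tools can then see that the disk is in use.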

Rgds,
--


Re: [gentoo-user] vmWare HowTo / best practices

2013-04-20 Thread Pandu Poluan
On Apr 19, 2013 11:14 PM, J. Roeleveld jo...@antarean.org wrote:

 Pandu Poluan pa...@poluan.info wrote:


 On Apr 19, 2013 10:24 PM, Jarry mr.ja...@gmail.com wrote:
 
  On 19-Apr-13 16:21, Tanstaafl wrote:
 
  Previously I had asked for some help with a preconfigured image, but
  decided against that, and have been playing and reading.
 
  I'm ready to get down to brass tacks with the ultimate goal of
getting a
  new gentoo vm up and running on my esxi host this weekend.
 
  Can someone point me to some recent/decent docs on best practices for
  this? Ideally gentoo related, but just general linux related would be
ok
  too.
 
  Things like vmware-tools installation (is open-vm-tools good enough
  nowadays?), time syncing, snapshots/backups, etc is what I'm looking
for.
 
 
  May I join the club? I have been running a few Gentoo-VMs
  for some time, but I'm still quite new to this ESXi-world.
  But one I know for sure is that hypervisor-virtualization
  is much more complex than OS-virtualization (i.e. VServer
  or OpenVZ which I have used previously).
 
  vmware-tools: I have tested open-vm-tools but now I'm running
  my VMs without them because every kernel upgrade was a real
  pain in a**. And truly I did not see any benefit in running
  vm-tools (maybe it would be different on desktop). For
  shutdown of Gentoo-VMs from ESXi I use ssh-script or
  hibernation.
 
  Snapshots are very well covered by esxi and for backup I use
  the ghetto-vcb tool (script). I tried backup & restore on one
  of my running Gentoo-VM servers and it works like a charm.
 
  For VM-hardware I used (iirc) CentOS template, because
  with other linux 64b I did not get hw-options I wanted
  to use (LSI-Logic Parallel SCSI controller, and VMXNET3
  network adapter).
 
  Unfortunately there is not a lot of info about Gentoo & ESXi,
  and what exists is quite outdated (i.e. Gentoo-wiki). But
  I used guidelines for general linux-VM, and I consulted
  problems on VMware community web-page...
 
  Jarry
  --

 Well, for me, XenServer-based virtualization is very very simple. And if
I compile the kernel with all Xen PV (paravirtualized) 'FrontEnds', it runs
near-natively.

 Only the xend daemon needs some 'tweaking' to run properly.

 Do a Google search for gentoo xenserver and if you find pages written
by me, those are my experiences running Gentoo on top of XenServer,
successfully.

 Rgds,
 --


 Pandu.

 Do you still use xend on your Xen hosts?
 I thought that was deprecated?


Ah, sorry. What I meant was the xstools daemon. It's necessary to properly
monitor Linux PV guests on XenServer.

It's apparently a simple shell script, which commits suicide when it can't
determine what Linux distro it's running in. One needs to add several lines
of code to the script to enable it to recognize Gentoo and not commit
suicide.
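
Purely as a sketch of the idea -- the actual variable and function names in
the xstools script differ, so treat everything below as hypothetical:

# hypothetical addition to the script's distro-detection logic
if [ -f /etc/gentoo-release ]; then
    # report ourselves as Gentoo instead of bailing out
    os_distro="gentoo"
fi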

XenCenter/CloudStack will afterwards detect that the machine is running
Gentoo. It won't affect anything, but it's better than it reporting
unknown Linux :-)

Rgds,
--


Re: [gentoo-user] vmWare HowTo / best practices

2013-04-20 Thread Pandu Poluan
On Apr 20, 2013 9:31 PM, J. Roeleveld jo...@antarean.org wrote:

 On Fri, April 19, 2013 18:42, Jarry wrote:
  On 19-Apr-13 17:52, Pandu Poluan wrote:
 
  Well, for me, XenServer-based virtualization is very very simple. And
if
  I compile the kernel with all Xen PV (paravirtualized) 'FrontEnds', it
  runs near-natively.
 
  Only the xend daemon needs some 'tweaking' to run properly.
 
  Do a Google search for gentoo xenserver and if you find pages written
  by me, those are my experiences running Gentoo on top of XenServer,
  successfully.
 
  What I had in mind is administration of the hypervisor itself.
  ESXi is a feature-rich product, and to handle all its possibilities
  (i.e. vMotion, vShield, HA, FT, vCenter, DRS/DPM, FW, etc.) one has
  to spend quite a long time studying, and the learning curve is
  very steep (again, I'm comparing with VServer or OpenVZ/Virtuozzo,
  I do not know XenServer).
 
  Deploying a Gentoo guest (or VM / DomU as they call it) is
  actually very easy. And after reading your wiki-page I'd say
  it is easier on ESXi than on XenServer, because there is actually
  no difference between installing Gentoo on VM, or real hardware
  (no need for special compile options or special device-files,
  no limit on boot-loader, etc.).

 Actually, deploying it on ESXi and on XenServer is both very easy.
 The difference is, XenServer has 2 options for the guests:
 1) Fully Virtualised
 2) Paravirtualised

 ESXi only supports the first.
 If you install all VMs using the first option, it is very simple.

 But, if you want maximum (as in, near native) performance, the 2nd option
 is definitely worth the extra effort.

 I use a Gentoo Dom0 (Xen Host) with several Gentoo VMs running on top of
 it. I only had to add a few options to the kernel configuration to get the
 VMs working.
 Similar effort to installing a Gentoo guest on ESXi, but on ESXi, I would
 need to add the VMWare tools to get the VMs to shutdown correctly when I
 need to shutdown the host.

 --
 Joost Roeleveld



True. Since on Gentoo we have access to the source code (and have to
compile our own kernel anyway), it's just wasteful not to leverage the
near-native performance of full PV (paravirtualized) mode, in which all
important devices, namely storage and networking, are offloaded to the dom0
via the proper hypercalls.
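
For the record, the frontends in question are ordinary kernel options
(names as of the 3.x kernels):

CONFIG_XEN=y
CONFIG_XEN_BLKDEV_FRONTEND=y   # paravirtualized disk
CONFIG_XEN_NETDEV_FRONTEND=y   # paravirtualized network
CONFIG_HVC_XEN=y               # Xen console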

XenServer also has all the so-called 'advanced' features of VMware, even in
the free edition. Citrix only charged for the 'automation' tools (e.g.,
automatic workload balancing). But if one is well versed with programming
and the CloudStack API, one can easily create one's own automation system.

In my current employment, I have been busy rebuilding the data center using
XenServer, and decommissioning many VMware ESXi hosts while at it. By the
end of this year, I expect we will be VMware-free.

Rgds,
--



Re: [gentoo-user] Best filesystem for virtualized gentoo mail server - WAS: vmWare HowTo / best practices

2013-04-20 Thread Pandu Poluan
On Apr 20, 2013 10:01 PM, Tanstaafl tansta...@libertytrek.org wrote:

 Thanks for the responses so far...

 Another question - are there any caveats as to which filesystem to use
for a mail server, for virtualized systems? Ir do the same issues/questions
apply (ie, does the fact that it is virtualized not change anything)?

 If there are none, I'm curious what others prefer.

 I've been using reiserfs on my old mail server since it was first set up
(over 8 years ago). I have had no issues with it whatsoever, and even had
one scare with a bad UPS causing the system to experience an unclean
shutdown - but it came back up, auto fsck'd, and there was no 'apparent'
data loss (this was a very long time ago, so if there had been any serious
problems, I'd have known about it long ago).

 I've been considering using XFS, but have never used it before.

 So, anyway, opinions are welcome...

 Thanks again

 Charles


Reiterating what others have said, in a virtualized environment, it's how
you build the underlying storage that will have the greatest effect on
performance.

Just an illustration: in my current employment, we have a very heavily used
database (SQL Server). To ensure good performance, I dedicated a RAID array
of 8 drives (15k RPM each), ensured that the space allocation was 'thick' not
'thin', and dedicated the whole RAID array to just that one VM. Performance
went through the roof with that one... especially since it was originally a
physical server running on top of 4 x 7200 RPM drives ;-)

If you have the budget, you really should invest in a SAN Storage solution
that can provide tiered storage, in which frequently used blocks will be
'cached' in SSD, while less frequently used blocks are migrated first to
slower SAS drives, and later on (if 'cold') to even slower SATA drives.

Rgds,
--


Re: [gentoo-user] vmWare HowTo / best practices

2013-04-19 Thread Pandu Poluan
On Apr 19, 2013 10:24 PM, Jarry mr.ja...@gmail.com wrote:

 On 19-Apr-13 16:21, Tanstaafl wrote:

 Previously I had asked for some help with a preconfigured image, but
 decided against that, and have been playing and reading.

 I'm ready to get down to brass tacks with the ultimate goal of getting a
 new gentoo vm up and running on my esxi host this weekend.

 Can someone point me to some recent/decent docs on best practices for
 this? Ideally gentoo related, but just general linux related would be ok
 too.

 Things like vmware-tools installation (is open-vm-tools good enough
 nowadays?), time syncing, snapshots/backups, etc is what I'm looking for.


 May I join the club? I have been running a few Gentoo-VMs
 for some time, but I'm still quite new to this ESXi-world.
 But one I know for sure is that hypervisor-virtualization
 is much more complex than OS-virtualization (i.e. VServer
 or OpenVZ which I have used previously).

 vmware-tools: I have tested open-vm-tools but now I'm running
 my VMs without them because every kernel upgrade was a real
 pain in a**. And trully I did not see any benefit in running
 vm-tools (maybe it would be different on desktop). For
 shutdown of Gentoo-VMs from ESXi I use ssh-script or
 hibernation.

 Snapshots are very well covered by esxi and for backup I use
 ghetto-vcb tool (script). It tried backuprestore on one
 of my running Gentoo-VM servers and it works like charm.

 For VM-hardware I used (iirc) CentOS template, because
 with other linux 64b I did not get hw-options I wanted
 to use (LSI-Logic Parallel SCSI controller, and VMXNET3
 network adapter).

 Unfortunatelly there is not a lot info about Gentoo  ESXi
 and what exists is quite outdated (i.e. Gentoo-wiki). But
 I used guidelines for general linux-VM, and I consulted
 problems on VMware community web-page...

 Jarry
 --

Well, for me, XenServer-based virtualization is very very simple. And if I
compile the kernel with all Xen PV (paravirtualized) 'FrontEnds', it runs
near-natively.

Only the xend daemon needs some 'tweaking' to run properly.

Do a Google search for gentoo xenserver and if you find pages written by
me, those are my experiences running Gentoo on top of XenServer,
successfully.

Rgds,
--


Re: [gentoo-user] which machine to buy for perfect gentoo machine?!

2013-04-14 Thread Pandu Poluan
On Apr 14, 2013 1:27 PM, Michael Mol mike...@gmail.com wrote:

 On 04/14/2013 01:55 AM, Pandu Poluan wrote:
 
  On Apr 14, 2013 1:42 AM, Michael Mol mike...@gmail.com
  mailto:mike...@gmail.com wrote:
 

 [snip]

 
  What I meant was: given 4 physical AMD cores (but only 2 FPUs, courtesy
  of AMD's Bulldozer/Piledriver arch) vs 4 virtual Intel cores (2 cores
  split into 4 by Hyperthreading), I undoubtedly prefer 4 physical ones.
 
  (Of course if the Intel CPU has 4 physical cores, it should be compared
  with an 8-core AMD CPU).
 
  I had some lively discussion on AMD vs Intel *for virtualization* in the
  Gentoo Community on Google+, which referenced a thread on ServerFault.
  The conclusion was: Intel CPUs (provided they support VT-x) can run
  baremetal virtualization as well as AMD, in the majority of cases.
 
  It's the minority of cases -- edge cases -- that I'm concerned with.
  And, lacking the money to actually buy 2 complete systems to perform
  comparison, I'll take the safe route anytime.
 
  Yes, Intel's top-of-the-line processors might be faster than AMD's, but
  the latter is cheaper, and exhibited a much more 'stable' performance
  (i.e., no edge cases to bite me later down the road).
 
  That said, I read somewhere about the 'misimplementation' of some
  hypercalls in Intel CPUs... in which some hypercall exceptions are
  mistakenly handled by the Ring 0 hypervisor instead of the Ring 1 guest
  OS, thus enabling someone to 'break out' of the VM's space. This
  misimplementation is exploitable on KVM and Xen (the latter, my
  preferred baremetal virtualization).

 That's actually very interesting. I hadn't heard about this.


Here you go:

http://blog.xen.org/index.php/2012/06/13/the-intel-sysret-privilege-escalation/

It's CVE-2012-0217, and the guys from VUPEN have actually created a working
proof:

http://www.vupen.com/blog/20120904.Advanced_Exploitation_of_Xen_Sysret_VM_Escape_CVE-2012-0217.php

Rgds,
--


Re: [gentoo-user] which machine to buy for perfect gentoo machine?!

2013-04-13 Thread Pandu Poluan
On Apr 13, 2013 12:18 PM, Nilesh Govindrajan m...@nileshgr.com wrote:

 On Saturday 13 April 2013 10:39:08 AM IST, Kvothe Tech wrote:

 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA256

 Tamer Higazi th9...@googlemail.com wrote:

 Hi people!
  My old Core2Duo machine is slowly saying goodbye, and after 7 years I am
  at the point of buying myself a new developer machine that should serve
  me well for a long time again. With Intel I never had problems: all their
  systems were REALLY stable, and they were really worth their money down
  to the last cent.

  I am asking all the gentoo people for advice, for their opinion on which
  CPU to buy that would work perfectly with Gentoo, now and into the
  future.
 There are 3 choices:

 Intel Xeon E5-2650
 Core i7 3979 extreme edition
 AMD FX.8350 CPU


  For any advice, I kindly thank you.



 Tamer


  Xeon if it's a server, i7 if not, but that's just me; it also depends on
 what else you have in it or want, and what you're doing other than developing.
 - --
 Sent from Kaiten Mail. Please excuse my brevity.



  IMO the FX8350 would be better, especially if you're running long compile
 jobs (including portage as part of it).
  The FX8350 outperforms the i5 in various cases at its price point.


I myself prefer AMD CPUs to Intel ones.

Intel has this habit of 'segmenting' their processor features. E.g., Intel
VT-x (Intel's buggy implementation of AMD-V) is not available across the
board. If one needs to leverage VT-x for virtualization purposes, one must
be double sure that the CPU one bought supports VT-x.

All latest AMD CPUs (except the laptop versions) support all AMD features.
So, as long as one does not buy a laptop CPU, one can be sure that one gets
everything one wants from a modern CPU.

Rgds,
--


Re: [gentoo-user] which machine to buy for perfect gentoo machine?!

2013-04-13 Thread Pandu Poluan
On Apr 13, 2013 8:29 PM, Tamer Higazi th9...@googlemail.com wrote:

 Hi Dale!


 Am 13.04.2013 13:54, schrieb Dale:
  Pandu Poluan wrote:
 
 
  I myself prefer AMD CPUs to Intel ones.
 
  Intel has this habit of 'segmenting' their processor features. E.g.,
  Intel VT-x (Intel's buggy implementation of AMD-V) is not available
  across the board.

 What is VT-x 


you really should learn to use Google...

In short: VT-x is Intel's version of AMD-V.

What is AMD-V? It's a feature of AMD CPUs that *greatly* assists
virtualization.

It's not just VT-x, there are a *lot* of features that Intel may or may not
provide on a certain model.

 And also, all the time Intel promotes their Hyperthreading
 support, and Intel swears by the QuickPath system they have
 developed to replace the FSB, which is still being used at AMD,

Incorrect. AMD has used HyperTransport for a long time. QuickPath is just
Intel's version of HyperTransport.

As to Hyperthreading... it is actually technology from the Pentium 4 era,
part of the NetBurst architecture: it splits a core into two virtual cores,
leveraging Intel's long pipeline. There are benefits, but also drawbacks.

 even when they mention MT (Megatransfers instead of GHz) for
 describing their frontside bus speed

 so, it is in this case not only the CPU's speed, also the Speed the data
 reaches the memory, and other components like the GPU of your graphics
 device, no?!


Yes, and honestly, AMD was there first. IIRC, Intel still has some
problems with cache coherency on multiple-processor systems. AMD has no
such problems; the HyperTransport technology used by AMD is perfectly
capable of servicing NUMA architectures.


 And what about Hyperthreading?! In the Gentoo make configuration guide,
 the Intel Core i7 is fully supported.


The 'support' comes from gcc, and gcc fully supports AMD CPUs also.

 It is described there that if Intel Core i5 or i7 CPUs are used, I
 could double the number of CPUs for compiling:

 MAKEOPTS="-j8" (for a quad-core i5 / i7) because of its
 Hyperthreading support.


As I wrote above, Intel's Hyperthreading splits each core into two virtual
cores. Thus, if you know the number of physical cores *and* you've turned
on Hyperthreading in the BIOS, you can (and should) double the number of
jobs.

That information is *not* due to Gentoo better supporting Intel, it's there
because of Intel's complexity.

AMD CPUs from the get-go already support a higher core density than Intel;
they never need to split their cores into virtual cores.


  If one needs to leverage VT-x for virtualization
  purposes, one must be double sure that the CPU one bought supports
VT-x.
 
  All latest AMD CPUs (except the laptop versions) support all AMD
  features.

 Where are the latest AMD CPU sets on Gentoo used at all ?! What about
 the Intel's one?! And do they make a huge difference in this case?!


gcc -march=native will allow gcc to detect and leverage all features.
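
If you're curious what -march=native actually resolves to on a given box,
you can ask gcc itself:

gcc -march=native -E -v - < /dev/null 2>&1 | grep cc1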

I don't know which features are used where, except for AMD-V, which is
heavily leveraged by virtualization (virtualbox or Xen, in my situation).


 If you can give me a deep technical answer, I would be very happy


 The money is not what counts. It's the system stability. My AMD CPU, a
 very long time ago, was an AMD Athlon XP, which gave me lots of
 headaches.


You're sooo out of date.

Nowadays, AMD CPUs are at least as stable as Intel CPUs.

Rgds,
--


Re: [gentoo-user] which machine to buy for perfect gentoo machine?!

2013-04-13 Thread Pandu Poluan
On Apr 13, 2013 11:57 PM, Tamer Higazi th9...@googlemail.com wrote:

 Am 13.04.2013 18:24, schrieb Pandu Poluan:
 
  On Apr 13, 2013 8:29 PM, Tamer Higazi th9...@googlemail.com
  mailto:th9...@googlemail.com wrote:
 
  Hi Dale!
 
 
  Am 13.04.2013 13:54, schrieb Dale:
   Pandu Poluan wrote:
  
  
   I myself prefer AMD CPUs to Intel ones.
  
   Intel has this habit of 'segmenting' their processor features. E.g.,
   Intel VT-x (Intel's buggy implementation of AMD-V) is not available
   across the board.
 
  What is VT-x 
 
 
  you really should learn to use Google...
 
  In short: VT-x is Intel's version of AMD-V.
 
   What is AMD-V? It's a feature of AMD CPUs that *greatly* assists
   virtualization.
 
  It's not just VT-x, there are a *lot* of features that Intel may or may
  not provide on a certain model.
 
  And also all the time, Intel promotes for their Hiperthreading
  support, as well Intel swears on their QuickPath system they have
  developed and should release the FSB which is stil being used at AMD,
 
   Incorrect. AMD has used HyperTransport for a long time. QuickPath is just
  Intel's version of HyperTransport.
 
  As to Hyperthreading... it was technology from Pentium 4 actually,
  originally called NetBurst, it splits a core into two virtual cores,
  leveraging Intel's long pipeline. There are benefits, but also
drawbacks.
 
  even when they mention that MT (Megatransfer instead GHZ) for
  describing their frontside bus speed
 
  so, it is in this case not only the CPU's speed, also the Speed the
data
  reaches the memory, and other components like the GPU of your graphics
  device, no?!
 
 
  Yes, and honestly, AMD was there first. IIRC, Intel still have some
  problems with cache coherency on multiple processor systems. AMD has no
  such problems; the HyperTransport technology used by AMD is perfectly
  capable of servicing NUMA Architecture.
 
 
  And what about Hyperthreading?! At the Gentoo make configuration guide,
  the intel corei7 are fully supported.
 
 
  The 'support' comes from gcc, and gcc fully supports AMD CPUs also.
 
  There is being described, that if Intel corei 5 or 7 CPU's are used, I
  could double the amount of cpu's for compiling
 
  MAKEOPTS=-j8 (for a quadcore core i5 / 7) because of it's
  hyperthreading support.
 
 
  As I wrote above, Intel's Hyperthreading splits each core into two
  virtual cores. Thus, if you know the number of physical cores *and*
  you've turned on Hyperthreading in the BIOS, you can (and should) double
  the number of jobs.
 
  That information is *not* due to Gentoo better supporting Intel, it's
  there because of Intel's complexity.
 
  AMD CPUs from the get-go already support a higher core density than
  Intel; they never need to split their cores into virtual cores.
 
 
   If one needs to leverage VT-x for virtualization
   purposes, one must be double sure that the CPU one bought supports
  VT-x.
  
   All latest AMD CPUs (except the laptop versions) support all AMD
   features.
 
  Where are the latest AMD CPU sets on Gentoo used at all ?! What about
  the Intel's one?! And do they make a huge difference in this case?!
 
 
  gcc -march=native will allow gcc to detect and leverage all features.
 
  I don't know which features are used where, except for AMD-V, which is
  heavily leveraged by virtualization (virtualbox or Xen, in my
situation).
 
 
  If you can give me a deep technical answer, I would be very happy
 
 
   The money is not what counts. It's the system stability. My AMD CPU,
   a very long time ago, was an AMD Athlon XP, which gave me lots of
   headaches.
 
 
  You're sooo out of date.
 
  Nowadays, AMD CPUs are at least as stable as Intel CPUs.
 
  Rgds,
  --
 



 Hi Erick!
 Thank you very much for your great description that makes my decision
 easier.

 However, one last question


 On a modern AMD machine, would I have to enable hyperthreading support
 in the kernel as well, and should / must I double the cores at the
 MAKEOPTS flag ?!


One, I'm not Erick.

Two, please don't top-post.

Three, AMD has no concept of Hyperthreading. Just match -j to the number of
cores your CPU provides, and that's it.
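
For example, in /etc/portage/make.conf on an FX-8350 (8 cores), that would
simply be:

MAKEOPTS="-j8"

(The `nproc` command from coreutils prints the number of cores the kernel
sees, if you'd rather not count them yourself.)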

As I wrote, an AMD Quad Core provides 4 actual cores. An Intel Quad Core
with Hyperthreading actually provides only 2 physical cores, but then it
performs some internal trickery so the OS sees a total of 4 cores.

I much prefer having 4 actual cores to 4 virtual cores (only 2 actual
cores); less chance of things messing up royally if I hit some edge cases
where Hyperthreading falls flat on its face.

Rgds,
--


Re: [gentoo-user] which machine to buy for perfect gentoo machine?!

2013-04-13 Thread Pandu Poluan
On Apr 14, 2013 1:42 AM, Michael Mol mike...@gmail.com wrote:

 On 04/13/2013 01:45 PM, Pandu Poluan wrote:
 
 [snip]

  Three, AMD has no concept of Hyperthreading.

 Correct.

  Just match -j to the number of cores your CPU provides, and that's
  it.

 Well, YMMV. You can spend a lot of time adjusting -j on a per-system
 basis to account for things like I/O. Right now, I'm in the -j
 $(cores*1.5) -l $(cores) camp.

 
  As I wrote, an AMD Quad Core provides 4 actual cores.

 Correct.

  An Intel Quad Core with Hyperthreading actually provides only 2
  physical cores, but then it performs some internal trickery so the OS
  sees a total of 4 cores.

 Incorrect. Intel Quad Core with Hyperthreading means there are four
 physical cores, and there is hyperthreading enabled. This results in the
 OS seeing eight logical cores. There is sufficient information available
 via ACPI (or is it DMI?) that the kernel knows which virtual cores are
 part of which physical cores, which physical cores are part of which CPU
 packages, and how everything is connected together.


Ah yes, thank you for the correction. I misstated there, my bad.

What I meant was: given 4 physical AMD cores (but only 2 FPUs, courtesy of
AMD's Bulldozer/Piledriver arch) vs 4 virtual Intel cores (2 cores split
into 4 by Hyperthreading), I undoubtedly prefer 4 physical ones.

(Of course if the Intel CPU has 4 physical cores, it should be compared
with an 8-core AMD CPU).

I had some lively discussion on AMD vs Intel *for virtualization* in the
Gentoo Community on Google+, which referenced a thread on ServerFault. The
conclusion was: Intel CPUs (provided they support VT-x) can run baremetal
virtualization as well as AMD, in the majority of cases.

It's the minority of cases -- edge cases -- that I'm concerned with. And,
lacking the money to actually buy 2 complete systems to perform comparison,
I'll take the safe route anytime.

Yes, Intel's top-of-the-line processors might be faster than AMD's, but the
latter is cheaper, and exhibited a much more 'stable' performance (i.e., no
edge cases to bite me later down the road).

That said, I read somewhere about the 'misimplementation' of some
hypercalls in Intel CPUs... in which some hypercall exceptions are
mistakenly handled by the Ring 0 hypervisor instead of the Ring 1 guest OS,
thus enabling someone to 'break out' of the VM's space. This
misimplementation is exploitable on KVM and Xen (the latter, my preferred
baremetal virtualization).

 
  I much prefer having 4 actual cores than 4 virtual cores (only 2
  actual cores); less chance of things messing up royally if I hit some
  edge cases where Hyperthreading falls flat on its face.

 Whatever works. I'll note that AMD's piledriver core does something very
 complementary to hyperthreading. Where HT uses some circuitry to avoid
 context switching when changing whether a core is handling one thread vs
 another thread, Piledriver has a small number of physical front-end
 cores dispatching to a larger number of backend pipelines. It's a very
 curious architecture, and I look forward to seeing how it plays out. HT
 and Piledriver are conceptually very similar when you look at them in
 the right way...Piledriver might be seen as a more general approach to
 what HT does.


True. The main complexity is when an instruction requires access to the
FPU, since there's only one FPU per two GP cores. This will somewhat impact
applications that use the FPU heavily... unless they can switch to OpenCL
and leverage the embedded Radeon on AMD's so-called APUs.

 Personally, I've enjoyed both Intel and AMD processors. Last I assembled
 a system, Intel's midrange offered more bang for the buck than AMD, but
 Intel's midrange part was also much more expensive. OTOH, AMD systems
  could be upgraded piece by piece for much, much, much longer,
 whereas Intel systems tended to require replacing many more parts at the
 same time.

 That was about five years ago, though...I don't know exactly where
  things sit today. I'd start with the cpubenchmark.net CPU value
 listing, and find the best-value part that has the performance degree
 I'm looking for.

 http://cpubenchmark.net/cpu_value_available.html

 I might also cross-reference that page with this one:

 http://cpubenchmark.net/mid_range_cpus.html


True. My desktop computer died on me about 6 months ago. It was 4.5 years
old at the moment of death. It had served me very well.

That said, my brother had just purchased an AMD system (store-assembled)
with an FX-8350, and he said that it's faster than anything he's ever used
before, and he's used many high-end systems in his job (he's a Petroleum
Geologist, his line of work involves analyzing a HUGE amount of data to
find out the 'oil potential' of an area, to give his company a ballpark
figure on how much to bid for the exploitation rights to the area).

 If buying an Intel part, I'd be very, very careful to make sure that it
 supported all the features I want. I've

Re: [gentoo-user] Rant/Warning: fun with awesome and lightdm

2013-04-10 Thread Pandu Poluan
On Apr 9, 2013 11:18 PM, Marc Joliet mar...@gmx.de wrote:

 Update:

 I opened a bug: https://bugs.gentoo.org/show_bug.cgi?id=465288. There is a
 reference to the Awesome upstream bug which resulted in the change to the
 desktop file, along with a link to a LightDM upstream bug that sounds
like what
 was happening on my system.

 I hope I didn't aggravate anybody with this thread, I don't usually rant
 publicly like this (I'm sort of ashamed, actually).


No worries; sometimes one needs to 'let out' one's bottled emotions.
Keeping strong emotions unvented will be bad for your heart.

 I do consider the information relevant to Awesome users, though, since the
 change might also hit users of kdm, gdm, and others. In fact, Fedora also
has a
 bug about this: https://bugzilla.redhat.com/show_bug.cgi?id=901434.


... and that's why I'm sure many of us appreciate your report. They may not
say so publicly, but they are thankful :-)

Rgds,
--


Re: [gentoo-user] -march=? for this cpu

2013-04-09 Thread Pandu Poluan
On Apr 9, 2013 1:16 PM, Michael Hampicke gentoo-u...@hadt.biz wrote:

 Am 09.04.2013 05:50, schrieb Nilesh Govindrajan:
  I have two Gentoo VMs at Hetzner and the CPU supports 64 bit (grep lm
  /proc/cpuinfo = true).
 
  But something funny: gcc -march=native -mtune=native -v -E - 2>&1
  < /dev/null returns this:
 
  Using built-in specs.
  COLLECT_GCC=/usr/x86_64-pc-linux-gnu/gcc-bin/4.6.3/gcc
 
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-pc-linux-gnu/4.6.3/lto-wrapper
  Target: x86_64-pc-linux-gnu
  Configured with:
  /var/tmp/portage/sys-devel/gcc-4.6.3/work/gcc-4.6.3/configure
  --prefix=/usr --bindir=/usr/x86_64-pc-linux-gnu/gcc-bin/4.6.3
  --includedir=/usr/lib/gcc/x86_64-pc-linux-gnu/4.6.3/include
  --datadir=/usr/share/gcc-data/x86_64-pc-linux-gnu/4.6.3
  --mandir=/usr/share/gcc-data/x86_64-pc-linux-gnu/4.6.3/man
  --infodir=/usr/share/gcc-data/x86_64-pc-linux-gnu/4.6.3/info
 
--with-gxx-include-dir=/usr/lib/gcc/x86_64-pc-linux-gnu/4.6.3/include/g++-v4
  --host=x86_64-pc-linux-gnu --build=x86_64-pc-linux-gnu --disable-altivec
  --disable-fixed-point --without-ppl --without-cloog --enable-lto
  --enable-nls --without-included-gettext --with-system-zlib
  --enable-obsolete --disable-werror --enable-secureplt --enable-multilib
  --enable-libmudflap --disable-libssp --enable-libgomp
  --with-python-dir=/share/gcc-data/x86_64-pc-linux-gnu/4.6.3/python
  --enable-checking=release --disable-libgcj --enable-libstdcxx-time
  --enable-languages=c,c++,fortran --enable-shared --enable-threads=posix
  --enable-__cxa_atexit --enable-clocale=gnu --enable-targets=all
  --with-bugurl=http://bugs.gentoo.org/ --with-pkgversion='Gentoo 4.6.3
  p1.11, pie-0.5.2'
  Thread model: posix
  gcc version 4.6.3 (Gentoo 4.6.3 p1.11, pie-0.5.2)
  COLLECT_GCC_OPTIONS='-march=native' '-mtune=native' '-v' '-E'
   /usr/libexec/gcc/x86_64-pc-linux-gnu/4.6.3/cc1 -E -quiet -v -
  -march=pentium-m -mcx16 -msahf -mno-movbe -mno-aes -mno-pclmul -mpopcnt
  -mno-abm -mno-lwp -mno-fma -mno-fma4 -mno-xop -mbmi -mno-tbm -mno-avx
  -mno-sse4.2 -mno-sse4.1 --param l1-cache-size=32 --param
  l1-cache-line-size=64 --param l2-cache-size=4096 -mtune=generic
  ignoring nonexistent directory /usr/local/include
  ignoring nonexistent directory
 
/usr/lib/gcc/x86_64-pc-linux-gnu/4.6.3/../../../../x86_64-pc-linux-gnu/include
 
  #include "..." search starts here:
  #include <...> search starts here:
   /usr/lib/gcc/x86_64-pc-linux-gnu/4.6.3/include
   /usr/lib/gcc/x86_64-pc-linux-gnu/4.6.3/include-fixed
   /usr/include
  End of search list.
  # 1 "<stdin>"
  <stdin>:1:0: error: CPU you selected does not support x86-64 instruction
  set
 
  It is returning pentium-m as architecture, which indeed is 32bit. I'm
  presently running the VM with -march=core2, but this is very weird.
 
  This is the /proc/cpuinfo:
 
  processor   : 0
  vendor_id   : GenuineIntel
  cpu family  : 6
  model   : 2
  model name  : QEMU Virtual CPU version 1.0
  stepping: 3
  microcode   : 0x1
  cpu MHz : 3399.998
  cache size  : 4096 KB
  fpu : yes
  fpu_exception   : yes
  cpuid level : 4
  wp  : yes
  flags   : fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca
  cmov pse36 clflush mmx fxsr sse sse2 syscall nx lm rep_good nopl pni vmx
  cx16 popcnt hypervisor lahf_lm
  bogomips: 6799.99
  clflush size: 64
  cache_alignment : 64
  address sizes   : 40 bits physical, 48 bits virtual
  power management:
 
  Any idea what should be the march/mtune value?
 

 I don't have any experience with Hetzner's VMs - I only use their
 dedicated machines :)

 But as a more generic tip, you should install gentoo in VMs only with
 generic optimization. You can not safely rely on what CPU your VM will
 get. Maybe in 6 months the host your vm is on crashes, and your vm will
  be migrated to a new system with a totally different CPU (an actual
  hardware CPU, or the CPU configuration that qemu is going to emulate).


One tip from me: if your VMs are 64-bit, obviously the underlying CPU must
be 64-bit. So setting -march=nocona should be safe (Nocona was Intel's
first AMD64-compatible CPU).
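
In make.conf terms, that would be something like:

CFLAGS="-march=nocona -O2 -pipe"
CXXFLAGS="${CFLAGS}"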

Rgds,
--


Re: [gentoo-user] Re: Eth0 interface not found - udev that little slut!!!!!

2013-04-08 Thread Pandu Poluan
On Apr 9, 2013 12:32 AM, Jarry mr.ja...@gmail.com wrote:

 On 08-Apr-13 19:19, Michael Mol wrote:

 On 04/08/2013 12:28 PM, Bruce Hill wrote:

 On Sat, Apr 06, 2013 at 10:58:38PM -0400, Randy Barlow wrote:

 On Sat, 6 Apr 2013 22:35:22 -0400
 Nick Khamis sym...@gmail.com wrote:

 As for /sbin/ip. I have no such command.


 I'd recommend installing and becoming familiar with the iproute2
 package. I personally find the tools it delivers to be more intuitive
  than the older tools, and I *think* they are considered to obsolete some
  tools, such as ifconfig.


 Ack to Randy's. FWIW: http://inai.de/2008/02/19


 That page has a handy list at the end. I've gone back to the page twice
 today...bookmarked.


 Maybe time to update our Gentoo Handbook to use ip instead
 of ifconfig/route so that users could get used to it right
 during installation...


 Jarry
 --


TBH, ever since I first learnt about iproute2 -- about 3 or 4 years ago --
I no longer use ifconfig.

Its command structure is so similar to Cisco IOS that I immediately took a
liking to it. (Less cognitive dissonance going back and forth between Linux
and Cisco routers.)
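
A few common equivalents, for anyone still on ifconfig/route:

ip addr show                             # ifconfig
ip link set eth0 up                      # ifconfig eth0 up
ip route add default via 192.168.0.1     # route add default gw 192.168.0.1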

Rgds,
--


Re: [gentoo-user] Re: Eth0 interface not found - udev that little slut!!!!!

2013-04-08 Thread Pandu Poluan
On Apr 8, 2013 11:17 PM, Bruce Hill da...@happypenguincomputers.com
wrote:

 On Sun, Apr 07, 2013 at 07:42:23PM +0200, Michael Hampicke wrote:
 
  Mike is right, if it's not a dep of another ebuild, you don't need
  wpa_supplicant. I just upgraded udev to 200 on the last remote box
  (which is always a bit of a thrill after typing reboot return :-) ).
  As expected, eth0 came up, everything works fine, wpa_supplicant is not
  installed.

 Don't know what you guys do for rebooting a headless server blindly like
this,
 nor if it would work for the udev/NIC situation. But fwiw, what I've
always
 done for new kernels is:

 mingdao@server ~ $ egrep -v "(^#|^ *$)" /etc/lilo.conf
 compact
 lba32
 default = Gentoo-def
 boot = /dev/md0
 raid-extra-boot = mbr-only
 map = /boot/.map
 install = /boot/boot-menu.b   # Note that for lilo-22.5.5 or later you
   # do not need boot-{text,menu,bmp}.b in
   # /boot, as they are linked into the lilo
   # binary.
 menu-scheme=Wb
 prompt
 timeout=50
 append=panic=10 nomce dolvm domdadm rootfstype=xfs
 image = /boot/vmlinuz
 root = /dev/md0
 label = Gentoo
 read-only  # Partitions should be mounted read-only for checking
 image = /boot/vmlinuz.old
 root = /dev/md0
 label = Gentoo-def
 read-only  # Partitions should be mounted read-only for checking

 Then issue lilo -R Gentoo or whatever the label of your new kernel, and
if
 it boots, you're okay. If not, after 10 seconds of panic, it automatically
 reboots back into the default kernel and you can check logs to see what
you've
 broken. (panic=10 append statement and default = Gentoo-def) After you
know
 the new kernel works, comment the default line. (NB: You can name them
 differently, etc. It just helps to know before you reboot that if you
panic,
 the machine will boot back into the known, good, kernel.)

 Granted, this might not help with the udev/NIC situation, but it's saved
me
 from a few PEBKAC situations, as well as new kernel changes I'd not
learned
 until the reboot.

Personally, I always try to install *any* Linux server on top of Xen (in my
case, XenServer). That way, I always have a remote console.

Rgds,
--


Re: [gentoo-user] Re: Udev update and persistent net rules changes

2013-04-07 Thread Pandu Poluan
On Apr 7, 2013 5:59 PM, Neil Bothwick n...@digimed.co.uk wrote:

 On Sun, 7 Apr 2013 00:34:03 -0400, Walter Dnes wrote:

   Now I only had to figure out how to rename eth[0-9]+ to the custom
   naming scheme when using mdev.
 
***UDEV*** has broken using eth[0-9].  mdev works just fine, thank
  you.

 udev has broken nothing, it is avoiding the breakage caused by a
 fundamentally flawed renaming procedure. Or does mdev have some magic for
 for renaming eth0 to eth1 while eth1 already exists?


Broken or not depends totally on the eye of the beholder.

Server SysAdmins *sometimes* need to reboot, and if the name suddenly
changes, that's hell on earth for us.

AFAICT, prior to udev-200, once an interface got assigned an ethX moniker,
it just wouldn't change names unless there was a hardware change. At least,
that's my experience so far.

Rgds,
--


Re: [gentoo-user] Eth0 interface not found - udev

2013-04-07 Thread Pandu Poluan
On Apr 7, 2013 3:56 PM, Stroller strol...@stellar.eclipse.co.uk wrote:


 On 7 April 2013, at 07:00, Joseph wrote:
  ...
   Are these new udev rules going across all Linux distros, or is this
 something specific to Gentoo?

 I would assume across all distros.

  Gentoo generally makes a policy of just packaging whatever upstream
 offers. In fact, the origin of the ebuild is that it did little more than
 automate the `configure && make && make install` of compiling upstream's
 source.

 I don't see why the Gentoo devs would impose this on us, unless it came
from upstream.

  AIUI the motive for these changes is so that you can unpack an
enterprise-type server, the ones with two NICs on the motherboard, and
always know which NIC is which. You can then unpack a pallet load of them,
and deploy them without any need for determining which is which or for any
manual intervention. This is actually pretty important and useful, but I'm
not sure this has all been done the best way.

 Stroller.


AFAICT, on-board NICs have sequential MAC addresses, with the one labeled
Port 1 having the smallest MAC address. So far, *all* Linux distros I've
used on a server will reliably name Port X as eth$((X-1)). So it's
never a puzzle as to which port bears which ethX moniker.

The new naming scheme, however, is much less intuitive. Where originally I
could just immediately use eth0, now I have to enumerate the monikers first,
because even between servers of the same model (let's say, HP's DL360 G7),
the PCI attachment point might differ.

Granted, Linux SysAdmins *are* expected to understand the vagaries of
Linux, but it's still a great inconvenience.

Rgds,
--


Re: [gentoo-user] Eth0 interface not found - udev that little slut!!!!!

2013-04-07 Thread Pandu Poluan
On Apr 7, 2013 8:13 AM, William Kenworthy bi...@iinet.net.au wrote:

 On 07/04/13 01:10, Alan Mackenzie wrote:
  'Evening, Alan.
 
  On Sat, Apr 06, 2013 at 06:36:07PM +0200, Alan McKinnon wrote:
  On 06/04/2013 17:57, Alan Mackenzie wrote:
  Please excuse me, I am running back and forth from the servers and
  typing the error message here. Did our configuration get switched to
  IP6? These are our DB servers and why me!!! Why ME!
  No, it's not just you, it's happened to pretty much everybody.
 udev-200
  now renames eth0, eth1, 
 
  Please please PLEASE, for the love of god joseph mary and every other
  $DEITY on the planet
 
  STOP SPREADING THIS FUD
 
  It did not happen to pretty much everybody. It happened to people who
  blindly updated thignsd and walked away, who did not read the news
  announcement, who did not read the CLEARLY WORDED wiki article at
  freedesktop.org or alternatively went into mod-induced panic and
started
  making shit up in their heads.
 
  Steady on, old chap!  By it I was meaning the general inconvenience
  all round occasioned by the changes between udev-{197,200}.  Not
  everybody encountered this.  For example Dale, and Walt D. didn't have
  to do anything.  But pretty much everybody else did.
 
 

  I didn't get hit either, but (STRONG hint) ... I use eudev, so
  does Dale, and I believe Walt uses mdev.  Time for those in server
  environments to jump ship?

  It may hit us eventually, but at the moment it's :)

 BillK

Well, *my* Gentoo servers are already running mdev...

Hmm... doesn't anyone think it's weird that we haven't heard any complaints
/ horror stories from the Gentoo-server mailing list?

Rgds,
--


Re: [gentoo-user] Re: Udev update and persistent net rules changes

2013-04-06 Thread Pandu Poluan
On Apr 6, 2013 3:44 PM, Neil Bothwick n...@digimed.co.uk wrote:

 On Fri, 5 Apr 2013 21:14:39 -0400, Walter Dnes wrote:

  * on a machine with multiple network cards *ALL USING DIFFERENT DRIVERS*
  * drivers are built as modules, not built-in into the kernel
  * is it possible to set things up so that the network driver modules do
not load automatically at bootup?
  * have a script in /etc/local.d/ (or wherever) modprobe the drivers in
the desired order
 
I can see complications involving services that depend on net (e.g.
  sshd), but in general, would it work reliably?

 What happens if one of the modules fails to load for any reason?

 If you need persistent device names, set up rules to give them, but use
  names outside of the kernel namespace to avoid the problems that udev is
  trying to avoid with its new naming rules.


Ahhh... I think now I understand...

So. Here's my summarization of the situation:

* The ethX naming can change, i.e., the interfaces can get out of order
* So, to fix this, udev decided to use the physical attachment points of
the NICs in deriving a persistent name, a name that will be identical across
boots as long as there is no hardware change
* In doing so, it also frees the 'traditional' ethX names to be used
* If one wants, one can still 'rename' the NICs to the 'traditional' names
using the 70-*.rules script
* Doing so (specifying the NICs' names using the 70-*.rules script) will
also disable the new 'persistent naming' system -- for the NICs specified
in the 70-*.rules file
* Therefore, users that will be impacted are those that upgraded udev but
don't have the 70-*.rules file, for udev will then assign new names for the
NICs
* For these users, specifying the netsomething switch for the kernel
(sorry, forgot the complete switch) will disable the new naming system

So, have I gotten everything correctly?

CMIIW, please.

Rgds,
--


Re: [gentoo-user] Re: Udev update and persistent net rules changes

2013-04-06 Thread Pandu Poluan
On Apr 6, 2013 7:32 PM, kwk...@hkbn.net wrote:

 On Sat, 6 Apr 2013 19:11:46 +0700
 Pandu Poluan pa...@poluan.info wrote:

  On Apr 6, 2013 3:44 PM, Neil Bothwick n...@digimed.co.uk wrote:
  
   On Fri, 5 Apr 2013 21:14:39 -0400, Walter Dnes wrote:
  
* on a machine with multiple network cards *ALL USING DIFFERENT
DRIVERS*
* drivers are built as modules, not built-in into the kernel
* is it possible to set things up so that the network driver
modules do not load automatically at bootup?
* have a script in /etc/local.d/ (or wherever) modprobe the
drivers in the desired order
   
  I can see complications involving services that depend on net
(e.g. sshd), but in general, would it work reliably?
  
   What happens if one of the modules fails to load for any reason?
  
   If you need persistent device names, set up rules to give them, but
   use names outside of the kernel namespace to avoid kk problems that
   udev is trying to avoid with its new naming rules.ooh
  
 
  Ahhh... I think now I understand...
 
  So. Here's my summarization of the situation:
 
  * The ethX naming can change, i.e., the interfaces can get out of
  order
  * So, to fix this, udev decided to use the physical attachment points
   of the NICs in deriving a persistent name, a name that will be
  identical across boots as long as there is no hardware change

 There are also other ways such as using the mac address (disabled by
 default).

  * In doing so, it also frees the 'traditional' ethX names to be used

 No.  The eth[0-9]+ namespace is not free, it has always been used by
 the linux kernel, and will stay so.

  * If one wants, one can still 'rename' the NICs to the 'traditional'
  names using the 70-*.rules script
   * Doing so (specifying the NICs' names using the 70-*.rules script)
   will also disable the new 'persistent naming' system -- for the NICs
   specified in the 70-*.rules file
   * Therefore, users that will be impacted are those that upgraded udev
   but don't have the 70-*.rules file, for udev will then assign new names
   for the NICs
  * For these users, specifying the netsomething switch for the
  kernel (sorry, forgot the complete switch) will disable the new
  naming system
 
  So, have I gotten everything correctly?

 Almost, except you should not specify a name that is also eth[0-9]+
 (what you called 'traditional' name), since it can cause a race
 condition where the kernel and udev fight for the name.  While it used
  to be the case (i.e. udev-197) that udev tried to handle the race
  condition, the devs have decided to remove that code.

 Regards,

 Kerwin.

Ah, thanks for the clarification! :-)

So, from now on, for safety, I'm going to use a custom naming scheme, like
lan[0-9]+ or wan[0-9]+ or wifi[0-9]+ -- anything that won't collide with
the kernel's eth[0-9]+ namespace.
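
Under udev, such a rule would look roughly like this (MAC address made up,
and the filename is just an example); doing the same under mdev is exactly
the part still to figure out:

# /etc/udev/rules.d/70-net-names.rules
SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="00:16:3e:aa:bb:cc", NAME="lan0"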

Now I only have to figure out how to rename eth[0-9]+ to the custom naming
scheme when using mdev.

Rgds,
--


Re: [gentoo-user] Re: Udev update and persistent net rules changes

2013-04-01 Thread Pandu Poluan
On Apr 1, 2013 2:10 AM, Alan McKinnon alan.mckin...@gmail.com wrote:

 On 31/03/2013 20:26, Dale wrote:
  Nuno J. Silva (aka njsg) wrote:
  On 2013-03-31, Dale rdalek1...@gmail.com wrote:
  Pandu Poluan wrote:
 
 
   Since it's obvious that upstream has this my way or the highway
  mentality, I'm curious about whether eudev (and mdev) exhibits the
  same behavior...
 
  I synced yesterday and I didn't see the news alert.   Last eudev
update
  was in Feb. so I *guess* not.  It seems to be a udev thing.  That is
  why I mentioned eudev to someone else that was having this issue with
a
  server setup.
  I'd guess eudev will eventually do the same, although I hope that, it
  being a separate codebase, it will be easier to adopt some solution like
  the old rule generator, instead of using udev's approach.
 
  The udev upstream may have its issues, but there's actually a point in
  removing this; the approach used so far was just a dirty hack.
 
 
 
  Thing is, it works for me.  The old udev worked,

 It's more accurate to say it worked by accident rather than by design.
 (Sort of like how the power utility gets power to your house - if yours
 is anything like mine I get power despite their best efforts to not give
 me any ...)

 Anyway, the old method sucked and it sort of works for you and me because
 we don't add anything ourselves that trips it up. But this new method...
 geez lads, I just dunno.

 How do Windows, Mac and Android deal with this stuff? They don't seem to
 have any device naming problems, so what is the magic solution in use on
 those platforms?


I'm not sure about Macs and Android, but with Windows it all happens based
on MAC address.

I found out about it quite accidentally; I had exported a VM from XenServer
and imported it into a different host. By default, XenServer assigns a new,
random MAC to imported VMs. Windows saw this and proceeded to initialize a
new NIC. When I tried setting the IP settings, it complained that the
settings were currently being used by an invisible NIC. So, I shut down the
VM, restored the old MAC, and the prior settings reappeared.

Rgds,
--


Re: [Bulk] [gentoo-user] Re: Udev update and persistent net rules changes

2013-04-01 Thread Pandu Poluan
On Apr 1, 2013 1:54 PM, Neil Bothwick n...@digimed.co.uk wrote:

 On Sun, 31 Mar 2013 21:34:51 +0100, Kevin Chadwick wrote:

   What about USB network adaptors? A user may not even realise they
   plugged it into a different USB slot from last time, yet the device
   name changes.
 
  Fair point but wouldn't that be only if you plug in two of the same
  type that the names may switch?

  According to Flameeyes' blog, if you have only one adaptor, its name will
  change according to the port used, which is a rather different definition
  of persistent than I have been used to.


 --
 Neil Bothwick

 All mail what i send is thoughly proof-red, definately!

True, that.

I still don't understand what's so bad about MAC-based identification. I
mean, uniqueness is defined through MAC address identity; the system name is
just a label...

Rgds,
--


Re: [gentoo-user] Re: Udev update and persistent net rules changes

2013-03-31 Thread Pandu Poluan
On Mar 31, 2013 7:13 PM, Nuno J. Silva (aka njsg) nunojsi...@ist.utl.pt
wrote:

 On 2013-03-31, Nuno J. Silva (aka njsg) nunojsi...@ist.utl.pt wrote:
  On 2013-03-31, Nikos Chantziaras rea...@gmail.com wrote:
  On 30/03/13 17:15, Tanstaafl wrote:
  Ok, just read the new news item and the linked udev-guide wiki page
 
  You should probably also read:
 
 http://blog.flameeyes.eu/2013/03/predictably-non-persistent-names
 
  and:
 
 
 
http://blog.flameeyes.eu/2013/03/predictable-persistently-non-mnemonic-names
 
  The feeling that I got while reading the first was exactly what the
  second talks about.
 
  We - from what I understand - had scripts automatically generating the
  name rules from MAC addresses; it's just that they generated names like
  ethX.
 
  Can't we just keep these scripts around (even if this was something
  provided by upstream and we would have to forge a new incarnation)?
 
  I mean, IMHO, net0, wl0, ... are much easier to deal with and understand
  than something physically-based. They also avoid problems caused by
  moving these cards around, or changes in the kernel drivers or BIOS, or
  BIOS settings that eventually end up exposing cards in a different way.
 
  The problem with the old approach was *just* the name clash that
  rendered the hacky approach unreliable. Maybe we could just fix the
  issue by using non-clashing namespaces, instead of pushing a completely
  different (and possibly less reliable) naming scheme by default.
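
  Something along these lines, say (a sketch, with a made-up MAC):

  SUBSYSTEM=="net", ACTION=="add", ATTR{address}=="52:54:00:12:34:56", NAME="net0"

  Since the kernel itself never assigns netX names, the rename can't race
  against a kernel-created eth0 the way the old ethX rules could.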

 Ok, after some chat on IRC, it seems that upstream made it rather
 non-trivial to have something like the old rule-generator, and that's
 why we can't simply move that from, e.g., ethX to, say, netX.

 --
 Nuno Silva (aka njsg)
 http://njsg.sdf-eu.org/



Since it's obvious that upstream has this my way or the highway mentality,
I'm curious about whether eudev (and mdev) exhibit the same behavior...

Rgds,
--


Re: eudev - is it a viable *long-term* option? - WAS: Re: [gentoo-user] Updating our live servers. I'm scared!

2013-03-30 Thread Pandu Poluan
On Mar 30, 2013 9:48 PM, Tanstaafl tansta...@libertytrek.org wrote:

 I should have added that this is for a server (not hardened), so I don't
 care about hotplug this or that; I just care about stability and
 reliability with respect to updates not breaking the ability to boot...


 On 2013-03-30 10:39 AM, Tanstaafl tansta...@libertytrek.org wrote:

 On 2013-03-28 2:15 PM, Dale rdalek1...@gmail.com wrote:

 Just a thought.  Have you thought about switching to eudev?  That would
 solve some udev issues.  Since you are running a hardened profile and
 servers, it may not be an option tho.


 I'm curious...

 Is eudev still being 'maintained'? Does it still have any advantages
 over the new udev?

 I'm mostly concerned about getting so far behind that I end up in an
 untenable situation... i.e., eudev dies in 1+ years, and the changes
 between now and then make it virtually impossible to update to whatever
 is the new way...




All my servers use mdev.

'nuff said.
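
For the curious, the guts of an mdev setup are tiny. A sketch of the
classic busybox recipe (which, IIRC, is roughly what Gentoo's mdev
instructions wrap in an init script):

mdev -s                                     # populate /dev once at boot
echo /sbin/mdev > /proc/sys/kernel/hotplug  # let mdev handle hotplug events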

Rgds,
--


Re: [gentoo-user] iptables (not) started?

2013-03-29 Thread Pandu Poluan
On Mar 30, 2013 1:27 AM, Jarry mr.ja...@gmail.com wrote:

 Hi Gentoo-users,

 I noticed one thing on my server: during boot-up, no message about the
 firewall being started is printed on the console. I always have to check
 manually whether the iptables rules have been loaded. Strangely, on
 shutdown I do see the messages I expect:

 * Saving iptables state ...  [ ok ]
 * Stopping firewall ...  [ ok ]

Slightly tangential to the subject, but related...

I personally prefer *not* to automatically save iptables rules on shutdown.

That way, if I made some stupid mistake, a reboot restores the system to
the LKGC (Last Known Good Configuration)...
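
On Gentoo that's a one-liner in /etc/conf.d/iptables (a sketch; check the
variable names your init script actually uses):

SAVE_ON_STOP="no"

The on-disk ruleset then only changes when you explicitly run
/etc/init.d/iptables save, so a reboot rolls back any experiment.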

Rgds,
--


Re: [gentoo-user] ext4 inline data

2013-03-29 Thread Pandu Poluan
On Mar 29, 2013 8:49 PM, Florian Philipp li...@binarywings.net wrote:

 Hi list!

 I noticed that beginning with kernel 3.8, ext4 can store small files
 entirely inside the inode. But I couldn't find much additional
 information:

 - Is the improvement automatically enabled?

 - Is the change backwards compatible? Can I still read such files with
 kernel 3.7?

 - Can current stable e2fsprogs (especially e2fsck) handle this?

 Thanks in advance!
 Florian Philipp


My question would be: will it give me a significant enough advantage in my
situation that I'm willing to live with the obvious drawbacks?
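
(If memory serves, inline data is an incompat feature flag: not enabled
automatically, and a filesystem with it set won't mount on older kernels.
A sketch, assuming an e2fsprogs recent enough to know the flag, which the
current stable one may well not be:

mkfs.ext4 -O inline_data /dev/sdXn
tune2fs -l /dev/sdXn | grep -i features

I'd check the e2fsprogs release notes before letting e2fsck near it.)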

Rgds,
--

