On 4 May 2012, at 17:00, Stan Hoeppner wrote:

> On 5/3/2012 6:54 PM, Bill Cole wrote:
>> ...
>> For many of these systems, the OS resides on a mirrored pair of local
>> disks which see very infrequent writes, because every filesystem with
>> significant flux is physically resident across the SAN. Spinning disks
>> draw power. Anything drawing power generates heat. Heat requires
>> cooling. Cooling typically requires more power than the devices it is
>> compensating for. Cooling also requires careful attention to the
>> details of physical server density and rack design and so on...

> This could be completely resolved by PXE/bootp and NFS-mounted root
> filesystems, and save you $200-500/node in disk drive costs after
> spending $1000-2000 for the NFS server hardware, or nothing using a VM
> server.  It would also save you substantial admin time by using
> templates for new node deployments.  This diskless node methodology has
> been around for ~30 years.
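
(For concreteness, the classic shape of such a setup, sketched under the assumption of ISC dhcpd, pxelinux, and a Linux NFS root server; every address, path, and node name below is hypothetical:

    # dhcpd.conf -- point PXE clients at the TFTP server and boot loader
    next-server 192.168.0.1;
    filename "pxelinux.0";

    # /etc/exports on the NFS server -- one exported root tree per node
    /srv/nfsroot/node1  192.168.0.0/24(rw,no_root_squash,async)

    # pxelinux.cfg/default -- kernel command line for an NFS root
    label linux
        kernel vmlinuz
        append root=/dev/nfs nfsroot=192.168.0.1:/srv/nfsroot/node1 ip=dhcp rw

A new node is then a copy of a template root tree plus a dhcpd host entry and an exports line.)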

Yes, it is possible to fundamentally re-architect working environments that have been "organically" developed over years by adding significant new infrastructure, to save on the capital costs of hypothetical growth and maybe on future admin time. The idea that a server in the $1000-$2000 range would be part of a global conversion to diskless servers, or even the largest capital cost of such a project, reveals that I failed to communicate an accurate understanding of the environment, but that's not terribly important. There's no shortage of well-informed, well-developed, specific proposals for comprehensive infrastructure overhaul, and in the interim between now and the distant never when one of those meets up with a winning lottery ticket and an unutilized skilled head or three, I have sufficient workarounds in place.

I didn't mention that environment because I was seeking a solution, but rather to point out that there are real-world systems that take advantage of the power management capabilities of modern disks and have nothing else in common with the average personal system. I think that was responsive to the paragraph of yours that I originally quoted. It's easy to come up with flippant advice for others to spend time and money replacing stable working systems, but it is also irrelevant and a bit rude.

[...]
>> Ultimately the result is having to choose between power management
>> and timely delivery. If the periodic wakeups didn't force a disk
>> write, it would be less onerous to let master run in its normal
>> persistent mode for a lot of Postfix users (many of whom may not even
>> be aware that they are Postfix users).

> This is only true if two things persist into the future:
>
> 1.  Postfix isn't modified in order to perform a power management role

No reason for it to "perform," but it would be nice for it to "stop thwarting."
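
(For context, the wakeup behavior in question is the sixth column of master.cf. A hedged illustration using the stock Postfix 2.x entries; the zeroed variant is the known workaround, not a recommendation:

    # service type  private unpriv  chroot  wakeup  maxproc command
    pickup    fifo  n       -       n       60      1       pickup
    qmgr     fifo  n       -       n       300     1       qmgr
    # "0" disables the periodic wakeup, trading quiet disks for prompt
    # handling of locally submitted mail:
    #pickup   fifo  n       -       n       0       1       pickup
    # a "?" suffix wakes a service only if it has actually been used:
    flush     unix  n       -       n       1000?   0       flush

Each wakeup has master write a trigger byte into the service's FIFO, which is where the disk traffic discussed below comes from.)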

> 2.  Laptops will forever have spinning rust storage

Who said anything about laptops?

> Addressing the first point, should it be the responsibility of
> application software to directly address power management concerns?  Or
> should this be left to the OS and hardware platform/BIOS?

Applications should not do things that are actively hostile to housekeeping functions of lower-level software (in this case: drive firmware) without a functional justification. It's not wrong for a filesystem to change the mtime on a pipe with every write to it, nor is it wrong for a filesystem to commit every change in a timely manner. This is not really fixable at a lower level without eliminating the hardware in question or making changes to filesystem software that could cause wide-ranging problems with other software.
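
(A trivial sketch of that mechanism, for anyone who wants to see it: on Linux, each write to a FIFO marks the FIFO's mtime for update, and a journaling filesystem will then commit that inode change to disk. This is demo code, not Postfix source; the path is made up, and opening a FIFO O_RDWR is a Linux-ism used here to avoid blocking on a missing reader.

    /* fifodemo.c -- show that writing one byte to a FIFO bumps its mtime */
    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    int main(void)
    {
        const char *path = "/tmp/demo.fifo";   /* hypothetical path */
        struct stat before, after;
        int fd;

        unlink(path);                          /* start clean */
        if (mkfifo(path, 0600) < 0) { perror("mkfifo"); return 1; }
        /* O_RDWR so the open doesn't block waiting for a reader */
        if ((fd = open(path, O_RDWR | O_NONBLOCK)) < 0) { perror("open"); return 1; }

        stat(path, &before);
        sleep(2);                              /* let the clock visibly advance */
        (void) write(fd, "x", 1);              /* one byte, like a wakeup trigger */
        stat(path, &after);

        printf("mtime before: %ld  after: %ld\n",
               (long) before.st_mtime, (long) after.st_mtime);
        close(fd);
        unlink(path);
        return 0;
    }

Every one of those mtime bumps is an inode change the filesystem is obliged to persist, which is exactly the write that keeps a parked disk spinning back up.)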

> Addressing the 2nd, within a few years all new laptops will ship with
> SSD instead of SRD specifically to address battery run time issues.
> Many are shipping now with SSDs.  All netbooks already do; smart
> phones use other flash types.

This is not about laptops. Really.

Systems can live a long time without drive replacements. Spinning rust with power management firmware is not going to be rare in running systems until at least five years after dependable, fast SSDs hit $1/GB for devices larger than 100GB. Of course, those drives may die a lot faster where applications do periodic pointless writes that keep them spinning continuously.

Note that the reason this issue exists *AT ALL* is to work around a bug in Solaris 2.4. I have spent most of the last 14 years working on Solaris systems in change-averse places, and the last time I saw Solaris 2.4 was in 1999. I don't have the details of the bug, or the free time to rig up a test system to prove it gone in whatever versions Postfix needs to work on today, but I have no gripe with that relatively ancient and *likely* inoperative history being the blocking issue. I hope someone else can settle it. An argument that time will soon make this fix pointless is a bit ironic.


>> Whether it is actually worthwhile to make a change that is only
>> significant for people who are barely using Postfix isn't a judgment I
>> can make. It's obvious that Dr. Venema takes significantly more care
>> with his code than I can really relate to, so I don't really know what
>> effort a conceptually small change in Postfix really entails.

> Wietse will make his own decisions as he always has.

> I'm simply making the point that issues such as power/cooling,
> wake/sleep, etc. should be addressed at the hardware platform/OS level,
> or the system or network architecture level, not at the application
> level, especially if the effort to implement it is more than trivial.

See his discussion of the details. The code exists; what remains is the harder work of testing and getting all the defaults right.


P.S.: Note that I have respected your Reply-To header. Please return that courtesy.
