Robin,
> I was always taught that if a number is below zero, you put a zero before the decimal point. This is just to avoid any confusion about where the decimal point is.
If the number is less than zero, you use a minus sign (-) to designate it. The convention, for metric values (and certain academic applications), is to prefix values between -1 and +1 with a zero. This does not apply to inch-based values, and in the mechanical engineering world it serves to differentiate which units are in use.
> I was also taught that you use as many decimal places as your accuracy warrants. This prevents confusion within calculations and makes the accuracy of your data known.
The ISO and ANSI specifications in engineering disallow "trailing zeros" on metric values. The standards call for tolerances and allowances to be established independently of the numeric value for metric usage. The convention (dating to 1915) for inch-based dimensions is to allow "general tolerances" to be assigned based on decimal places used in a dimension.
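As a side note, if you ever generate such values in a script, fixed-precision formatting produces the inch-style form directly. A minimal Python sketch (the function name is my own, not from any standard):

```python
# Format an inch dimension to a fixed number of decimal places,
# so the trailing zeros survive and advertise the working accuracy.
def inch_dim(value, places=3):
    """Render e.g. 3 -> '3.000' at three-decimal-place accuracy."""
    return f"{value:.{places}f}"

print(inch_dim(3))       # 3.000
print(inch_dim(2.5, 2))  # 2.50
```

This is the opposite of what the metric convention wants, which is exactly why a single default number format cannot serve both systems.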
> If you have three-decimal-place accuracy, then you have to use three decimal places even if the number reads 3.000, so anyone doing further calculations knows they can work to at least three-decimal-place accuracy.
You are speaking in terms of general, conventional mathematics. Engineering has different definitions.
> I have not read "ASME/ANSI Y14.5 - Geometric Dimensioning and Tolerancing" (it isn't free), but a search shows that it covers engineering drawings. What I found on Wikipedia helps me understand your comments. It does state that tolerances also need to be indicated. Do you want the tolerances as well?
Unless you do engineering drawing (drafting), my reference is going to be of little value. The generalized convention (i.e. for most industries, though exceptions exist) calls for an inch-based dimension given to three decimal places to have a ±.005 (inch) tolerance, a dimension given to two decimal places to have a ±.01 (inch) tolerance, and a dimension given to a single decimal place to have a ±.05 (inch) tolerance. The equivalent convention for metric dimensions is to apply "flag note" standard tolerances to dimensions. Tolerances varying from the nominal standard must be applied individually.
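That decimal-places convention is mechanical enough to sketch in a few lines of Python. The table and function name below are my own illustration of the ± values just quoted, not anything from a standards document, and real shops vary:

```python
# General inch tolerances implied by the number of decimal places
# in a dimension, per the informal convention described above
# (illustrative only; industries differ and exceptions exist).
GENERAL_TOL = {1: 0.05, 2: 0.01, 3: 0.005}

def implied_tolerance(dim_text):
    """'.250' -> 0.005, '1.50' -> 0.01, '2.5' -> 0.05"""
    places = len(dim_text.split(".")[1]) if "." in dim_text else 0
    return GENERAL_TOL.get(places)

print(implied_tolerance(".250"))  # 0.005
```

Note that the tolerance is read off the *text* of the dimension, not its numeric value, which is why trailing zeros carry meaning on an inch drawing.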

The reason this cropped up for me is that I just wrote a tutorial on the differences between inch-based and metric-based gearing. I spent more than three hours "adjusting" values in the metric tables as I could find no format for a "leading zeros with no trailing zeros" setting. I don't normally worry about this in spreadsheet operations as they rarely become "public" and I know what I mean when doing calculations that way. It's tables in documents that get me slapped on the wrist if I don't follow the conventions.
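For the record, the "leading zero, no trailing zeros" form the metric tables needed is easy enough to produce in code, even though I could not coax it out of a spreadsheet number format. A rough Python sketch (my own helper, assuming three decimal places is the working precision):

```python
# "Leading zero, no trailing zeros" metric formatting: round to the
# working precision, then drop trailing zeros and a dangling point.
def metric_dim(value, max_places=3):
    text = f"{value:.{max_places}f}".rstrip("0").rstrip(".")
    return text if text not in ("", "-") else "0"

print(metric_dim(0.500))   # 0.5
print(metric_dim(12.000))  # 12
print(metric_dim(0.125))   # 0.125
```

The leading zero comes for free from the float formatting; the stripping removes exactly the trailing zeros that the metric convention forbids.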

While I admit that this is a fairly obscure application, I just watched a fairly large American company lose a NATO development contract because they applied the inch-based convention of using trailing zeros to define tolerances to a metric drawing. Most people will never see such a persnickety application of standards, but such applications do exist.

  Lew


---------------------------------------------------------------------
To unsubscribe, e-mail: [email protected]
For additional commands, e-mail: [email protected]
