u99127 commented on a change in pull request #6:
URL: https://github.com/apache/tvm-rfcs/pull/6#discussion_r677863248



##########
File path: rfcs/0001-AMP_pass.md
##########
@@ -0,0 +1,137 @@
+- Feature Name: Automatic Mixed Precision Pass
+- Start Date: 2021-06-08 
+- RFC PR: TODO
+- GitHub Issue: TODO
+
+# Summary
+[summary]: #summary
+
+Many pieces of hardware support operations not only on 32 bit floating point, but also on 16 bit floating point.
+These 16 bit operations typically have higher theoretical throughput and use less memory bandwidth.
+As a result, we can see significant speedups from replacing normal 32 bit operations with their 16 bit analogs.
+Surprisingly, for many operations this has little effect on the results, though some care must be taken when changing
+operations. Some 16 bit floating point operations, such as `exp` and `log`, are considered less safe
+due to loss of [numerical precision](https://on-demand.gputechconf.com/gtcdc/2019/pdf/dc91247-automatic-mixed-precision-in-tensorflow.pdf).
+In general, for a function `f`, if `|f(x)| >> |x|` over the expected range of inputs, we probably do not want to use
+the 16 bit floating point version.
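+
+To make this concrete, here is a small illustrative check (using NumPy's `float16` purely as an example; this is not part of the pass itself):
+
+```python
+import numpy as np
+
+# exp grows much faster than its input: exp(12) ~ 162755, which exceeds
+# FP16's maximum finite value of 65504, so the FP16 result overflows to inf.
+x = np.float16(12.0)
+print(np.exp(x))               # inf (overflow in FP16)
+print(np.exp(np.float32(x)))   # ~162754.79 in FP32
+```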
+
+This feature will be a Relay pass which automatically converts a 32 bit floating point model into a reduced bit
+floating point analog. The initial pass will target IEEE's 16 bit floating point, though future support
+for bfloat16 should be kept in mind.
+
+# Motivation
+[motivation]: #motivation
+
+Many machine learning models can move significant portions of their computational graphs into the FP16 space
+without significant loss of accuracy. For many pieces of hardware this also comes with a boost in speed. In
+the past, utilizing FP16 in mixed precision training saw significant [increases in convergence speed](https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/).
+
+We should expect similar increases for inference. This speed increase without accuracy loss is highly desirable
+for many users.
+
+# Guide-level explanation
+[guide-level-explanation]: #guide-level-explanation
+
+Operations are partitioned into colors denoted "Green", "Red", and "Gray", which represent the benefit
+of using a reduced floating point version of the operation. "Green" operations are compute intensive
+and almost always see hardware memory and latency savings by utilizing a reduced floating point form.
+Examples of these operations are matrix multiplies and convolutions. "Gray" operations see little to
+no savings in using reduced floating point forms -- at least not enough to justify the overhead of
+casting values back and forth from FP32. "Red" operations meanwhile are operations we do not want to
+use reduced floating point forms on, usually due to numerical precision reasons.
+
+In general we always want to insert casts into reduced floating point space for "Green" operations,
+are fine with transforming "Gray" operations into reduced floating point space if their inputs are already
+in that form, and want to explicitly cast back into full floating point space for "Red" operations.
+Each operation will be placed into one of these lists via a "coloring" function which takes in a Relay `CallNode`
+and returns a color. For example, we might have a function which colors a convolution as "Green" only if it
+has a large enough kernel and "Gray" otherwise. For the default implementation, however, we will keep things simple
+and do something like place all convolutions in the "Green" list, all element-wise operations in
+the "Gray" list, and so on. Still, the code will be designed to be easily extensible by overriding
+this "coloring" function.
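+
+As a rough sketch (the operation names and function signature below are illustrative only, not a finalized API), the default coloring function might look something like:
+
+```python
+# Illustrative sketch of an overridable "coloring" function.
+# The real pass would operate on Relay CallNodes; this only shows the shape of the logic.
+GREEN, GRAY, RED = "Green", "Gray", "Red"
+
+def default_op_color(call_node):
+    """Assign a mixed precision color to a call node (illustrative only)."""
+    op_name = call_node.op.name
+    if op_name in ("nn.conv2d", "nn.dense"):
+        return GREEN  # compute intensive: almost always worth casting to reduced precision
+    if op_name in ("exp", "log", "nn.softmax"):
+        return RED    # numerically sensitive: keep in full precision
+    return GRAY       # everything else: convert only if inputs are already in reduced precision
+```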
+
+The final variable we must keep in mind is that some hardware platforms, while able to operate on reduced
+floating point types, may not accumulate in them. For example, they may take two FP16 operands but accumulate the
+result in a 32 bit buffer. Examples of this are the Tensor Cores in Nvidia's Turing architecture.
+The final knob we give is control over how operations accumulate their results. For this, we have
+a function which maps operation types like `conv2d` to an accumulation datatype as well as an output
+datatype. The output datatype is the type that operations down the line will likely ingest from the previous
+calculation, while the accumulation datatype describes the precision of the buffer in which the results are
+initially accumulated. For Nvidia's Tensor Cores, for example, many operations accumulate in FP32 but have an
+output datatype of FP16. The default implementation will follow this guideline closely and will by default have
+all operations output FP16, accumulating in FP32 only if TVM supports mixed datatypes for that particular
+operation.
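+
+A rough sketch of this hook (names and defaults are illustrative only, not a finalized API):
+
+```python
+# Illustrative sketch of the accumulation/output datatype hook.
+# Maps an operation name to an (accumulation_dtype, output_dtype) pair.
+def default_op_dtypes(op_name, mixed_precision_type="float16"):
+    """Return (accumulation_dtype, output_dtype) for an operation (illustrative only)."""
+    if op_name in ("nn.conv2d", "nn.dense"):
+        # Mirror Tensor Core behaviour: accumulate in FP32, hand FP16 to downstream ops.
+        return "float32", mixed_precision_type
+    # Everything else accumulates and outputs in the reduced precision type.
+    return mixed_precision_type, mixed_precision_type
+```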
+
+# Reference-level explanation
+[reference-level-explanation]: #reference-level-explanation

Review comment:
       The M6G, for instance, also has FP16 instructions from the AArch64 ISA, including Advanced SIMD fixed length 128 bit vectors. Perhaps also investigate the suitability of the representation to support more than one backend with this? I also expect that with SVE (Scalable Vector Extensions) you'd see FP16 instructions there as well, but that's a different kettle of fish.
   
   I also expect that in the uTVM case it would be interesting at a later date to look at the MVE instruction set, with FP16 being present there as well. I'd like to look at whether there is a multiply-and-accumulate instruction there that matches with an FP32 accumulator. Offhand I'm not sure about the answer to that question.
   
   Just something to consider.
   
   My 2 cents
   
   Ramana




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
