Equality Saturation for Tensor Graph Superoptimization

MLSys 2021, January 2021
BibTeX
@inproceedings{yang2021equality,
  title={Equality Saturation for Tensor Graph Superoptimization},
  author={Yichen Yang and Phitchaya Mangpo Phothilimthana and Yisu Remy Wang and Max Willsey and Sudip Roy and Jacques Pienaar},
  eprint={2101.01332},
  booktitle={Proceedings of Machine Learning and Systems},
  year={2021}
}
Benchmark       Search Time (sec)      Runtime Speedup (%)
                TASO      Tensat      TASO      Tensat
BERT            13.6      1.4         8.5       9.2
ResNeXt-50      25.3      0.7         5.5       8.8
NasNet-A        1226      10.6        1.9       7.3
NasRNN          177.3     0.5         45.4      68.9
Inception-v3    68.6      5.1         6.3       10.0
SqueezeNet      16.4      0.3         6.7       24.5
VGG-19          8.9       0.4         8.9       8.9
Compared to a previous state-of-the-art tool (TASO, Jia et al. 2019), Tensat can find up to 16% better optimizations while spending, on average, 48x less time searching.

Abstract

One of the major optimizations employed in deep learning frameworks is graph rewriting. Production frameworks rely on heuristics to decide whether rewrite rules should be applied and in which order. Prior research has shown that searching for a better sequence of substitutions, rather than relying on heuristics, can yield better tensor computation graphs. However, we observe that existing approaches to tensor graph superoptimization, in both production and research frameworks, apply substitutions sequentially. Such sequential search methods are sensitive to the order in which substitutions are applied and often explore only a small fragment of the exponential space of equivalent graphs. This paper presents a novel technique for tensor graph superoptimization that employs equality saturation to apply all possible substitutions at once. We show that our approach can find optimized graphs with up to 16% speedup over state-of-the-art, while spending on average 48x less time optimizing.
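
To make the approach concrete, below is a minimal sketch of equality saturation using egg, the Rust e-graph library that Tensat builds on (assuming egg 0.9 as a dependency). The toy operators ewadd and ewmul, the rewrite rules, and the AstSize cost function here are illustrative stand-ins, not Tensat's actual operator language or cost model. The key point is that every rule fires non-destructively into a shared e-graph, so no rule ordering is ever committed to; an optimized graph is extracted only afterwards.

use egg::{rewrite as rw, *};

// A tiny stand-in language for tensor expressions (hypothetical;
// Tensat's real language covers matmul, conv, concat, etc.).
define_language! {
    enum TensorLang {
        "ewadd" = Add([Id; 2]),  // elementwise add
        "ewmul" = Mul([Id; 2]),  // elementwise multiply
        Symbol(Symbol),          // leaf tensors
    }
}

fn main() {
    // Rewrite rules: each match adds new nodes to the e-graph
    // instead of replacing the old graph, so rules never conflict.
    let rules: &[Rewrite<TensorLang, ()>] = &[
        rw!("add-comm"; "(ewadd ?a ?b)" => "(ewadd ?b ?a)"),
        rw!("mul-comm"; "(ewmul ?a ?b)" => "(ewmul ?b ?a)"),
        rw!("distribute"; "(ewmul ?a (ewadd ?b ?c))"
            => "(ewadd (ewmul ?a ?b) (ewmul ?a ?c))"),
        rw!("factor"; "(ewadd (ewmul ?a ?b) (ewmul ?a ?c))"
            => "(ewmul ?a (ewadd ?b ?c))"),
    ];

    let expr: RecExpr<TensorLang> =
        "(ewadd (ewmul x y) (ewmul x z))".parse().unwrap();

    // Equality saturation: apply all rules until fixpoint
    // (or until a node/iteration limit is hit).
    let runner = Runner::default().with_expr(&expr).run(rules);

    // Extract the cheapest equivalent expression. AstSize is a
    // stand-in cost function; Tensat extracts with a cost model
    // over measured operator runtimes.
    let extractor = Extractor::new(&runner.egraph, AstSize);
    let (best_cost, best) = extractor.find_best(runner.roots[0]);
    println!("best (cost {}): {}", best_cost, best);
}

Running this recovers the factored form (ewmul x (ewadd y z)), even though the greedy order in which rules matched never had to be chosen. Tensat applies the same saturate-then-extract loop to full tensor operator graphs.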