
FP16

Reading time : ~1 min

by Subhaditya Mukherjee

Let's look at how we can increase training efficiency using FP16 (half-precision floating point).
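As a quick, hypothetical illustration of the core problem (this snippet is a sketch, not from the post's resources): FP16 can only represent magnitudes down to roughly 6e-8, so anything smaller, such as very small gradient values, silently underflows to zero.

```python
import torch

x = torch.tensor(1e-8)
print(x.float())  # tensor(1.0000e-08) - representable in FP32
print(x.half())   # tensor(0., dtype=torch.float16) - underflows in FP16
```

This is why mixed precision training keeps an FP32 master copy of the weights and scales the loss before the backward pass.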

When do we face an issue?

Why do we really care?

What are we doing?

Simple implementation
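Here is a minimal sketch of what the simple implementation could look like with torch.cuda.amp, the PyTorch automatic mixed precision API linked in the resources below; the model, data, and hyperparameters are placeholders, so treat it as an illustration rather than a drop-in recipe.

```python
import torch
from torch import nn
from torch.cuda.amp import autocast, GradScaler

# Placeholder model, optimizer, and loss; swap in your own.
model = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# GradScaler rescales the loss so small FP16 gradients do not underflow to zero.
scaler = GradScaler()

for step in range(100):
    # Placeholder batch of random data.
    x = torch.randn(32, 128, device="cuda")
    y = torch.randint(0, 10, (32,), device="cuda")

    optimizer.zero_grad()

    # autocast runs the forward pass in FP16 where it is safe,
    # keeping numerically sensitive ops in FP32.
    with autocast():
        loss = loss_fn(model(x), y)

    # Scale the loss, backprop, then unscale and step.
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```

autocast decides per-op whether to run in FP16 or FP32, while GradScaler multiplies the loss up before backward and unscales the gradients before the optimizer step, so small gradients survive the FP16 round trip.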


Resources

PyTorch blog, accelerating training on NVIDIA GPUs with automatic mixed precision: https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/

NVIDIA blog, Apex for easy mixed precision training in PyTorch: https://developer.nvidia.com/blog/apex-pytorch-easy-mixed-precision-training/

PyTorch Lightning docs, LightningModule training: https://pytorch-lightning.readthedocs.io/en/latest/common/lightning_module.html#training

Mixed Precision Training (Micikevicius et al., 2017): https://arxiv.org/pdf/1710.03740
