
Activation functions

Reading time : ~4 mins

by Subhaditya Mukherjee

Implementing activation functions.

Activation functions are an extremely important part of any neural network, but they are actually much simpler than we make them out to be. Here are implementations of some common ones. Let's first define a test matrix with large, small, zero, and negative values.

# 2×4 test matrix covering large, small, zero, and negative inputs
test = [100.0 1.0 0.0 -300.0; 100.0 1.0 0.0 300.0]

Relu

relu(mat) = max.(0, mat)  # keep positive entries, zero out the rest
relu(test)

Leaky relu

lrelu(x) = max.(0.01x, x)  # leaky ReLU: small slope of 0.01 for negative inputs
lrelu(test)

PRelu

prelu(x, a) = max.(x, x .* a)  # parametric ReLU: slope a for negative inputs
prelu(test, 0.10)

Maxout

# As written this is a two-piece maxout over x and a*x, which coincides with PReLU;
# general maxout takes the elementwise max over several learned affine maps.
maxout(x, a) = max.(x, x .* a)
maxout(test, 0.10)
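
The one-liner above coincides with PReLU because both pieces are built from the same input x. As a rough sketch of what maxout looks like with separate pieces, here is a hypothetical two-piece version; the weights and biases are made-up illustration values, not anything from this post.

# Hypothetical two-piece maxout: elementwise max over two affine transforms.
maxout2(x, w1, b1, w2, b2) = max.(w1 .* x .+ b1, w2 .* x .+ b2)
maxout2(test, 1.0, 0.0, 0.5, 1.0)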

Sigmoid

σ(x) = 1 ./ (1 .+ exp.(-x))  # logistic sigmoid, squashes inputs to (0, 1)
σ(test)

Noisy Relu

using Distributions
# ReLU with Gaussian noise added independently to every element
noisyrelu(x) = max.(0, x .+ rand(Distributions.Normal(), size(x)))
noisyrelu(test)
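
Since noisyrelu draws fresh Gaussian samples on every call, its output changes between runs. A minimal sketch for making it reproducible, assuming the standard library Random module:

using Random
Random.seed!(42)  # fix the global RNG so repeated calls give the same noise
noisyrelu(test)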

Softplus

softplus(x) = log.(exp.(x) .+ 1)  # smooth approximation to ReLU
softplus(test)

Elu

# ELU: identity for positive inputs, a*(exp(x)-1) for negative ones
elu(x, a) = ifelse.(x .> 0, x, a .* (exp.(x) .- 1))
elu(test, 0.1)

Swish

swish(x, β) = x ./ (1 .+ exp.(-β .* x))  # x * sigmoid(βx)
swish(test, 0.1)
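
To see them side by side, here is a small sketch that applies most of the functions defined above to the same test matrix; the name/function pairs are only there for printing.

# Apply the activations defined above to the test matrix and print the results.
for (name, f) in [("relu", relu), ("lrelu", lrelu), ("sigmoid", σ),
                  ("softplus", softplus)]
    println(name, " => ", f(test))
end
println("prelu => ", prelu(test, 0.10))
println("elu => ", elu(test, 0.1))
println("swish => ", swish(test, 0.1))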