Figure 4: Huang, X., Kwiatkowska, M., Wang, S., & Wu, M. (2017, July). Safety verification of deep neural networks. In International conference on computer aided verification (pp. 3-29). Springer, Cham.
Given some classifier model
Iris dataset - the classifier aims to predict the type of flower from a 4-dimensional vector of
sepal length, sepal width, petal length, and petal width.
I.e. a function f : ℝ⁴ → {setosa, versicolor, virginica}; for example, f(5.1, 3.5, 1.4, 0.2) = setosa.
Figure 10: geometric complexity of class boundaries
Figure 11: sparsity/density of sampling from the data manifold that constitutes the training data.
In our method, we provide an algorithm to iteratively expand the maximum perturbation radius around each training point.
Expansion is modulated by the estimated density of the data samples, using an inverse multiquadric radial basis function (RBF) to estimate the density at a given location.
The estimated density at a location is the sum of RBFs, one centred on each data point, evaluated at that location.
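As an illustration, here is a minimal sketch of this estimate in plain Julia, assuming the standard inverse multiquadric form φ(r) = 1/√(1 + (εr)²); the helper names estimate_density and expand_radius, and the exact decay used, are illustrative rather than the package's API:

using LinearAlgebra: norm

# inverse multiquadric RBF; ε controls the width of each bump
inverse_multiquadric(r; ε = 1.0) = 1 / sqrt(1 + (ε * r)^2)

# estimated density at location x: the sum of RBFs, one centred on
# each training point (a column of `data`), evaluated at x
estimate_density(x, data; ε = 1.0) =
    sum(inverse_multiquadric(norm(x .- c); ε = ε) for c in eachcol(data))

# one expansion step: grow the radius more slowly in dense regions,
# here with an assumed exponential decay of the step size
expand_radius(r, x, data; ϵ = 1e-7, ε = 1.0) =
    r + ϵ * exp(-estimate_density(x, data; ε = ε))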
Morgan, J., Paiement, A., Pauly, A., & Seisenberger, M. (2021). Adaptive neighbourhoods for the discovery of adversarial examples. arXiv preprint arXiv:2101.09108.
https://github.com/jaypmorgan/NeuralVerifier.jl - built on top of the Z3 solver to provide an interface for verifying neural network properties, such as output bounds checking and adversarial robustness.
Take a very simple example of a 3-layer neural network:
ŷ = softmax(W₃ relu(W₂ relu(W₁ x + b₁) + b₂) + b₃)
where Wᵢ and bᵢ are the weight matrix and bias vector of layer i.
Z3 supports real linear arithmetic, with operations for basic multiplication and addition. Thus, to encode a dense layer, we need only apply these elementwise.
function dense(x, W, b)
    # accumulate W * x term by term: Z3 expressions support
    # only scalar multiplication and addition
    out = fill!(Array{Any}(undef, size(W, 1)), 0)
    for i = 1:size(W, 1), j = 1:size(W, 2)
        out[i] += W[i, j] * x[j]
    end
    return out .+ b  # add the bias vector
end
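As a quick sanity check, the same function can be run on plain floats, where it should agree with the usual affine map W * x .+ b:

W = [1.0 2.0; 3.0 4.0]
b = [0.5, -0.5]
x = [1.0, 2.0]
dense(x, W, b)  # [5.5, 10.5], i.e. W * x .+ b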
Moving on to non-linear functions, we must consider how such non-linearities are encoded in the model. For some activation functions this can be as simple as boolean If-then-else logic; ReLU, for instance, becomes:
If(x > 0, x, 0)
The sigmoid, by contrast, has no exact encoding in Z3's real arithmetic, so here it is replaced by a coarse piecewise-constant approximation:

function sigmoid(x)
    # piecewise-constant approximation of the sigmoid:
    # 0.0 below -2, 0.4 on [-2, 0), 0.6 on [0, 2], 1.0 above 2
    If(x < 0,
       If(x < -2, 0.0, 0.4),
       If(x > 2, 1.0, 0.6))
end
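A finer-grained approximation is obtained simply by nesting more If terms, at the cost of a larger formula for the solver.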
function relu(x)
    # ReLU encoded directly as an If-then-else term
    If(x > 0, x, 0)
end
Composing the two encodes one full layer:

y = relu.(dense(x, W, b))  # broadcast relu over each output element
encoding(x) = begin
    # hidden layers: affine map followed by elementwise ReLU
    y = relu.(dense(x, neural_network[1].W, neural_network[1].b))
    y = relu.(dense(y, neural_network[2].W, neural_network[2].b))
    # output layer
    y = dense(y, neural_network[3].W, neural_network[3].b) |> softmax
end
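Note that encoding returns a vector of symbolic Z3 terms, one per output class, built entirely from the multiplications, additions, and If expressions above; any assertion over this vector can therefore be handed straight to the solver.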
for (idx, (x_i, r_i)) in enumerate(zip(x, r))
    m = Optimize()                    # create an optimisation procedure (model)
    add!(m, (eps > 0) ∧ (eps <= r_i)) # bound eps by this point's expanded radius
    y = encoding(x_i)                 # symbolic output of the network around x_i
    add!(m, y != f(x_i))              # adversarial condition: the prediction changes
    minimize!(m, eps)                 # find the smallest such eps
    check(m)                          # check for satisfiability
    m.is_sat == "sat" && @info "#$(idx): Adversarial found!"
end
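If the solver reports sat, it has found a perturbation budget eps within the point's radius under which the encoded network changes its prediction, i.e. an adversarial example; unsat means the prediction is provably stable throughout that neighbourhood.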
r = epsilon_expand(x_train', y_train;
                   ϵ = 1e-7,                    # the initial step size
                   ε = 1.0,                     # RBF width parameter
                   func = inverse_multiquadric, # RBF function to use
                   decay = exponential_decay)   # decay function based on density
stable_area, adv_examples = stable_region(Optimize, f, x_train', r;
                                          timeout = 100,
                                          build_fn = encoding)
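Reading from the names, stable_area should collect the neighbourhoods that were verified within the timeout, while adv_examples holds the counterexamples discovered along the way.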
Running on the MNIST dataset:
[ Info: #1: Adversarial found!
[ Info: #3: Adversarial found!
[ Info: #4: Adversarial found!
[ Info: #5: Adversarial found!
[ Info: #7: Adversarial found!
...
You can find these slides on my personal website below. Additionally, follow the GitHub link for more documentation and usage examples for NeuralVerifier.jl.