On the Depth of Monotone ReLU Neural Networks and ICNNs

We exploit a beautiful correspondence between ReLU-activated neural networks and polytopes to prove lower bounds on the depth required by a monotone neural network that exactly computes the maximum of $n$ inputs. We also prove depth separations between ReLU networks and ICNNs: for every $k$ there exists a depth-2 ReLU network of size $O(k^2)$ that cannot be simulated by a depth-$k$ ICNN.
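For intuition (this example is not from the abstract), a minimal sketch of the classical depth-2 ReLU identity for the maximum of two inputs, assuming the usual definition of a monotone network as one with nonnegative weights:

```latex
% Classical depth-2 ReLU construction of the max of two inputs.
\[
  \max(x_1, x_2) \;=\; x_2 + \mathrm{ReLU}(x_1 - x_2),
  \qquad \mathrm{ReLU}(t) = \max(t, 0).
\]
% The hidden unit ReLU(x_1 - x_2) places a negative weight on x_2,
% which a monotone network (all weights nonnegative) cannot use --
% hence computing max exactly is a natural hard case for depth
% lower bounds in the monotone setting.
```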
