We have analyzed the vanishing-gradient problem in deep learning and potential remedies here. Congratulations, Leni! 🏄
We have composed an overview of activation functions in artificial neural networks here.
We analyze the optimization landscapes of feedforward neural networks here. In particular, we show that the landscapes of wide networks do not have spurious local minima.
We have discussed the role of statistics in artificial intelligence here.
We have established risk bounds for robust deep learning here.
We have put a new paper on layer sparsity in neural networks on arXiv.
We have now put our paper “A pipeline for variable selection and false discovery rate control with an application in labor economics” on arXiv. The paper will be part of the Annual Congress of the Swiss Society of Economics and Statistics in 2021. Congratulations, Sophie-Charlotte!
We have derived statistical guarantees for deep learning here. Well done, Mahsa and Fang! ⚡⚡⚡
We have established a new strategy for calibrating the graphical lasso here. Great work, Mike! 🍷