Publications

Software

Links to software directly related to a paper are given below. See also our group repository LedererLab@GitHub.

Preprints

Affine invariance in continuous-domain convolutional neural networks
Joint work with A. Mohaddes
preprint
[arxiv]

Keywords: CNNs; symmetries

Extremes in high dimensions: methods and scalable algorithms
Joint work with M. Oesting
preprint
[arxiv]

Keywords: Hüsler-Reiss model; score matching; graphical models

Set-membership inference attacks using data watermarking
Joint work with A. Fischer, M. Laszkiewicz, and D. Lukovnikov
preprint
[arxiv]

Keywords: deep learning; generative models; watermarking

Lag selection and estimation of stable parameters for multiple autoregressive processes through convex programming
Joint work with S. Chakraborty and R. von Sachs
preprint
[arxiv]

Keywords: time series; stability; stationarity

The DeepCAR method: forecasting time-series data that have change points
Joint work with A. Jungbluth
preprint
[arxiv]

Keywords: time series; DeepAR; transformer

Reducing computational and statistical complexity in machine learning through cardinality sparsity
Joint work with A. Mohades
preprint
[arxiv]

Keywords: neural networks; tensor regression; optimization

Statistical guarantees for approximate stationary points of simple neural networks
Joint work with M. Taheri and F. Xie
preprint
[arxiv]

Keywords: neural networks; statistical guarantees; optimization

VC-PCR: a prediction method based on supervised variable selection and clustering
Joint work with B. Govaerts, R. Marion, and R. von Sachs
preprint
[arxiv]

Keywords: variable clustering; dimension reduction; transcriptomics

Regularization and reparameterization avoid vanishing gradients in sigmoid-type networks
Joint work with L. Ven
preprint
[arxiv]

Keywords: deep learning; non-convex optimization

Optimization landscapes of wide deep neural networks are benign
preprint
[arxiv]

Keywords: neural networks; deep learning; non-convex optimization

Risk bounds for robust deep learning
preprint
[arxiv]

Keywords: neural networks; robust deep learning; risk bounds

Layer sparsity in neural networks
Joint work with M. Hebiri
preprint
[arxiv]

Keywords: neural networks; deep learning; sparsity

Published / In Press

Single-model attribution via final-layer inversion
Joint work with A. Fischer, M. Laszkiewicz, and J. Ricker
ICML, 2024
[arxiv]

Keywords: deep learning; generative models; diffusion models

Ayla Jungbluth and Johannes Lederer’s contribution to the discussion of the Discussion Meeting on ‘Probabilistic and Statistical Aspects of Machine Learning’
Joint work with A. Jungbluth
J. R. Stat. Soc. Ser. B Stat. Methodol. 86(2), 2024, pages 322–326
[journal]

Keywords: time series; deep learning; change-point detection

Benchmarking the fairness of image upsampling methods
Joint work with I. Daunhawer, A. Fischer, M. Laszkiewicz, and J. Vogt
ACM FAccT, 2024
[arxiv]

Keywords: computer vision; fairness; conditional generative models

Targeted deep learning: framework, methods, and applications
Joint work with S.-T. Huang
STAT 12(1), 2023, e556
[arxiv]

Keywords: neural networks; deep learning; targeted deep learning

Statistical guarantees for sparse deep learning
AStA Adv. Stat. Anal., 2023
[arxiv]

Keywords: neural networks; statistical guarantees; deep learning

Balancing statistical and computational precision: a general theory and applications to sparse regression
Joint work with N. Lim and M. Taheri
IEEE Trans. Inform. Theory 69(1), 2023, pages 316–333
[arxiv] | [software]

Keywords: convex optimization; big data

DeepMoM: robust deep learning with median-of-means
Joint work with S.-T. Huang
J. Comput. Graph. Statist. 32(1), 2023, pages 181–195
[arxiv]

Keywords: neural networks; median-of-means; robust deep learning

Marginal tail-adaptive normalizing flows
Joint work with A. Fischer and M. Laszkiewicz
ICML, 2022
[arxiv]

Keywords: heavy-tails; normalizing flows

Depth normalization of small RNA sequencing: using data and biology to select a suitable method
Joint work with Y. Düren and L.-X. Qin
Nucleic Acids Res. 50(10), 2022, e56
[arxiv]

Keywords: transcriptomics; biostatistics; graphical models

Is there a role for statistics in artificial intelligence?
Joint work with S. Friedrich and others
Adv. Data Anal. Classif. 16, 2022, pages 823–846
[arxiv]

Keywords: statistics; artificial intelligence; review

Integrating additional knowledge into the estimation of graphical models
Joint work with Y. Bu
Int. J. Biostat. 18(1), 2022, pages 1–17
[arxiv] | [data and software]

Keywords: brain connectivity networks; reproducible graph estimation

Topology adaptive graph estimation in high dimensions
Joint work with C. Müller
Mathematics 10(8):1244, 2022
[arxiv]

Keywords: high-dimensional networks; Gaussian graphical models

Estimating the lasso’s effective noise
Joint work with M. Vogt
J. Mach. Learn. Res. 22(276), 2021, pages 1–32
[arxiv]

Keywords: lasso; tuning-parameter calibration; inference

Copula-based normalizing flows
Joint work with A. Fischer and M. Laszkiewicz
ICML workshop on Invertible Neural Networks, Normalizing Flows, and Explicit Likelihood Models, 2021
[arxiv]

Keywords: heavy-tails; normalizing flows

Tuning parameter calibration for prediction in personalized medicine
Joint work with Y. Düren, S.-T. Huang, and K. Hellton
Electron. J. Stat. 15(2), 2021, pages 5310–5332
[arxiv] | [software]

Keywords: precision medicine; ridge regression; tuning-parameter calibration

Statistical guarantees for regularized neural networks
Joint work with M. Taheri and F. Xie
Neural Networks 142, 2021, pages 148–161
[arxiv]

Keywords: neural networks; deep learning; oracle inequalities

Tuning-free ridge estimators for high-dimensional generalized linear models
Joint work with S.-T. Huang and F. Xie
Comput. Statist. Data Anal. 159, 2021
[arxiv] | [software]

Keywords: generalized linear models; ridge regression; tuning-parameter calibration

False discovery rates in biological networks
Joint work with L. Yu and T. Kaufmann
AISTATS, 2021
[arxiv] | [software]

Keywords: biological systems; FDR control; Gaussian graphical models

Thresholded adaptive validation: tuning the graphical lasso for graph recovery
Joint work with M. Laszkiewicz and A. Fischer
AISTATS, 2021
[arxiv]

Keywords: graphical models; tuning-parameter calibration; biological applications

A pipeline for variable selection and false discovery rate control with an application in labor economics
Joint work with S.-C. Klose
Accepted at the Annual Congress of the Swiss Society of Economics and Statistics, 2021
[arxiv]

Keywords: economics; high-dimensional inference; FDR control

Aggregating knockoffs for false discovery rate control with an application to gut microbiome data
Joint work with F. Xie
Entropy 23(2):230, 2021
[arxiv] | [software]

Keywords: FDR control; multiple testing; variable selection

Activation functions in artificial neural networks: a systematic overview
Technical report, 2021
[arxiv]

Keywords: neural networks; deep learning

Inference for high-dimensional nested regression
Joint work with D. Gold and J. Tao
J. Econometrics 217(1), 2020, pages 79–111
[arxiv] | [software]

Keywords: instrumental variables; lasso; one-step correction; high-dimensional inference

Tuning parameter calibration for l1-regularized logistic regressions
Joint work with W. Li
J. Statist. Plann. Inference 202, 2019, pages 80–98
[arxiv]

Keywords: tuning parameters; high-dimensional logistic regression

Prediction error bounds for linear regression with the TREX
Joint work with J. Bien, I. Gaynanova, and C. Müller
TEST 28(2), 2019, pages 451–474
[arxiv]

Keywords: tuning parameters; oracle inequalities

Oracle inequalities for high-dimensional prediction
Joint work with L. Yu and I. Gaynanova
Bernoulli 25(2), 2019, pages 1225–1255
[arxiv]

Keywords: high-dimensional prediction; correlations

Graphical models for discrete and continuous data
Joint work with R. Zhuang and N. Simon
Technical report, 2019
[arxiv] | [software]

Keywords: graphical models; exponential families

Maximum regularized likelihood estimators: a general prediction theory and applications
Joint work with R. Zhuang
STAT 7(1), 2018, e186
[arxiv]

Keywords: oracle inequalities; maximum regularized likelihood

Non-convex global minimization and false discovery rate control for the TREX
Joint work with J. Bien, I. Gaynanova, and C. Müller
Short version presented at an ICML workshop; long version in J. Comput. Graph. Statist. 27(1), 2018, pages 23–33
[arxiv] | [software and data examples]

Keywords: non-convex optimization; feature selection; tuning parameters

Optimal two-step prediction in regression
Joint work with D. Chételat and J. Salmon
Electron. J. Stat. 11(1), 2017, pages 2519–2546
[arxiv] | [software]

Keywords: tuning parameters; high-dimensional prediction; refitting

On the prediction performance of the lasso
Joint work with A. Dalalyan and M. Hebiri
Bernoulli 23(1), 2017, pages 552–581
[arxiv]

Keywords: high-dimensional prediction; correlations

A practical scheme and fast algorithm to tune the lasso with optimality guarantees
Joint work with M. Chichignoud and M. Wainwright
J. Mach. Learn. Res. 17(229), 2016, pages 1–20
[arxiv]

Keywords: tuning parameters; high-dimensional statistics; oracle inequalities

Trust, but verify: benefits and pitfalls of least-squares refitting in high dimensions
Technical report, 2016
[arxiv]

Keywords: high-dimensional regression; refitting

Compute less to get more: using ORC to improve sparse filtering
Joint work with S. Guadarrama
AAAI, 2015
[arxiv]

Keywords: computer vision; feature learning

Don’t fall for tuning parameters: tuning-free variable selection in high dimensions with the TREX
Joint work with C. Müller
AAAI, 2015
[arxiv]

Keywords: tuning parameters; feature selection

New concentration inequalities for suprema of empirical processes
Joint work with S. van de Geer
Bernoulli 20(4), 2014, pages 2020–2038
[arxiv]

Keywords: empirical processes; concentration inequalities

A robust, adaptive M-estimator for pointwise estimation in heteroscedastic regression
Joint work with M. Chichignoud
Bernoulli 20(3), 2014, pages 1560–1599
[arxiv]

Keywords: pointwise estimation; robust regression; adaptive regression

The group square-root lasso: theoretical properties and fast algorithms
Joint work with F. Bunea and Y. She
IEEE Trans. Inform. Theory 60(2), 2014, pages 1313–1325
[arxiv]

Keywords: tuning parameters; oracle inequalities; convex optimization

How correlations influence lasso prediction
Joint work with M. Hebiri
IEEE Trans. Inform. Theory 59(3), 2013, pages 1846–1854
[arxiv]

Keywords: high-dimensional prediction; correlations

The Lasso, correlated design, and improved oracle inequalities
Joint work with S. van de Geer
IMS Collections 9, 2013, pages 303–316
[arxiv]

Keywords: high-dimensional prediction; correlations

The Bernstein-Orlicz norm and deviation inequalities
Joint work with S. van de Geer
Probab. Theory Related Fields 157(1-2), 2013, pages 225–250
[arxiv]

Keywords: empirical processes; concentration inequalities

Nonasymptotic bounds for empirical processes and regression
Under supervision of S. van de Geer and P. Bühlmann
PhD thesis, 2012

Keywords: empirical processes; high-dimensional regression; robust statistics

Bounds for Rademacher processes via chaining
Technical report, 2010
[arxiv]

Keywords: empirical processes; chaining

Production of charged vector boson pairs in hadronic collisions
Under supervision of C. Anastasiou
Master’s thesis, 2009

Keywords: particle physics; Higgs boson