We have developed an approach for integrating additional knowledge into parameter estimation in graphical models. The main idea is to funnel the knowledge into the tuning parameters. Find the paper here. Well done, Yunqi! 😁
RRF Grant Awarded
Johannes, Jing, and David have been awarded a grant from the UW Royalty Research Fund (RRF) for their Big Data research in Econometrics. Second hit: it pays to work with Jing and David… 🍸
STAT 582 Upcoming
Important dates and further information about the course STAT 582 have been uploaded to the teaching section. Note the change of location to LOW 101.
A practical scheme for Lasso calibration
Our paper on tuning parameter calibration for the Lasso has been accepted for publication in JMLR. You can find an updated version here.
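For readers new to the topic: the Lasso's tuning parameter governs how aggressively coefficients are shrunk to zero, and calibrating it well is exactly what the paper is about. The paper proposes its own calibration scheme; as a generic point of reference only, here is the standard cross-validation approach, sketched with scikit-learn on made-up synthetic data (none of this is the method from the paper):

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic sparse regression data (illustrative only).
rng = np.random.default_rng(0)
n, p = 100, 50
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0  # only the first five features carry signal
y = X @ beta + rng.standard_normal(n)

# Standard 5-fold cross-validation selects the tuning parameter
# from an automatically generated grid.
model = LassoCV(cv=5).fit(X, y)

selected = np.flatnonzero(model.coef_)  # indices of nonzero coefficients
tuning_parameter = model.alpha_         # the calibrated tuning parameter
```

Cross-validation is accurate but requires refitting the Lasso many times; avoiding that cost is one motivation for alternative calibration schemes.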
Tuning Parameter Calibration in High-dimensional Logistic Regression With Theoretical Guarantees
We have extended the AV-scheme for tuning parameter calibration to logistic regression. Our approach provides efficient feature selection and is supported by theoretical guarantees. The paper can be downloaded here. Great job, Wei! 😎
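The object being calibrated here is the tuning parameter of ℓ1-penalized logistic regression, whose nonzero coefficients yield the selected features. The AV-scheme itself is described in the paper; as a generic illustration of the estimator it calibrates, here is an ℓ1-penalized logistic fit with the tuning parameter chosen by plain cross-validation instead, on synthetic data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegressionCV

# Synthetic sparse classification data (illustrative only).
rng = np.random.default_rng(1)
n, p = 200, 30
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 3.0  # only three active features
y = (X @ beta + rng.standard_normal(n) > 0).astype(int)

# l1-penalized logistic regression; C is the inverse tuning
# parameter, chosen here by 5-fold cross-validation.
clf = LogisticRegressionCV(Cs=10, penalty="l1", solver="liblinear", cv=5)
clf.fit(X, y)

selected = np.flatnonzero(clf.coef_.ravel())  # selected features
```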
Efficient Feature Selection With Large and High-dimensional Data
After presenting it in a couple of talks, we have now submitted our manuscript on efficient feature selection. In particular, we introduce a simple yet effective algorithm based on Lasso optimization steps. The paper is available here. Congrats to Néhémy! 🙂
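To give a flavor of what "feature selection via Lasso optimization steps" can look like (this is a generic sketch, not the algorithm from the manuscript): one can walk down the Lasso's regularization path and stop once a target number of features has entered the model. With scikit-learn's `lasso_path` on synthetic data:

```python
import numpy as np
from sklearn.linear_model import lasso_path

# Synthetic high-dimensional data: more features than samples.
rng = np.random.default_rng(2)
n, p = 150, 200
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:4] = 2.5  # four active features
y = X @ beta + rng.standard_normal(n)

# Compute the Lasso path over a decreasing grid of tuning parameters
# and stop at the first solution with at least k active features.
k = 4
alphas, coefs, _ = lasso_path(X, y, n_alphas=50)
for j, alpha in enumerate(alphas):
    active = np.flatnonzero(coefs[:, j])
    if active.size >= k:
        break  # alpha and active hold the stopping point
```

The appeal of path-based schemes is computational: consecutive solutions along the path are cheap to obtain via warm starts.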
Graphical Models for Discrete and Continuous Data
We have uploaded a paper on graphical models that introduces a general framework allowing for discrete, continuous, and mixed types of data. Find the manuscript here.
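In a graphical model, edges encode conditional dependencies between variables. The paper's framework handles mixed data types; for the purely Gaussian special case, a standard estimator is the graphical lasso, which recovers the graph as the nonzero off-diagonal entries of a sparse precision matrix. A sketch with scikit-learn on synthetic Gaussian data (a generic illustration, not the paper's method):

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

# Gaussian data drawn from a sparse precision (inverse covariance) matrix.
rng = np.random.default_rng(3)
p = 5
precision = np.eye(p)
precision[0, 1] = precision[1, 0] = 0.4  # one conditional dependence
cov = np.linalg.inv(precision)
X = rng.multivariate_normal(np.zeros(p), cov, size=500)

# Cross-validated graphical lasso estimates a sparse precision matrix;
# nonzero off-diagonal entries are the edges of the estimated graph.
model = GraphicalLassoCV().fit(X)
edges = np.abs(model.precision_) > 1e-4
```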
Funded Project “Statistical Inference in High Dimensions”
Johannes, Jing (UW Econ), and David have been awarded the grant “AWS Cloud Credits for Research,” sponsored by Amazon! Congratulations especially to David, a first-year PhD student in the Stats Department and a key researcher on the team.
Oracle Inequalities for High-dimensional Prediction
We have just uploaded a new paper on high-dimensional prediction. It provides bounds for a wide range of penalized estimators, importantly, without making assumptions on the design matrix. Check it out here.
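For context (this is the classical "slow-rate" bound for the Lasso, stated schematically up to constants, not the paper's result): prediction bounds of this kind can indeed hold for an arbitrary design matrix $X$, at the price of a slower rate.

```latex
% Slow-rate prediction bound for the Lasso (schematic, constants omitted):
% if the tuning parameter satisfies
% \lambda \gtrsim \|X^\top \varepsilon\|_\infty / n, then
\[
  \frac{1}{n}\,\bigl\|X\hat\beta - X\beta^*\bigr\|_2^2
  \;\lesssim\; \lambda\,\|\beta^*\|_1,
\]
% and this holds without any conditions on the design matrix X.
```

Faster rates typically require additional conditions on the design, such as restricted eigenvalue assumptions; bounds that avoid such conditions are what make results of this type broadly applicable.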
New Page Online
This is the first post on our brand-new homepage – hopefully many more to come.
Gina and Kimberly did all the work: thank you for the wonderful job!! 🙂