High-dimensional generalized linear models
This generalized form is an expansion, and the resulting discriminant function is not linear in x, but it is linear in y. The functions y_i(x) merely map points in …

The problem of obtaining an optimal spline with free knots is tantamount to minimizing derivatives of a nonlinear differentiable function over a Banach space on a compact …
This article proposes a bootstrap-assisted procedure to conduct simultaneous inference for high-dimensional sparse linear models based on the recent desparsified Lasso estimator. The procedure allows the dimension of the parameter vector of interest to be exponentially larger than the sample size, and it automatically accounts for the dependence …

High-dimensional data and linear models: a review. M. Brimacombe, Department of Biostatistics, University of Kansas Medical Center, Kansas City, KS, USA. Abstract: The …
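The desparsified-Lasso inference procedure above starts from an ordinary Lasso fit in the p >> n regime. A minimal sketch of that first step, assuming simulated data and an illustrative penalty level (scikit-learn's `Lasso`, not the paper's bootstrap procedure):

```python
# Hedged sketch: a Lasso fit when the number of predictors p exceeds the
# sample size n. Data, sparsity level, and alpha are invented for illustration.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 100, 500                       # high-dimensional: p >> n
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:5] = 2.0                        # sparse truth: only 5 active predictors
y = X @ beta + 0.5 * rng.standard_normal(n)

fit = Lasso(alpha=0.2).fit(X, y)      # alpha chosen for illustration only
support = np.flatnonzero(fit.coef_)   # indices of selected predictors
print(len(support))                   # far fewer than p
```

The desparsifying step would then correct the bias of `fit.coef_` coordinate by coordinate to obtain asymptotically normal estimates; that correction is what enables the simultaneous inference described above.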
Tony Cai's papers: Estimation and Inference for High-dimensional Generalized Linear Models with Knowledge Transfer. Sai Li, Linjun Zhang, Tony Cai, and Hongzhe Li. Abstract: Transfer learning provides a powerful tool for incorporating related data into a target study of interest. In epidemiology and medical studies, the classification of a …

This study proposes a novel complete subset averaging (CSA) method for high-dimensional generalized linear models based on a penalized Kullback–Leibler (KL) loss. All models under consideration can be potentially misspecified, and the dimension of the covariates is allowed to diverge to infinity.
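Penalized estimation in a high-dimensional GLM, as used by the candidate models above, can be sketched with a convex L1 penalty standing in for the penalized KL loss; the simulated data, penalty strength `C`, and solver are illustrative assumptions, not the CSA method itself:

```python
# Hedged sketch: an L1-penalized logistic regression (a GLM with logit link)
# in a high-dimensional setting. All parameter choices are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n, p = 200, 400
X = rng.standard_normal((n, p))
beta = np.zeros(p)
beta[:3] = 1.5                        # sparse truth: 3 active predictors
prob = 1.0 / (1.0 + np.exp(-(X @ beta)))
y = rng.binomial(1, prob)

# 'liblinear' supports the l1 penalty; C is the inverse penalty strength.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
n_active = np.count_nonzero(clf.coef_)
print(n_active)                       # a small subset of the 400 predictors
```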
http://www.personal.psu.edu/ril4/research/AOS1761PublishedVersion.pdf

This paper studies generalized additive partial linear models with high-dimensional covariates. We are interested in which components (including parametric and nonparametric components) are nonzero. The additive nonparametric functions are approximated by polynomial splines.
http://www-stat.wharton.upenn.edu/~tcai/paper/Transfer-Learning-GLM.pdf
In this paper, we propose to use a penalized estimator for homogeneity detection in the high-dimensional generalized linear model (GLM) that is composed of two non-convex penalties: individual sparsity and sparsity of pairwise differences. We consider a class of non-convex penalties that includes most of the existing …

http://www-stat.wharton.upenn.edu/~tcai/paper/html/Transfer-Learning-GLM.html

Keywords: generalized linear model; high-dimensional inference; matrix uncertainty selector; measurement error; sparse estimation. Acknowledgments: the authors would like to thank Prof. Bin Yu for discussions and Dr. Sjur Reppe for providing the bone density data.

In this paper, a graphical-model-based doubly sparse regularized estimator is discussed under high-dimensional generalized linear models; it utilizes the graph structure among the predictors. The graph information among predictors is incorporated node by node using a decomposed representation, and sparsity is encouraged both within and …

Many current intrinsically interpretable machine learning models can only handle data that are linear, low-dimensional, and of relatively independent attributes, often with discrete attribute values, while models that are capable of handling high-dimensional nonlinear data, like deep learning, have very poor interpretability.

Steffen Fieuws, Geert Verbeke, Filip Boen, Christophe Delecluse, High Dimensional Multivariate Mixed Models for Binary Questionnaire Data, Journal of the …

In both cases, models that are based on pairwise covariances can be used on their own or as an element in a larger model, such as a spatial generalized linear model.
In this work, we are mainly concerned with using spatial information to improve the accuracy of predictions, rather than with reducing bias in parameter estimates (LeSage, 2008).
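A pairwise-covariance building block of the kind mentioned above can be sketched as an exponential covariance over site coordinates; the coordinates, sill, and range parameters are invented for illustration:

```python
# Hedged sketch: building a spatial covariance matrix from pairwise distances,
# C_ij = sigma2 * exp(-d_ij / range_). All parameter values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
coords = rng.uniform(0.0, 10.0, size=(20, 2))   # 20 spatial sites

# Pairwise Euclidean distances between all sites.
d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)

sigma2, range_ = 1.0, 3.0                       # sill and range (invented)
C = sigma2 * np.exp(-d / range_)

# A valid covariance matrix is symmetric positive definite.
eigvals = np.linalg.eigvalsh(C)
print(C.shape, eigvals.min() > 0)
```

Such a matrix can serve as the random-effect covariance in a spatial generalized linear model, which is how the pairwise-covariance element plugs into the larger model described above.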