dc.contributor.author: Vannucci, Giulia
dc.contributor.author: Gottard, Anna
dc.contributor.author: Grilli, Leonardo
dc.contributor.author: Rampichini, Carla
dc.date.accessioned: 2022-06-01T12:19:54Z
dc.date.available: 2022-06-01T12:19:54Z
dc.date.issued: 2021
dc.identifier: ONIX_20220601_9788855183048_524
dc.identifier.issn: 2704-5846
dc.identifier.uri: https://library.oapen.org/handle/20.500.12657/56339
dc.description.abstract: Mixed or multilevel models exploit random effects to deal with hierarchical data, where statistical units are clustered in groups and cannot be assumed to be independent. Sometimes the assumption of a linear dependence of the response on a set of explanatory variables is not plausible, and model specification becomes a challenging task. Regression trees can help capture non-linear effects of the predictors. This method was extended to clustered data by modelling the fixed effects with a decision tree while accounting for the random effects with a linear mixed model in a separate step (Hajjem & Larocque, 2011; Sela & Simonoff, 2012). Random effect regression trees have been shown to be less sensitive to parametric assumptions and to provide better predictive power than both linear models with random effects and regression trees without random effects. We propose a new random effect model, called the tree embedded linear mixed model, where the regression function is piecewise-linear, consisting of the sum of a tree component and a linear component. This model can handle non-linear effects, interaction effects, and cluster mean dependencies. The proposal is the mixed effect version of the semi-linear regression trees (Vannucci, 2019; Vannucci & Gottard, 2019). Model fitting relies on an iterative two-stage estimation procedure in which the fixed and the random effects are estimated jointly (an illustrative sketch of this idea follows the record below). The proposed model allows the effect of a given predictor to be decomposed into within-cluster and between-cluster components. We show, via a simulation study and an application to INVALSI data, that these extensions improve the predictive performance of the model in the presence of quasi-linear relationships, avoid overfitting, and facilitate interpretability.
dc.language: English
dc.relation.ispartofseries: Proceedings e report
dc.subject.other: Regression trees
dc.subject.other: Multilevel models
dc.subject.other: Random effects
dc.subject.other: Hierarchical data
dc.title: Chapter Random effects regression trees for the analysis of INVALSI data
dc.type: chapter
oapen.identifier.doi: 10.36253/978-88-5518-304-8.07
oapen.relation.isPublishedBy: bf65d21a-78e5-4ba2-983a-dbfa90962870
oapen.relation.isbn: 9788855183048
oapen.series.number: 127
oapen.pages: 6
oapen.place.publication: Florence
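
The abstract describes an iterative two-stage estimation in which a tree-based fixed part and cluster-level random effects are fitted alternately. The Python sketch below illustrates only that general alternating idea, in the spirit of the random effect regression trees cited in the abstract (Hajjem & Larocque, 2011; Sela & Simonoff, 2012); it is not the authors' implementation of the tree embedded linear mixed model. The function name re_tree_fit, the mean-based random-intercept update, the convergence rule, and the toy data are illustrative assumptions.

import numpy as np
from sklearn.tree import DecisionTreeRegressor

def re_tree_fit(X, y, cluster, n_iter=50, tol=1e-6, max_depth=3):
    # Alternate between a regression tree for the fixed part and
    # per-cluster random intercepts estimated from the tree residuals.
    y = np.asarray(y, dtype=float)
    cluster = np.asarray(cluster)
    clusters = np.unique(cluster)
    b = {c: 0.0 for c in clusters}             # random intercepts, start at zero
    tree, prev_rss = None, np.inf
    for _ in range(n_iter):
        # Stage 1: fit the tree on responses adjusted for the current random effects
        y_adj = y - np.array([b[c] for c in cluster])
        tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, y_adj)
        # Stage 2: update each cluster's random intercept from the tree residuals
        resid = y - tree.predict(X)
        for c in clusters:
            b[c] = resid[cluster == c].mean()  # crude surrogate for a BLUP
        rss = float(np.sum((resid - np.array([b[c] for c in cluster])) ** 2))
        if abs(prev_rss - rss) < tol:          # stop once the fit stabilises
            break
        prev_rss = rss
    return tree, b

# Toy usage with synthetic clustered data (illustrative only)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
cluster = rng.integers(0, 5, size=200)
y = np.where(X[:, 0] > 0, 2.0, -1.0) + 0.5 * cluster + rng.normal(scale=0.3, size=200)
fitted_tree, intercepts = re_tree_fit(X, y, cluster)

In the tree embedded linear mixed model described in the abstract, the fixed part also contains a linear component, so the first stage would fit a semi-linear (tree plus linear) regression rather than a tree alone, and the random effects would be estimated with a proper linear mixed-model step rather than the simple cluster means used in this sketch.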

