WiDS Datathon 2021 – Hyperparameter Tuning

This is a really short post since I didn't tune many hyperparameters of the Explainable Boosting Machine. Honestly, there weren't many I wanted to tune anyway haha. Plus, heavy hyperparameter tuning on an explainable model felt a little weird to me, and I didn't get much sleep last night, so I may not be articulating that well right now haha.

Anyway, I used Bayesian optimization to tune the minimum samples per leaf and the maximum number of leaves. I only ran it for a handful of iterations, and it still found a combination that gave me a slightly higher AUC! Huzzah!
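For anyone curious what that looks like in code, here's a minimal sketch using the bayes_opt package and interpret's ExplainableBoostingClassifier (those two parameters map to its min_samples_leaf and max_leaves arguments). This isn't my exact setup: the search bounds, iteration counts, and the synthetic stand-in data are just illustrative.

```python
from bayes_opt import BayesianOptimization
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score

# Stand-in data so the sketch runs end to end; the real run used the
# prepped WiDS training set instead.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)

def ebm_auc(min_samples_leaf, max_leaves):
    """Objective: mean cross-validated AUC for a given EBM config."""
    # bayes_opt passes floats, so cast the tree-structure params to ints
    model = ExplainableBoostingClassifier(
        min_samples_leaf=int(min_samples_leaf),
        max_leaves=int(max_leaves),
        random_state=42,
    )
    return cross_val_score(model, X, y, scoring="roc_auc", cv=3).mean()

optimizer = BayesianOptimization(
    f=ebm_auc,
    pbounds={"min_samples_leaf": (2, 50), "max_leaves": (2, 10)},  # illustrative bounds
    random_state=42,
)
optimizer.maximize(init_points=5, n_iter=10)  # a short run, like mine
print(optimizer.max)  # best AUC found and the params that produced it
```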

Fittingly, a tweet got retweeted onto my timeline today making this exact point: initial data prep makes much more of a difference to model performance than parameter tuning, or even picking more "sophisticated" models.

I have a "lessons learned" draft going that I'll clean up and post once this is all done, since I have so many thoughts that don't belong in my nice little "workflow" type posts haha. I've realized there's a pretty big difference between chasing perfection in a datathon and how I work in the real world, after all.

#datascience #wids2021