At the next Bergen Machine Learning meetup I will lecture on some of my work with gradient boosting algorithms: by coupling information theory, the frequency domain and tree boosting, the algorithm can adaptively learn the optimal structure of individual trees and how many trees should be added, making explicit regularization redundant. This is nice: there is no worry about overfitting, the computational cost is drastically reduced, and it facilitates the democratization of machine learning.
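To give a flavour of the idea, here is a minimal sketch of gradient boosting where an information criterion, rather than a tuned hyperparameter, decides how many trees to add. Everything in it is illustrative: the base learner is a hand-rolled one-split stump, and the AIC-like penalty (`2` per leaf) is a stand-in for the actual information-theoretic criterion used in my work, not the method itself.

```python
import numpy as np

def fit_stump(x, residual):
    """Fit a one-split regression stump to the residual by exhaustive search."""
    best = None
    for threshold in np.unique(x):
        left, right = residual[x <= threshold], residual[x > threshold]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, threshold, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda z: np.where(z <= t, lv, rv)

def boost(x, y, learning_rate=0.3, max_trees=100):
    """Add stumps while a penalized loss improves; no tuned regularization."""
    pred = np.full_like(y, y.mean(), dtype=float)
    trees = []

    def score(p, n_leaves):
        # Illustrative AIC-like criterion: fit term plus complexity penalty.
        return len(y) * np.log(((y - p) ** 2).mean()) + 2 * n_leaves

    current = score(pred, 1)
    while len(trees) < max_trees:
        stump = fit_stump(x, y - pred)
        candidate = pred + learning_rate * stump(x)
        new = score(candidate, 1 + 2 * (len(trees) + 1))
        if new >= current:  # stop as soon as the penalized loss stops improving
            break
        pred, current = candidate, new
        trees.append(stump)
    return trees, pred
```

The point of the sketch is the stopping rule: each candidate tree must pay for its added complexity, so the ensemble size falls out of the data instead of a validation loop.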
A repository for the presentation can be found here.