Classification and regression tree learning on massive datasets is a common data mining task at Google, yet many state-of-the-art tree learning algorithms require training data to reside in memory on a single machine. While more scalable implementations of tree learning have been proposed, they typically require specialized parallel computing architectures. In contrast, the majority of Google's computing infrastructure is based on commodity hardware.
In this paper, Herbach describes PLANET: a scalable distributed framework for learning tree models over large datasets. PLANET defines tree learning as a series of distributed computations, and implements each one using the MapReduce model of distributed computation.
Herbach shows how this framework supports scalable construction of classification and regression trees, as well as ensembles of such models.
Herbach also discusses the benefits and challenges of using a MapReduce compute cluster for tree learning, and demonstrates the scalability of this approach by applying it to a real-world learning task from the domain of computational advertising.
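To give a flavor of how tree learning maps onto MapReduce, the sketch below shows the core idea in miniature: each mapper scans a shard of training data and emits sufficient statistics (count, sum, sum of squares of the target) for every candidate split, a reducer aggregates those statistics per split, and the best split is the one minimizing total squared error. This is a simplified illustration in plain Python, not the PLANET implementation; the data, thresholds, and function names are all hypothetical.

```python
from collections import defaultdict

# Hypothetical toy data: (feature_value, target) pairs, partitioned into
# "shards" as they would be across machines in a MapReduce job.
shards = [
    [(1.0, 2.0), (2.0, 2.5), (3.0, 8.0)],
    [(1.5, 2.2), (2.5, 7.5), (3.5, 8.5)],
]

CANDIDATE_THRESHOLDS = [1.75, 2.25, 2.75]

def map_shard(shard):
    """Map phase: for each candidate threshold, emit sufficient
    statistics (count, sum, sum of squares) keyed by which side
    of the split each example falls on."""
    for threshold in CANDIDATE_THRESHOLDS:
        for x, y in shard:
            side = "L" if x <= threshold else "R"
            yield (threshold, side), (1, y, y * y)

def reduce_stats(pairs):
    """Reduce phase: aggregate the emitted statistics by key."""
    agg = defaultdict(lambda: [0, 0.0, 0.0])
    for key, (n, s, ss) in pairs:
        acc = agg[key]
        acc[0] += n
        acc[1] += s
        acc[2] += ss
    return agg

def best_split(agg):
    """Pick the threshold minimizing the summed squared error
    of the left and right partitions."""
    def sse(n, s, ss):
        # Sum of squared deviations from the mean, from the
        # aggregated statistics alone (no second pass over data).
        return ss - s * s / n if n else 0.0
    return min(
        CANDIDATE_THRESHOLDS,
        key=lambda t: sse(*agg[(t, "L")]) + sse(*agg[(t, "R")]),
    )

pairs = [kv for shard in shards for kv in map_shard(shard)]
agg = reduce_stats(pairs)
print(best_split(agg))  # → 2.25
```

The key point is that only small, fixed-size statistics cross the network, so the data itself never needs to fit in one machine's memory, which is what makes the approach a fit for commodity clusters.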
Josh Herbach is an engineer at Google where he works on ads quality. Prior to joining Google in June 2008, he received his bachelor's degree in computer science from Princeton University, where he did research in clustering evaluation, electronic voting systems, and autonomous vehicles.
When he isn't busy making self-driving cars that can hack elections and run k-means, he occasionally spends his time puzzling, backpacking, or hunting for good dim sum restaurants.