Max Lin on finishing second in the R Challenge

Kaggle Team · Kaggle Blog · Feb 22, 2011

I participated in the R package recommendation engine competition on Kaggle for two reasons. First, I use R a lot; I could not have learned statistics without it. This competition is my chance to give back to the community an R package recommendation engine. Second, in my day job as an engineer behind a machine learning service in the cloud, product recommendation is one of the most popular applications our early adopters want to use the web service for. This competition is my opportunity to stand in users’ shoes and identify the pain points of building a recommendation system.

I treat R package recommendation as a binary classification task: a classifier $latex f(u, p)$ takes as input a user $latex u$ and a package $latex p$, and predicts whether the user will install the package. My final submission (team name: Record Me Men) combines four classifiers, shown as large dots in Figure 1. The four classifiers share the same form of objective function, which minimizes a loss $latex L$ plus a regularizer $latex R$:

$latex J(\theta) = \sum_i L(y_i, f(u_i, p_i; \theta)) + \lambda R(\theta)$

where $latex f(u, p; \theta)$ is the classification model, $latex \theta$ denotes its parameters, and $latex y_i$, $latex u_i$, and $latex p_i$ are the class label (package installed or not), the user, and the package of the i-th training example, respectively. $latex \lambda$ controls the penalty from the regularizing function and is chosen by cross validation.

Model 1: Baseline. Model 1 is example_model_2.R, provided by the competition organizers as a baseline. In this model, a user is encoded as dummy variables $latex \mathbf{u}$, and a package is represented by seven features $latex \mathbf{p} = \{p_1, \dots, p_7\}$ (e.g., the logarithm of the package’s dependency count; see the R script for the complete list). The classification model is a linear combination of user dummy variables and package features with weight parameters, $latex f(u, p) = \mathbf{\theta_u}^T \mathbf{u} + \mathbf{\theta_p}^T \mathbf{p}$. The parameters are $latex \theta_u$ and $latex \theta_p$ (plus an intercept). The loss function $latex L$ is the negative log likelihood of the logistic model, and there is no regularizer.
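
A minimal Python sketch may make this setup concrete. It is only a rough analogue of example_model_2.R (which uses R’s glm()); scikit-learn’s LogisticRegression is my stand-in, and the file name, column names, and single package feature below are hypothetical placeholders rather than the contest’s actual schema.

```python
# Rough analogue of the baseline: logistic regression on user dummy
# variables plus a handful of package features, with no regularizer.
# File and column names are hypothetical placeholders.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

train = pd.read_csv("training_data.csv")        # assumed file name
package_features = ["LogDependencyCount"]       # the real script uses seven such features

# One dummy column per user, concatenated with the package features.
X = pd.concat([pd.get_dummies(train["User"], prefix="user"),
               train[package_features]], axis=1)
y = train["Installed"]

# Model 1 has no regularizer, so make the default L2 penalty negligible.
clf = LogisticRegression(C=1e6, max_iter=1000)
clf.fit(X, y)

print("training AUC:", roc_auc_score(y, clf.predict_proba(X)[:, 1]))
```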

The first model establishes a strong baseline, achieving an AUC of ~0.94, labeled as M1b2 in Figure 1. A variant of this model that omits the user dummy variables (see example_model_1.R, also provided by the contest organizers) achieves a noticeably lower AUC of 0.81 (not shown in Figure 1). This suggests that some users are simply more likely to install R packages than others. In the next model, I explore a classification model that captures not only user variation but also package variation.

Model 2: Latent factor models. In contrast to Model 1, which uses features derived from metadata, Model 2 focuses on the user-package rating matrix. It consists of two components: baseline estimates and latent factors. The baseline estimates are a linear combination of three parts: global (one parameter $latex \mu$), user (one parameter per user, $latex \mu_u$), and package (one parameter per package, $latex \mu_p$). For the latent factors, I assume there are K latent factors for each user, $latex \mathbf{\beta}_u$, and for each package, $latex \mathbf{\beta}_p$, and that the inner product of these two factors captures the interaction between a user and a package. The classification model in Model 2 is $latex f(u, p) = \mu + \mu_u + \mu_p + \mathbf{\beta}_u^T \mathbf{\beta}_p$. This kind of latent factor model, also known as “singular value decomposition”, has been reported with great success in previous collaborative filtering studies and in the Netflix Prize.

I choose an exponential loss function for Model 2, $latex L(y, f(u, p)) = \exp(- y f(u, p))$, where $latex y \in \{1, -1\}$. I choose exponential loss over squared loss because 1) exponential loss approximates 0–1 loss better than squared loss, and 2) exponential loss is differentiable. I apply L2 regularizers, $latex R(\cdot) = ||\mu_u||^2 + ||\mu_p||^2 + ||\mathbf{\beta}_u||^2 + ||\mathbf{\beta}_p||^2$, and minimize the objective function using stochastic gradient descent. The number of latent factors, K, is chosen by cross validation.
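
For concreteness, here is a minimal NumPy sketch of how such a model can be trained with stochastic gradient descent on the exponential loss. This is not my actual training program; the hyperparameters (K, learning rate, λ, epochs) are illustrative placeholders.

```python
# Sketch of Model 2: baseline estimates plus K latent factors per user and
# package, trained by SGD on the exponential loss exp(-y * f(u, p)).
import numpy as np

def train_latent_factor(data, n_users, n_packages, K=10,
                        lr=0.01, lam=0.01, epochs=20, seed=0):
    """data: list of (user_id, package_id, y) triples with y in {+1, -1}."""
    rng = np.random.default_rng(seed)
    mu = 0.0
    mu_u = np.zeros(n_users)
    mu_p = np.zeros(n_packages)
    beta_u = 0.01 * rng.standard_normal((n_users, K))
    beta_p = 0.01 * rng.standard_normal((n_packages, K))

    for _ in range(epochs):
        rng.shuffle(data)
        for u, p, y in data:
            f = mu + mu_u[u] + mu_p[p] + beta_u[u] @ beta_p[p]
            # d loss / d f, clipped to keep exp() from overflowing.
            g = -y * np.exp(np.clip(-y * f, -30.0, 30.0))
            # Gradient steps, with L2 shrinkage on the regularized parameters.
            mu      -= lr * g
            mu_u[u] -= lr * (g + lam * mu_u[u])
            mu_p[p] -= lr * (g + lam * mu_p[p])
            bu, bp = beta_u[u].copy(), beta_p[p]
            beta_u[u] -= lr * (g * bp + lam * bu)
            beta_p[p] -= lr * (g * bu + lam * bp)
    return mu, mu_u, mu_p, beta_u, beta_p

def predict(params, u, p):
    mu, mu_u, mu_p, beta_u, beta_p = params
    return mu + mu_u[u] + mu_p[p] + beta_u[u] @ beta_p[p]
```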

The latent factor model works very well on the R package recommendation data, achieving an AUC of ~0.97 (see the M2 family in Figure 1). I plot the performance of five latent factor models with different values of K, ranging from 10 to 50, labeled as M2k10, M2k20, …, M2k50 in Figure 1. As K increases, the latent factor model becomes more expressive and fits the data better, resulting in higher AUC.

Model 3: Package LDA topic. In Model 3, I explore a new feature not used in Models 1 and 2: a package’s topic, inferred with LDA (Latent Dirichlet Allocation) from the word counts of the package’s man pages. I use topics.csv, kindly prepared by the contest organizers, to map an R package to one of 25 LDA topics (see topic_models.R, also provided by the organizers, for details on running LDA).

The classification model of Model 3 is similar to Model 2. Model 3 replaces the user factors in Model 2 with T per-user topic weights $latex \mathbf{t}_u$ (T=25 here), and replaces the package factors with T dummy variables $latex \mathbf{p}$ indicating the package’s topic. The classification model is $latex f(u, p) = \mu + \mu_u + \mu_p + \mathbf{t}_u^T \mathbf{p}$. The loss function is the same as in Model 2, and the regularizer is L2, $latex R(\cdot) = ||\mu_u||^2 + ||\mu_p||^2 + ||\mathbf{t}_u||^2$.
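
Because the package side is a one-hot topic indicator, the interaction term $latex \mathbf{t}_u^T \mathbf{p}$ reduces to looking up the weight of the package’s topic for that user. Below is a rough sketch, written generically so the same code also covers Model 4 by swapping in task view ids; again, this is not my actual program, and the hyperparameters are placeholders.

```python
# Sketch of Model 3: baseline estimates plus a per-user weight for each of
# T topics. The one-hot package indicator turns the interaction into a
# single lookup t_u[u, topic(p)].
import numpy as np

def train_user_topic_model(data, n_users, n_packages, n_topics,
                           lr=0.01, lam=0.01, epochs=20, seed=0):
    """data: list of (user_id, package_id, topic_id, y) with y in {+1, -1}."""
    rng = np.random.default_rng(seed)
    mu = 0.0
    mu_u = np.zeros(n_users)
    mu_p = np.zeros(n_packages)
    t_u = np.zeros((n_users, n_topics))       # per-user topic weights

    for _ in range(epochs):
        rng.shuffle(data)
        for u, p, t, y in data:
            f = mu + mu_u[u] + mu_p[p] + t_u[u, t]
            # Same exponential loss as Model 2, clipped for stability.
            g = -y * np.exp(np.clip(-y * f, -30.0, 30.0))
            mu        -= lr * g
            mu_u[u]   -= lr * (g + lam * mu_u[u])
            mu_p[p]   -= lr * (g + lam * mu_p[p])
            t_u[u, t] -= lr * (g + lam * t_u[u, t])
    return mu, mu_u, mu_p, t_u
```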

Model 3 achieves better AUC than Model 1, reaching ~0.97 (labeled as M3u in Figure 1). Prior to M3u, I explore a simpler model in which all users share the same topic weights $latex \mathbf{t}$. The parameter space is smaller, but Model 3 with shared parameters (labeled as M3b in Figure 1) performs slightly worse than Model 3 with user-specific parameters. This makes sense: it is unlikely that every R user is interested in the same set of LDA topics.

Model 4: Package task view. R packages are organized into task views (e.g., high-performance computing, survival analysis, and time series; see the CRAN Task Views page for the complete list). It is plausible that a user interested in a particular task would install not just one but many, even all, packages in the same task view (e.g., using the install.views() R function). We could therefore improve recommendations if we knew how much a user is interested in a particular task view. I use views.csv, provided by the contest organizers, to map a package to its task view.

The classification model of Model 4 is similar to Model 3, except that the LDA topics in Model 3 are replaced with task views (T=29: 28 task views plus 1 unknown view). The performance of Model 4 is labeled as M4u in Figure 1. Model 4 performs well, better than Model 1. As with Model 3, I experiment with a variant of Model 4 in which all users share the same task view parameters $latex \mathbf{t}$, labeled as M4b in Figure 1, and it performs worse than Model 4 with per-user parameters. The finding is consistent with Model 3: R users, at least on this data set, seem to differ in their task view preferences.
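
Since only the package-side feature changes, the Model 3 sketch above carries over directly; all that differs is the mapping from package to task view. The column names in views.csv below are assumptions, not the file’s actual schema.

```python
# Reuse the Model 3 sketch with task view ids in place of LDA topic ids.
# Column names ("Package", "View") in views.csv are assumed here.
import pandas as pd

views = pd.read_csv("views.csv")
views["View"] = views["View"].fillna("unknown")               # 28 views + 1 unknown
view_ids = {v: i for i, v in enumerate(sorted(views["View"].unique()))}
package_to_view = dict(zip(views["Package"], views["View"].map(view_ids)))

# Training then looks exactly like Model 3:
#   data = [(user_id, package_id, package_to_view[pkg], y), ...]
#   params = train_user_topic_model(data, n_users, n_packages, n_topics=len(view_ids))
```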

Ensemble learning. I combine the four classifiers using logistic regression. To collect out-of-sample training data for the ensemble learner, I divide the training data into three sets: 80%, 10%, and 10%. I train the individual classifiers on the 80%, then apply them to the remaining 20%: the scores on the first held-out 10% serve as training data for the ensemble learner, while both the individual classifiers and the combiner are evaluated on the last 10% (these are the evaluations shown in Figure 1 and Figure 2). After evaluating one fold, I shift the data and repeat with another fold until all data are used, resulting in a total of 10 ensemble learners. The outputs of these 10 ensemble learners are averaged to produce the final predictions.
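
One fold of this stacking procedure might look roughly like the following, with scikit-learn’s LogisticRegression standing in for R’s glm() and a placeholder base-model interface (fit / predict_scores) rather than my actual model code.

```python
# One fold of stacking: fit base models on the 80% split, score the 10%
# stacking split to train the logistic-regression combiner, and evaluate
# both on the final 10% split. `base_models` is a hypothetical list of
# objects exposing fit() and predict_scores(); the real base models are
# Models 1-4 described above.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

def run_fold(base_models, X_train, y_train, X_stack, y_stack, X_test, y_test):
    # 1. Fit each base model on the 80% split.
    for m in base_models:
        m.fit(X_train, y_train)

    # 2. Base-model scores on the stacking split become the combiner's features.
    S_stack = np.column_stack([m.predict_scores(X_stack) for m in base_models])
    combiner = LogisticRegression()
    combiner.fit(S_stack, y_stack)

    # 3. Evaluate the combiner on the held-out evaluation split.
    S_test = np.column_stack([m.predict_scores(X_test) for m in base_models])
    return combiner, roc_auc_score(y_test, combiner.predict_proba(S_test)[:, 1])
```

Repeating this over shifted folds yields the 10 combiners whose outputs are averaged for the final predictions.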

Ensemble learning works very well, as shown in Figure 2. Combining M1 and M2 achieves an AUC of 0.9721, higher than either M1 (0.94) or M2 (0.9702) alone. When I add M3 and M4 to the combination, performance improves further. The submission combining all four models achieves the best performance on the training set among all models. On the test set, the final model achieves an AUC of 0.983289 (after post-processing, see below), the highest among my submissions. Ensemble learning likely succeeds here because the individual models are strong performers, and they are diverse enough in classification models and features to complement each other.

Post-processing. I apply two post-processing steps before submitting each entry. First, I memorize the (user, package) labels from the training set; for any (user, package) pair in the test set that already appears in the training set, I output the memorized training label instead of the model’s prediction. Second, I assume that when a user installs a package P, the user also installs the packages that P depends on. I record, for each user, the dependencies of all packages the user installed in the training set. Given a pair (U, P) in the test set, if package P is found in the set of packages that user U’s installed packages depend on, I ignore the model’s prediction and output 1.0 instead. Although the dependency assumption does not hold completely (there are contradictory examples in the training set), the two filters combined increase absolute AUC by ~0.004, which matters in a competition with razor-thin margins between the leading submissions.
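
In code, the two filters amount to dictionary lookups applied on top of the model scores. The sketch below assumes a package-to-dependencies mapping (depends_on) built from the contest metadata, and only follows dependencies one level deep; names and structure are my own, not the submission’s actual code.

```python
# Post-processing sketch: (1) echo back labels for (user, package) pairs seen
# in training; (2) force a 1.0 prediction when the test package lies among the
# dependencies of packages the user already installed.
def post_process(predictions, train_rows, depends_on):
    """predictions: dict (user, package) -> score
    train_rows: iterable of (user, package, label) with label in {0, 1}
    depends_on: dict package -> set of packages it depends on."""
    seen = {(u, p): y for u, p, y in train_rows}

    # Packages that each user's installed packages depend on
    # (one level deep here; the full closure would iterate to a fixed point).
    user_deps = {}
    for u, p, y in train_rows:
        if y == 1:
            user_deps.setdefault(u, set()).update(depends_on.get(p, ()))

    out = {}
    for (u, p), score in predictions.items():
        if (u, p) in seen:
            out[(u, p)] = float(seen[(u, p)])    # filter 1: memorized label
        elif p in user_deps.get(u, ()):
            out[(u, p)] = 1.0                    # filter 2: dependency rule
        else:
            out[(u, p)] = score
    return out
```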

I develop the classification training programs for Models 2, 3, and 4 in Python. I use R to explore the data, run logistic regression (glm() in the stats package), calculate AUC (performance() in the ROCR package), and plot results (ggplot() in the ggplot2 package). All programs are developed and run on a commodity PC with a dual-core 2GHz CPU and 4 GB of RAM. I make the source code available on GitHub.

Although my submissions use only the data the contest organizers provide and simple linear models, by no means should you stop here. There are many directions worth exploring, for example, crawling CRAN to analyze the rich graphs of depends, imports, suggests, and enhances relationships between R packages. In an open-ended contest like this, the sky is the limit.

Max Lin is a software engineer with Google Research in the New York City office, and the tech lead of the Google Prediction API. Prior to Google, he published research in video content analysis, sentiment analysis, machine learning, and cross-lingual information retrieval. He has a PhD in Computer Science from Carnegie Mellon University. He would like to thank the organizers of the R package recommendation engine competition for their hard work in putting the competition together. He participated in the competition as an individual and wrote all code in his spare time. Nothing expressed here should be attributed to his employer.

Originally published at blog.kaggle.com on February 22, 2011.
