Quan Sun on finishing in second place in Predict Grant Applications

Kaggle Team
Kaggle Blog · Feb 22, 2011

I’m a PhD student in the Machine Learning Group at the University of Waikato, Hamilton, New Zealand. I’m also a part-time software developer at 11Ants Analytics. My PhD research focuses on meta-learning and the full model selection problem. In 2009 and 2010, I participated in the UCSD/FICO data mining contests.

What I tried and what ended up working

I tried many different algorithms (mainly Weka and MATLAB implementations) and feature sets across nearly 80 submissions. This report briefly introduces the two approaches that worked for this competition, discussed in the order in which they were submitted.

After the first 10 test submissions, I realised that there was concept drift happening between 2007 and 2008: the success rates declined gradually from 2007. Also, the information page of the contest states that “In Australia, success rates have fallen to 20–25 per cent…”. To me, this probably means that the decision rules for grant applications were somehow changed between 2007 and 2008. Here are some consequences I could think of, including but not limited to:

  • The overall success rate will continue to drop
  • Applications that were successful in 2005/2006 would be declined in 2007/2008, and likewise in 2009/2010
  • Success patterns are becoming “more” random
  • The decision rules for 2009/2010 will be closer to those for 2007/2008 than to the rules for 2006 and earlier

Based on the information and assumptions above, I decided to mainly use data points from 2007 and 2008 for training my classifiers, which turned out to be a reasonable choice.

Approach A: Ensemble Selection with transformed feature set (used in the first 20 submissions)

Data engineering/transformation part

  • Start.date → numeric features: year, month and day as numbers
  • RFCD.Code.X (X = 1 to 5) → nominal
  • Person.ID.X (X = 1 to 15) → nominal
  • Number.of.Grant.X (X = 1 to 15) → total number of successful/unsuccessful grants per application
  • Publications AA, A, B, C → total number of AA, A, B and C publications per application
  • Role.X → total number of CHIEF_INVESTIGATORs, PRINCIPAL_SUPERVISORs, DELEGATED_RESEARCHERs and EXT_CHIEF_INVESTIGATORs per application
  • Country.of.Birth.X → total number of applicants born in Asia_Pacific, Australia, Great_Britain, Western_Europe, Eastern_Europe, North_America, New_Zealand and Middle_East_and_Africa per application
  • With.PHD → total number of PhDs per application
  • Years.IN.UNI → total number of people who have been at the university for more than 5 years

After all those transformations were done, I also had a Java program that replaced every nominal attribute with its corresponding frequency. The frequency counting is based on all the available data points, so the final feature set consists of the original features, the transformed features and the frequency features.
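The transformation code itself wasn't shared, so the following is only a minimal Java sketch of two of the ideas above, counting over the repeated Role.X columns and frequency-encoding a nominal column. It assumes each application row is held as a map from column name (e.g. “Role.1” … “Role.15”) to its raw string value; the helper names are hypothetical.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of two of the transformations described above:
// counting over repeated columns and frequency-encoding a nominal column.
public class FeatureEngineeringSketch {

    // Total number of Role.1 .. Role.15 slots holding a given role
    // (e.g. "CHIEF_INVESTIGATOR") on one application row.
    static int countRole(Map<String, String> row, String role) {
        int count = 0;
        for (int i = 1; i <= 15; i++) {
            if (role.equals(row.get("Role." + i))) {
                count++;
            }
        }
        return count;
    }

    // Frequency encoding: how often each value of a nominal column occurs
    // across all available rows (training and test data together).
    static Map<String, Integer> valueFrequencies(List<Map<String, String>> rows, String column) {
        Map<String, Integer> frequencies = new HashMap<>();
        for (Map<String, String> row : rows) {
            String value = row.get(column);
            if (value != null && !value.isEmpty()) {
                frequencies.merge(value, 1, Integer::sum);
            }
        }
        return frequencies;
    }
}
```

Computing the frequencies over all rows together mirrors the statement above that the counting is based on all available data points.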

Modeling part

My main method is called Ensemble Selection, originally proposed by Rich Caruana and co-authors at Cornell University (http://portal.acm.org/citation.cfm?id=1015432). The following pseudocode demonstrates the basic idea of Ensemble Selection:

0. Split the data into two parts: The build set and the hillclimb set

1. Start with the empty ensemble.

2. Add to the ensemble the model (trained on the “build” set) from the library that maximizes the ensemble’s performance on the error metric (AUC for this contest), measured on a “hillclimb” (validation) set.

3. Repeat Step 2 for a fixed number of iterations or until all the models have been used.

4. Return the ensemble from the nested set of ensembles that has maximum performance on the hillclimb (validation) set.
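In code, this hillclimbing loop looks roughly like the following library-agnostic Java sketch (the names are hypothetical; it assumes each base model's predicted probabilities on the hillclimb set have already been computed). Models may be selected more than once, and the best nested ensemble seen so far is returned.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch of greedy Ensemble Selection on a hillclimb set.
// libraryScores[m][i] is model m's predicted probability of "successful"
// for hillclimb instance i; labels[i] is the true 0/1 outcome.
public class EnsembleSelectionSketch {

    static List<Integer> select(double[][] libraryScores, int[] labels, int iterations) {
        int n = labels.length;
        double[] ensembleSum = new double[n];      // running sum of the selected models' scores
        List<Integer> selected = new ArrayList<>();
        List<Integer> bestEnsemble = new ArrayList<>();
        double bestAuc = -1.0;

        for (int it = 0; it < iterations; it++) {
            int bestModel = -1;
            double bestCandidateAuc = -1.0;
            // Step 2: try adding each model and keep the one that maximises
            // the AUC of the averaged ensemble on the hillclimb set.
            for (int m = 0; m < libraryScores.length; m++) {
                double[] candidate = new double[n];
                for (int i = 0; i < n; i++) {
                    candidate[i] = (ensembleSum[i] + libraryScores[m][i]) / (selected.size() + 1);
                }
                double candidateAuc = auc(candidate, labels);
                if (candidateAuc > bestCandidateAuc) {
                    bestCandidateAuc = candidateAuc;
                    bestModel = m;
                }
            }
            for (int i = 0; i < n; i++) {
                ensembleSum[i] += libraryScores[bestModel][i];
            }
            selected.add(bestModel);
            // Step 4: remember the nested ensemble with the highest hillclimb AUC.
            if (bestCandidateAuc > bestAuc) {
                bestAuc = bestCandidateAuc;
                bestEnsemble = new ArrayList<>(selected);
            }
        }
        return bestEnsemble;
    }

    // AUC via the rank-sum (Mann-Whitney U) formulation; ties are broken
    // arbitrarily here to keep the sketch short.
    static double auc(double[] scores, int[] labels) {
        Integer[] order = new Integer[scores.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, (a, b) -> Double.compare(scores[a], scores[b]));
        double positiveRankSum = 0.0;
        long positives = 0, negatives = 0;
        for (int rank = 0; rank < order.length; rank++) {
            if (labels[order[rank]] == 1) {
                positiveRankSum += rank + 1;
                positives++;
            } else {
                negatives++;
            }
        }
        if (positives == 0 || negatives == 0) return 0.5;
        return (positiveRankSum - positives * (positives + 1) / 2.0) / ((double) positives * negatives);
    }
}
```

Since AUC depends only on the ranking of the scores, dividing by the ensemble size does not change which model wins an iteration; it merely keeps the candidate scores interpretable as averaged probabilities.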

Model library used for my Ensemble Selection system:

AdaBoost, LogitBoost, RealAdaBoost, DecisionTable, RotationForest, BayesNet and NaiveBayes: 7 algorithms, each with different parameter settings, giving 28 base classifiers in total.
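As a rough illustration of how such a library can be assembled with the Weka API mentioned later in the post, here is a sketch that uses only a subset of the algorithms above, with illustrative parameter values rather than the ones actually used in the winning entry:

```java
import java.util.ArrayList;
import java.util.List;

import weka.classifiers.Classifier;
import weka.classifiers.bayes.NaiveBayes;
import weka.classifiers.meta.AdaBoostM1;
import weka.classifiers.meta.LogitBoost;
import weka.classifiers.rules.DecisionTable;
import weka.classifiers.trees.RandomForest;

// Illustrative model library: the same algorithm appears several times with
// different parameter settings so that the ensemble has diverse members.
public class ModelLibrarySketch {

    static List<Classifier> buildLibrary() {
        List<Classifier> library = new ArrayList<>();
        for (int iterations : new int[] {10, 50, 100}) {    // illustrative values
            AdaBoostM1 adaBoost = new AdaBoostM1();
            adaBoost.setNumIterations(iterations);
            library.add(adaBoost);

            LogitBoost logitBoost = new LogitBoost();
            logitBoost.setNumIterations(iterations);
            library.add(logitBoost);
        }
        for (int trees : new int[] {100, 300}) {             // illustrative values
            RandomForest forest = new RandomForest();
            forest.setNumTrees(trees);
            library.add(forest);
        }
        library.add(new NaiveBayes());
        library.add(new DecisionTable());
        return library;
    }
}
```

In the actual entry, seven algorithms with varied parameters gave 28 base classifiers; the count in this sketch is only for illustration.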

Building set and hillclimb set for Ensemble Selection:

Data points from year 2007 are used as the “build set”

Data points from year 2008 are used as the “hillclimb set”

Or

Data points from 2007/01/01 to 2008/04/30 are used as the “build set”

Data points after 2008/04/30 are used as the “hillclimb set”

Both setups worked well for the Ensemble Selection approach.
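For concreteness, here is a minimal Java sketch of the second split (the types and names are hypothetical; it assumes each application's Start.date has already been parsed into a date):

```java
import java.time.LocalDate;
import java.util.List;

// Hypothetical sketch of the date-based build/hillclimb split. Rows dated
// 2007-01-01 to 2008-04-30 go into the build set, later rows into the
// hillclimb set, and rows before 2007 are dropped, matching the decision to
// train mainly on 2007/2008 data.
public class DateSplitSketch {

    static class Application {
        LocalDate startDate;
        // ... transformed features would live here
    }

    static void split(List<Application> rows,
                      List<Application> buildSet,
                      List<Application> hillclimbSet) {
        LocalDate buildFrom = LocalDate.of(2007, 1, 1);
        LocalDate buildTo = LocalDate.of(2008, 4, 30);
        for (Application row : rows) {
            if (!row.startDate.isBefore(buildFrom) && !row.startDate.isAfter(buildTo)) {
                buildSet.add(row);
            } else if (row.startDate.isAfter(buildTo)) {
                hillclimbSet.add(row);
            }
        }
    }
}
```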

In summary, the final system for Approach A consists of three main components:

  • Data points from 2007 for training and 2008 for hillclimbing.
  • Ensemble Selection, number of bags: 10, hillclimb iterations = size of the model library.
  • 352 features in total.

Leaderboard AUC: 0.956X, best final test set AUC: 0.961X

From submission 20 to the end of the competition, the following features were added to the Approach A feature set:

  • Number of missing values
  • Number of non-missing values
  • Missing value rate
  • “Contract.Value.Band” transformed to numeric values
  • Average contract value
  • RFCD.CODE mean, sum, max, min and standard deviation per application
  • RFCD.PCT mean, sum, max, min and std per application
  • SEO.CODE mean, sum, max, min and std per application
  • SEO.PCT mean, sum, max, min and std per application
  • Successful.grant mean, sum, max, min and std per application
  • Unsuccessful.grant mean, sum, max, min and std per application
  • Successful.grant mean average per application
  • Successful.grant sum average per application
  • All of the above features for the first three applicants
  • All of the above features for Unsuccessful.grant
  • Success rate of applicant 1, applicant 2 and applicant 3 per application
  • Success rate of all applicants per application
  • Mean, max and std of the success rates of all applicants per application
  • Number of publications mean, sum, max, min and std per application

Apart from the frequency counting described in Approach A, only “row-based” (per-application) statistical features were gradually introduced into my system during the competition, because I thought that, compared with time-based/column-based features, row-based statistical features would reduce the chance of overfitting.
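As a rough illustration of what such a row-based feature looks like, here is a small Java sketch (hypothetical names) that computes the mean/sum/max/min/std of one per-applicant quantity across the people listed on a single application, plus a per-applicant success rate:

```java
// Hypothetical sketch of "row-based" statistics: summary values of one
// per-applicant quantity (e.g. number of successful grants) computed across
// the applicants listed on a single application, never across rows or time.
public class RowStatsSketch {

    // values: the quantity for applicants 1..k on one application row
    // returns {mean, sum, max, min, std}
    static double[] rowStats(double[] values) {
        if (values.length == 0) {
            return new double[5];   // all-zero statistics for an empty row
        }
        double sum = 0, max = Double.NEGATIVE_INFINITY, min = Double.POSITIVE_INFINITY;
        for (double v : values) {
            sum += v;
            if (v > max) max = v;
            if (v < min) min = v;
        }
        double mean = sum / values.length;
        double squaredDiff = 0;
        for (double v : values) {
            squaredDiff += (v - mean) * (v - mean);
        }
        double std = Math.sqrt(squaredDiff / values.length);
        return new double[] {mean, sum, max, min, std};
    }

    // One way to define an applicant's success rate from grant history,
    // guarding against applicants with no previous grants.
    static double successRate(double successful, double unsuccessful) {
        double total = successful + unsuccessful;
        return total > 0 ? successful / total : 0.0;
    }
}
```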

Also, the following algorithms (with different/diverse parameter settings) were gradually added to the model library during the competition:

  • RandomForest
  • RacedIncrementalLogitBoost
  • Bagging with trees
  • ADTree
  • Linear Regression
  • RandomCommittee with Random Trees
  • Dagging
  • J48

Approach B: Rotation Forest with the feature set from Approach A

I tried using only Rotation Forest (http://www.computer.org/portal/web/csdl/doi/10.1109/TPAMI.2006.211) with the following setup:

Base classifier: M5P model tree (the Weka default is J48)

Rotation method: Random Projection with a Gaussian distribution (the Weka default is PCA)
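A rough sketch of this setup against the Weka 3.7-era API is shown below. The file name is hypothetical; since M5P is a regression model tree, this sketch assumes the 0/1 outcome is encoded as a numeric class so the model's output can be ranked for AUC, and selecting the Gaussian distribution is done through the RandomProjection filter's own options, which are not shown here.

```java
import weka.classifiers.meta.RotationForest;
import weka.classifiers.trees.M5P;
import weka.core.Instances;
import weka.core.converters.ConverterUtils.DataSource;
import weka.filters.unsupervised.attribute.RandomProjection;

// Rough sketch of Approach B. RotationForest's defaults (J48 base classifier,
// PCA projection) are replaced by an M5P model tree and a random projection.
public class RotationForestSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical ARFF file holding the 2007/2008 rows with the
        // Approach A feature set and a numeric 0/1 class attribute.
        Instances train = DataSource.read("train_2007_2008.arff");
        train.setClassIndex(train.numAttributes() - 1);

        RotationForest rotationForest = new RotationForest();
        rotationForest.setClassifier(new M5P());                    // base learner: M5P model tree
        rotationForest.setProjectionFilter(new RandomProjection()); // rotate via random projections
        rotationForest.buildClassifier(train);
    }
}
```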

The Rotation Forest classifier was trained on data points from 2007 and 2008 with the feature set from Approach A. Here are the results:

Leaderboard AUC: 0.947X, Final test set AUC: 0.962X

Averaging the two approaches could improve the final test set AUC to 0.963X.

What tools I used

Software/Tools used for modelling and data analysis:

  • Weka 3.7.1 was used for modelling (with my own improved version of the Ensemble Selection algorithm)
  • MATLAB and SAS were used for data visualization and statistical analysis
  • Java was used as the main programming language for this project

Most experiments were run on my home PC: a 6-core AMD machine with 16 GB of RAM running Windows.

Originally published at blog.kaggle.com on February 22, 2011.
