When we build classification models from training data, the proportion of target classes impacts the accuracy of predictions. This experiment measures the extent of that impact.

Suppose you are trying to predict which visitors to your website will buy a product. You collect historical data about each visitor's characteristics and actions, and whether they bought something or not. This is the model-building data set. The "Buy Decision" variable becomes the target variable we are trying to predict. It has two possible values, "yes" and "no". If 70% of the records in the training data set have "no" in them, then the class proportion is 70-30 between "no" and "yes".
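As a quick sketch of how the class proportion is computed, here is a minimal example using pandas on a hypothetical data set (column names and values are illustrative, not the actual data used in the experiment):

```python
import pandas as pd

# Hypothetical model-building data set: one visitor feature plus the target.
df = pd.DataFrame({
    "pages_viewed": [3, 8, 1, 5, 2, 9, 4, 6, 2, 7],
    "buy_decision": ["no", "no", "no", "yes", "no", "yes", "no", "yes", "no", "no"],
})

# Proportion of each class in the target variable.
proportions = df["buy_decision"].value_counts(normalize=True)
print(proportions)  # no: 0.7, yes: 0.3 -- a 70-30 split
```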

If we build a model using this data set, what is the impact of this proportion on the overall accuracy of predictions? Will accuracy be higher with a 50-50 ratio than with 90-10? To test this, we performed multiple iterations of classification on a base data set. For each iteration, we drew a random sample from the base data set with a different proportion of "no" to "yes"; the total number of records remained the same across iterations. We then split the sample into training and testing sets, each retaining the same proportion of class values, built a classification model on the training set, and predicted the test set. For each iteration, we measured the following:

- Overall accuracy
- Accuracy of "No" predictions - how well we predict "No"
- Accuracy of "Yes" predictions - how well we predict "Yes".

The results are shown in this chart. The X-axis shows the percentage of "yes" records in the data for that iteration, and the three lines show the accuracy levels being measured.

The findings are as follows:

1. When the proportion of a specific class is high, its prediction accuracy is also high. Conversely, when the proportion of that class is low, its accuracy is low. This shows that the larger class "biases" the model toward it, since it has more samples in the training data set.

2. The overall accuracy is higher when one class has a much higher proportion than the other, and lower when the classes are in equal proportion. This is because the majority class dominates the overall accuracy computation: its records make up most of both the numerator and the denominator.
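A small arithmetic check makes the skew concrete. Overall accuracy is the proportion-weighted mix of the per-class accuracies, so a large majority class pulls the overall number toward its own accuracy (the per-class figures below are made up for illustration):

```python
# Overall accuracy decomposes by class proportion:
#   overall = p_no * acc_no + p_yes * acc_yes
p_no, p_yes = 0.9, 0.1
acc_no, acc_yes = 0.95, 0.40   # hypothetical per-class accuracies
overall = p_no * acc_no + p_yes * acc_yes
print(overall)  # 0.895 -- dominated by the majority class

# With equal proportions, the same per-class accuracies give a lower overall:
overall_balanced = 0.5 * acc_no + 0.5 * acc_yes
print(overall_balanced)  # 0.675
```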

3. When the proportions are equal, all three accuracy levels converge. Though this level is lower, it may be the desired equilibrium, because the model predicts all classes equally well.

This goes to show that we should be sensitive to target class proportions in the data set. When building models, it is recommended to choose a data set with equal proportions of all classes, so that the model "represents" the characteristics of each class equally.
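One common way to get equal proportions is to undersample the majority class down to the size of the minority class. A minimal sketch with pandas, again on a hypothetical data set (random undersampling is one option among several; it discards majority-class records):

```python
import pandas as pd

# Hypothetical imbalanced model-building data set: 7 "no" vs 3 "yes".
df = pd.DataFrame({
    "pages_viewed": range(10),
    "buy_decision": ["no"] * 7 + ["yes"] * 3,
})

# Sample each class down to the minority class size.
minority_size = df["buy_decision"].value_counts().min()
balanced = pd.concat([
    group.sample(n=minority_size, random_state=1)
    for _, group in df.groupby("buy_decision")
])

print(balanced["buy_decision"].value_counts())  # no: 3, yes: 3
```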