How to Structure Your Machine Learning Projects – Part 4: Model Improvement Methods

Overview

Machine learning algorithms, while powerful, are not perfect, or at least not perfect “right out of the box.” The mathematics behind building machine learning models needs to be adjusted to the nature of your data in order to give the best possible results. There are no hard and fast rules for this tuning process; it frequently comes down to the nature of your algorithm and your data set, although some guidelines can make the process easier. Furthermore, while tuning the parameters of your model is arguably the most important way to improve it, other methods exist and can be critical for certain applications and data sets.

Tuning Your Model

As stated above, there are no hard and fast rules for tuning your model. Which parameters you can adjust will come down to your choice of algorithm and implementation. Generally, though, regardless of your parameters, it’s desirable to start with a wide range of values for your tuning tests, and then narrow your ranges after finding baseline values and performance. It’s also generally desirable, in your first ‘round’ of testing, to vary more than one parameter at once, as interactions between parameters can mean that adjusting one is only effective within a certain range of values for another. The result is that some kind of comprehensive testing is needed, at least at first. If you’re using the scikit-learn library, I highly recommend using its GridSearchCV class to find your baseline values, as it will run all of the combinations of your parameters for you with cross-validation built in (three-fold by default in older releases, five-fold in current ones).
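To make this concrete, here is a minimal sketch of a wide first-round search with GridSearchCV. The estimator (a random forest), the synthetic data set, and the parameter ranges are purely illustrative; substitute your own algorithm, data, and grid.

```python
# A minimal first-round grid search sketch; estimator and ranges are illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=42)

# Start with broad, coarse ranges so interactions between parameters show up.
param_grid = {
    "n_estimators": [50, 200, 500],
    "max_depth": [None, 5, 20],
    "min_samples_leaf": [1, 5, 20],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid,
    cv=3,            # explicit 3-fold; newer scikit-learn defaults to 5
    scoring="f1",    # pick the metric you actually care about
    n_jobs=-1,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

The best parameters and score from this coarse search become the baseline for the finer, one-parameter-at-a-time sweeps described next.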

Once you have your baseline, you can begin testing your parameter values one at a time, and within smaller ranges. In this round of testing, it’s important to pay close attention to your evaluation metrics, as your parameter changes will, hopefully, affect the various performance characteristics of your model. As you adjust, you will find patterns and relationships, and learn more about the way your particular algorithm and data respond to the changes you’re making. The best advice I can give is to test extensively, listen to your metrics, and trust your intuitions.
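One convenient way to run this kind of one-parameter sweep is scikit-learn’s validation_curve. The sketch below assumes a baseline max_depth was found earlier; the narrow range around it and the data set are hypothetical.

```python
# A second-round sweep: vary one parameter over a narrow range around the
# baseline and watch how training and validation scores respond.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import validation_curve

X, y = make_classification(n_samples=500, n_features=20, random_state=42)

depths = [8, 10, 12, 14, 16]  # narrow range centred on the assumed baseline
train_scores, val_scores = validation_curve(
    RandomForestClassifier(n_estimators=200, random_state=42),
    X, y,
    param_name="max_depth",
    param_range=depths,
    cv=3,
    scoring="f1",
)

for d, tr, va in zip(depths, train_scores.mean(axis=1), val_scores.mean(axis=1)):
    print(f"max_depth={d}: train f1={tr:.3f}, validation f1={va:.3f}")
```

A growing gap between the training and validation scores as the parameter increases is the kind of pattern these metrics can reveal.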

The next best advice I can give is: don’t be afraid to start over. It may be that after several rounds of fine-tuning your performance plateaus. Let yourself try new baseline values. The machine learning algorithms out there today are very complex, and may respond to your testing and your data in ways that only become clear after many failures. Don’t let that discourage you, and don’t be nervous about re-treading old ground once you’ve learned a little more from your previous experiments. It may end up being the key to a breakthrough.

Alternate Methods – Boosting

Aside from tuning your model’s parameters, there are many other ways to try to improve your model’s performance. I’m going to focus on just a few here, the first of them being boosting. Boosting, as I use the term here, is the artificial addition of records to your data set, usually to boost the presence of an under-represented or otherwise difficult-to-detect class. (In most of the machine learning literature this technique is called oversampling; “boosting” more commonly refers to ensemble methods such as AdaBoost.) It works by taking existing records of the class in question, copying them or altering them slightly, and adding some desired number of these duplicates to the training data. It is important that this happens only in the training data and never in the testing data when using cross-validation; otherwise the duplicated records leak into your testing set and bias your evaluation.
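Below is a minimal sketch of this record-duplication idea (random oversampling), applied only to the training split so the duplicates never touch the evaluation data. The imbalanced synthetic data set and the amount of duplication are placeholders.

```python
# Duplicate minority-class records in the training split only, then evaluate
# on an untouched test set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=42
)

# Resample minority-class training rows (class 1) to boost their presence.
rng = np.random.default_rng(42)
minority_idx = np.where(y_train == 1)[0]
extra = rng.choice(minority_idx, size=2 * len(minority_idx), replace=True)
X_train_boosted = np.vstack([X_train, X_train[extra]])
y_train_boosted = np.concatenate([y_train, y_train[extra]])

model = RandomForestClassifier(random_state=42).fit(X_train_boosted, y_train_boosted)
print(classification_report(y_test, model.predict(X_test)))  # test set untouched
```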

It is also important to apply boosting only up to a point, as altering the class composition too drastically (for example, by turning what was initially a minority class into the majority class) can throw off the construction of your models as well. Used in moderation, though, it is a very useful tool when handling a difficult-to-detect target class, an imbalanced class distribution, or a decision tree model, which generally works best with a balanced class distribution.

Alternate Methods – Bagging

This method isn’t a way to improve an individual model, per se, but rather the quality of your overall system. Bagging is a technique used with ensemble methods: classification systems that combine a large number of weak classifiers into one powerful classifier. The process described here involves weighting or removing classifiers from your ensemble based on their performance: as you decrease the impact of the poorly performing classifiers (“bagging” them), you increase the overall performance of the whole system. (Strictly speaking, “bagging” in the literature means bootstrap aggregating, where each ensemble member is trained on a random resample of the data; the down-weighting described here is closer to ensemble pruning or weighted voting.) It is not a method that can always be used, but it is an effective tool for getting the absolute most out of an ensemble classification method.
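Here is a rough sketch of the idea described above: score each base classifier with cross-validation and weight the ensemble vote by those scores, so the weakest members contribute the least. The choice of base classifiers and the synthetic data are illustrative only; scikit-learn’s BaggingClassifier is the class to reach for if you want bootstrap aggregating in the textbook sense.

```python
# Weight each ensemble member's vote by its cross-validated accuracy so the
# weakest classifiers have the least impact.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, random_state=42)

members = [
    ("tree", DecisionTreeClassifier(max_depth=3, random_state=42)),
    ("logreg", LogisticRegression(max_iter=1000)),
    ("nb", GaussianNB()),
]

# Each member's mean cross-validated accuracy becomes its voting weight.
weights = [cross_val_score(est, X, y, cv=3).mean() for _, est in members]

ensemble = VotingClassifier(members, voting="soft", weights=weights)
print(cross_val_score(ensemble, X, y, cv=3).mean())
```

Dropping the lowest-scoring members entirely, rather than merely down-weighting them, is the other variant of the same idea.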

Alternate Methods – Advanced Cleaning

As discussed in our pre-processing post, advanced pre-processing methods such as feature selection and outlier removal can also increase the effectiveness of your models. Of these two, feature selection is frequently the easier and more effective tool, although if your data set is particularly noisy, removing outliers may help you more. Rather than reiterate what was said before, I recommend that you read our full post on these techniques, located here.
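As a brief reminder of what those two steps look like in practice, here is a small sketch using univariate feature selection (SelectKBest) and simple outlier removal (IsolationForest). The thresholds (k=10 features, 5% contamination) and the data set are placeholders, not recommendations.

```python
# Keep the strongest features, then drop rows flagged as outliers.
from sklearn.datasets import make_classification
from sklearn.ensemble import IsolationForest
from sklearn.feature_selection import SelectKBest, f_classif

X, y = make_classification(n_samples=500, n_features=25, n_informative=8,
                           random_state=42)

# Keep only the 10 features that score highest against the target.
X_selected = SelectKBest(f_classif, k=10).fit_transform(X, y)

# IsolationForest labels inliers 1 and outliers -1; keep the inliers.
mask = IsolationForest(contamination=0.05, random_state=42).fit_predict(X_selected) == 1
X_clean, y_clean = X_selected[mask], y[mask]
print(X_clean.shape, y_clean.shape)
```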

Conclusion

The process of model improvement is a slow but satisfying one. Proposing, running, and analyzing your experiments takes time and focus, but yields the greatest rewards, as you begin to see your models take shape, improve, and reveal the information hidden in your data.
