How do you evaluate the performance of a Neural Network? Focus on AzureML

I read the Microsoft blog entitled ‘How to evaluate model performance in Azure Machine Learning’. It’s a nice piece of work, and it got me thinking. The post didn’t cover neural network evaluation specifically, so that’s the topic I’ll cover here.

How do you evaluate the performance of a Neural Network? This blog focuses on Neural Networks in AzureML, and on how to interpret the evaluation output that AzureML produces for them.

What are Neural Networks?

Would you like to know how to make predictions from a dataset? Alternatively, would you like to find exceptions, or outliers, that you need to watch out for? Neural networks are used in business to answer exactly these kinds of questions: they can make predictions from a dataset, or find unusual patterns within it. They are best suited to regression or classification business problems.

What are the different types of Neural Networks?

I’m going to credit the Asimov Institute with this amazing diagram:

Neural Network Types

In AzureML, we can review the output from a neural network experiment that we created previously. We can see the results by clicking on the Evaluate Model module and selecting the Visualise option.

Once we click on Visualise, we can see a number of charts, which are described here:

  • Receiver Operating Characteristic (ROC) curve
  • Precision / Recall
  • Lift visualisation

The Receiver Operating Characteristic (ROC) Curve

Here is an example:

ROC Curve

In our example, we can see that the ROC curve reaches well up into the top left-hand corner, which is what we want to see. When we look at the precision and recall curve, we can see that precision and recall are both high, and this leads to a high F1 score. This means that the model is effective in terms of how precisely it classifies the data, and that it covers a good proportion of the cases that it should have classified correctly.
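As an aside, if you want to reproduce this kind of check outside the Evaluate Model module, the same ROC curve and AUC figure can be sketched in a few lines of Python with scikit-learn. This is a minimal sketch, not AzureML’s own implementation; the y_true and y_scores arrays below are illustrative placeholders for your actual labels and your model’s predicted scores.

```python
# A minimal sketch: computing an ROC curve and AUC with scikit-learn.
# 'y_true' (actual labels) and 'y_scores' (predicted probabilities) are
# illustrative placeholders for whatever your trained network outputs.
from sklearn.metrics import roc_curve, roc_auc_score
import matplotlib.pyplot as plt

y_true = [0, 0, 1, 1, 0, 1, 1, 0, 1, 1]
y_scores = [0.1, 0.3, 0.8, 0.7, 0.2, 0.9, 0.6, 0.4, 0.95, 0.85]

fpr, tpr, thresholds = roc_curve(y_true, y_scores)
auc = roc_auc_score(y_true, y_scores)

plt.plot(fpr, tpr, label=f"ROC curve (AUC = {auc:.2f})")
plt.plot([0, 1], [0, 1], linestyle="--", label="Random guess")
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```

The closer the curve hugs the top left-hand corner, and the closer the AUC gets to 1.0, the better the model separates the classes.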

Precision and Recall

Precision and recall are very useful for assessing models in terms of business questions. They offer more detail and insights into the model’s performance. Here is an example:

Precision can be described as the fraction of cases flagged by the model that are classified correctly. It can be considered as a measure of confirmation: it indicates how often the model is correct when it makes a prediction. Recall is a measure of utility: it indicates how much the model finds of all that there is to find within the search space. The two scores combine to make the F1 score. If either precision or recall is small, then the F1 score will be small.
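As a minimal sketch of how these figures relate to one another, here is the same calculation using scikit-learn; y_true and y_pred are illustrative placeholders rather than output from the experiment above.

```python
# A minimal sketch: precision, recall and F1 with scikit-learn.
# 'y_true' and 'y_pred' are illustrative placeholder label arrays.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [0, 0, 1, 1, 0, 1, 1, 0, 1, 1]
y_pred = [0, 1, 1, 1, 0, 1, 0, 0, 1, 1]

precision = precision_score(y_true, y_pred)  # of everything flagged positive, how much was right
recall = recall_score(y_true, y_pred)        # of everything that was positive, how much was found
f1 = f1_score(y_true, y_pred)                # harmonic mean: 2 * (p * r) / (p + r)

print(f"Precision: {precision:.2f}, Recall: {recall:.2f}, F1: {f1:.2f}")
```

Because F1 is a harmonic mean, it is dragged down towards whichever of precision or recall is the weaker of the two.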

Lift Visualisation

A lift chart visually represents the improvement that a model provides when compared against a random guess; this improvement is called the lift score. With a lift chart, you can compare the accuracy of predictions for models that share the same predictable attribute.
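The idea behind the lift score can be sketched in a few lines: rank the cases by the model’s score, then compare the hit rate within the top-scored slice against the overall hit rate, which is what a random guess would achieve on average. The arrays below are illustrative placeholders, not a prescribed AzureML output.

```python
# A minimal sketch of the idea behind a lift chart: compare the hit rate in the
# top-scored fraction of cases against the hit rate of a random guess.
import numpy as np

y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 1])
y_scores = np.array([0.1, 0.3, 0.8, 0.7, 0.2, 0.9, 0.6, 0.4, 0.95, 0.85])

order = np.argsort(y_scores)[::-1]   # rank cases from highest to lowest score
overall_rate = y_true.mean()         # baseline: positive rate of a random guess

for fraction in (0.2, 0.5, 1.0):
    top_n = int(len(y_true) * fraction)
    rate_in_top = y_true[order][:top_n].mean()
    lift = rate_in_top / overall_rate
    print(f"Top {fraction:.0%}: lift = {lift:.2f}")
```

A lift of 1.0 means the model is doing no better than chance; the further above 1.0 the top-scored slices sit, the more useful the model is for targeting.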

Summary

In my next blog, I’ll talk a little about how we can make the Neural Network perform better.

To summarise, we have examined the key metrics used to evaluate a neural network in AzureML. These metrics also apply to models built with other technologies, such as R.

These criteria can help us to evaluate our models, which, in turn, can help us to evaluate our business questions. Understanding the numbers helps to drive the business forward, and visualising them helps to convey their message.

Data Preparation in AzureML – Where and how?

One question that keeps popping up in my customer AzureML projects is ‘How do I conduct data preparation on my data?’ For example, how can we join the data, clean it, and shape it so that it is ready for analytics? Messy data is a problem for every organisation. If you don’t think it is an issue for your organisation, perhaps you haven’t looked hard enough.

To answer the question properly, we need to stand back a little, and see the problem as part of a larger technology canvas. From the enterprise architecture perspective, it is best to do data preparation as close to the source as possible. The reason for this is that the cleaned data can then act as a good, consistent source for other systems, and you only have to do the work once. You have cleaned data that you can re-use, rather than re-do for every place where you need to use it.

Let’s say you have a data source, and you want to expose the data in different technologies, such as Power BI, Excel and Tableau. Many organisations have a ‘cottage industry’ style of enterprise architecture, where they have different departments using different technologies. It is difficult to align data and analytics across the business, since the interpretation of the data may be implemented in a manner that is technology-specific rather than business-focused. If you take a ‘cottage industry’ approach, you would have to repeat your data preparation steps across different technologies.


When we come to AzureML, the data preparation perspective isn’t forgotten, but it isn’t a strong data preparation tool in the way that Paxata or Datameer are, for example. It does democratise data for the masses, and I see the value it brings to businesses, but it is meant for machine learning and data science, so you should expect to use it for those purposes. It is not a standalone data preparation tool, although it does help you part of the way.

The data preparation facilities in AzureML can be found here. If you have to clean up data in AzureML, my futurology ‘dream’ scenario is that Microsoft would offer heavyweight data preparation as a task, like the other tasks in AzureML. You could click on the task and have a roll-your-own data preparation interface pop up in the browser, provided by Microsoft, or perhaps have Paxata or Datameer appear as a service, hosted in Azure as part of your Azure portal services. Then you would go back to AzureML, all in the browser. In the meantime, you are better off following the principle of cleaning the data up as close to the source as possible.

Don’t be downhearted if AzureML isn’t giving you the data preparation that you need. Look back to the underlying data, and see what you can do there. The answer might be as simple as writing a view in SQL Server. AzureML is for machine learning and operationalisation further downstream. If you are having serious data preparation issues, then perhaps you are not ready for the modelling phase of CRISP-DM, so you may want to take some time to think about those issues first.
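As a rough illustration of the ‘clean it close to the source’ principle, here is a minimal pandas sketch of the kind of work a SQL Server view might otherwise do: join, de-duplicate and filter the data once, and save a single cleaned dataset that AzureML (or Power BI, Excel and Tableau) can then consume. The file names and column names are placeholders for your own data, not a prescribed schema.

```python
# A minimal pandas sketch of 'clean once, close to the source'.
# File names and column names below are placeholders, not a prescribed schema.
import pandas as pd

# Load the raw extracts (assumed CSV exports from the source system).
customers = pd.read_csv("customers.csv")
orders = pd.read_csv("orders.csv")

# Join, de-duplicate and clean in one place.
clean = (
    orders.merge(customers, on="customer_id", how="inner")  # join the data
          .drop_duplicates()                                 # remove duplicate rows
          .dropna(subset=["order_value"])                    # drop rows missing key values
)

# Save a single cleaned dataset that every downstream tool can re-use.
clean.to_csv("orders_clean.csv", index=False)
```

Whether the cleaning lives in a SQL view, a pandas script or a dedicated tool, the point is the same: do it once, close to the source, and let every downstream consumer share the result.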