Getting started in Machine Learning: Google vs Databricks vs AzureML vs R

Executive Summary

Machine learning is high on the agenda of cloud providers. From startups to global companies, technology decision makers are watching the achievements of Google and Amazon's Alexa with a view to implementing machine learning in their own organizations. In fact, as you read this article, it is highly likely that you have interacted with machine learning in some way today. Organizations such as Google, Netflix, Amazon, and Microsoft have machine learning as part of their services. Machine learning has become the 'secret sauce' in business and consumer facing spheres, such as online retail, recommendation systems, fraud detection and even digital personal assistants such as Cortana, Siri and Amazon's Echo.

The goal of this paper is to provide the reader with the tools necessary to select wisely among the range of open source, hybrid and proprietary machine learning technologies, so as to meet technical needs and deliver business benefit. The report will offer support for devising a long-term machine learning strategy for existing and future implementations. Here, we compare the following technologies:

  • Google TensorFlow
  • R
  • Databricks
  • AzureML
  • Google Cloud Machine Learning

 

Introduction and Methodology

A major challenge facing most organizations today is the decision whether to go open-source, hybrid, or proprietary with their technology vision and selection process.

Machine learning refers to a set of techniques in which a machine is trained to solve a problem. Machine learning algorithms do not need to be explicitly programmed for every case; instead, they respond flexibly to their environment after intensive training, and broader experience improves their efficiency and capability. Machine learning is proving immensely useful in coping with the sheer speed of results required by the business, alongside more advanced techniques.

The decision on machine learning technology goes beyond a regular technology choice, since it involves a leap of faith that the technology will deliver the promised insights. Machine learning requires creating, collating, refining, modelling, training and evaluating models as an ongoing process. The choice is also determined by how the organization intends to use the technology.

Clearly, organizations see machine learning as a growing asset for the future, and they are adding the capability now. Adoption will increase in tandem with opportunities in related technologies, such as Big Data and cloud, and as open source becomes more trusted in organizations. The underlying data takes the form of clickstreams, logs, sensor data from various machines, images and video. Business teams want answers to deceptively simple business questions whose answers lie in Big Data sources; however, those sources can be difficult to analyze. Using insights from this data, companies across various industries can improve business outcomes.

What opportunities are missed if machine learning is not used? By adopting it, enterprises are looking to improve their business, or even radically transform it. Organizations that are not working towards automation or machine learning in some way are potentially losing ground to competitors; they are also failing to make use of their existing historical data, and of the data they will collect going forward.

In terms of recent developments, Machine Learning has changed to adapt to the new types of structured and unstructured data sources in the industry. It is also being utilized in real-time deployments. Machine Learning has become more popular as organizations are now able to collect more data, including big data sources, cheaply through cloud computing environments.

In this Roadmap, we examine the options and opportunities available to businesses as they move into machine learning, focusing on one of the most important decisions an organization can make: the choice of technology, and whether it should be open source, proprietary, or a hybrid architecture. The Roadmap also introduces a maturity map to investigate how the choice of machine learning technology can be influenced by the organization's overall maturity in delivering machine learning solutions.

Evolution of Machine Learning

Though machine learning has existed for a long time, it is the cloud that has made the technology accessible and usable to businesses of every size. The cloud offers a complete solution with everything that machine learning needs to run: tools, libraries, code, runtimes, models and data storage.

According to Google Trends, the term 'machine learning' has increased in popularity six-fold since July 2012. In response to this interest, established machine learning organizations are leading the way by provisioning their technology through open source. For example, Google open sourced TensorFlow, its set of machine learning libraries, in 2015. Amazon has made its Deep Scalable Sparse Tensor Network Engine (DSSTNE, pronounced 'Destiny') library available on GitHub under the Apache 2.0 license. Technology innovator and industry legend Elon Musk has ventured out with OpenAI, which bills itself as a 'non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.' The technical community has a great deal of machine learning energy: Google recently announced its acquisition of Kaggle, the online data science community, which brings an established community of data scientists and a potential employee pool, as well as one of the largest repositories of datasets for training the next generation of machine learning algorithms.

Evolution of Open Source

Why has open source achieved so much prevalence in machine learning in recent years? Open source has been a part of machine learning right from its inception, but it has gained attention recently due to significant successes, such as AlphaGo, which was produced using the open source Torch software. AlphaGo's victory over the human Go champion, Lee Sedol, wasn't simply an achievement for artificial intelligence; it was also a triumph for open source software.

Evolution of Hybrid and Proprietary Systems

What problems are hybrid and proprietary systems trying to solve? The overarching theme is that proprietary vendors are aiming at the democratization of data, and of artificial intelligence. This is fundamentally changing the machine learning industry, as organizations take machine learning away from academic institutions and put it into the hands of business users to support business decision making.

Proprietary solutions can be deployed quickly, and they are designed to scale, even globally. Machine learning is de-coupled from the on-premises estate and moved to a solution that is easier to manage, administer and cost. Vendors must respond nimbly to these changes as data centers transition towards powerhouses for analytics and machine learning at scale.

In the battle for market share, vendors channel innovation into the cloud layer to ensure that standards in governance and compliance are met, with government bodies in mind. As the threat of cybercrime increases, standards of compliance and governance have become a flashpoint for competition.

How are they distinguished? Open source refers to source code that is freely available on the internet, and is created and maintained by the technical community. A well-known example of open source software is Linux. On the other hand, proprietary software could also be known as closed-source software, which is distributed and sold under strict licensing terms and the associated copyright and intellectual property is owned by a specific organization. Hybrid architectures are based on a mix of open source and proprietary software.

Methodology

For this analysis, we have identified and assessed the relative importance of some key differentiators: the key technologies and market forces that will redefine the sector, and in which vendors will strive to gain an advantage. Technology decision makers can use the Disruption Vector analysis to select the approach that best aligns with their own business requirements.

Here, I assigned a score from 1 to 5 to each company for each vector. The combination of these scores and the relative weighting and importance of the vectors drives each company's index across all differentiators.
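
As a rough illustration of how such an index can be computed, consider the sketch below. The weights and scores here are hypothetical, purely for illustration; the actual weights used in this report are not published.

```python
# Hypothetical disruption-vector weights; illustrative only, not the
# weights actually used in this report.
weights = {
    "ease_of_use": 0.25,
    "core_architecture": 0.20,
    "data_integration": 0.20,
    "monitoring_management": 0.15,
    "sophisticated_modelling": 0.20,
}

# Hypothetical 1-5 scores for a single technology.
scores = {
    "ease_of_use": 4,
    "core_architecture": 4,
    "data_integration": 5,
    "monitoring_management": 5,
    "sophisticated_modelling": 4,
}

# The company index is the weighted sum of the per-vector scores.
index = sum(weights[v] * scores[v] for v in weights)
print(round(index, 2))  # 4.35
```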

Usage Scenarios

Machine learning, regardless of approach, has several common usage scenarios.

Finance

One popular use case of machine learning is finance, a trend perpetuated by more accessible computing power, more accessible machine learning tools, and open source packages dedicated to finance. Machine learning is pervasive in finance, in both business and consumer solutions, in activities such as approving loans, risk management, asset management, and currency forecasting.

The term 'robo-advisor' was unheard of five years ago. Robo-advisors adjust a financial portfolio to the goals and risk tolerance of the consumer, based on factors such as attitude to saving, age, income, and current financial assets, and then adapt the portfolio to reach the user's financial goals.

Bots are also used to provide customer service in the finance industry. For example, they can interact with consumers using natural language processing and speech recognition. The bot capability for language, combined with the robo-advisor capability for predicting factors that meet financial goals, means that the finance world is fundamentally impacted by machine learning from both the customer's and the finance professional's perspective, as millennials fuel the uptake of automated financial planning services.

Healthcare

Why should enterprises care about healthcare? Enterprises have an interest in keeping healthcare costs low as the hard cost of healthcare premiums increases and can even impact the enterprise's ability to invest in itself. Employee sickness affects productivity, so it is in the interests of the enterprise to invest in employee health. U.S. health care costs were $3.2 trillion in 2015, making healthcare one of the country's largest industries, equal to 17.8 percent of US gross domestic product. Rising healthcare issues, such as diabetes and heart disease, are driven by lifestyle factors.

Machine learning is one of the tools being deployed to reduce healthcare costs. As the use of health apps and smartphones increases, there is more data from Internet of Things technology. This is set to drive health virtual assistants, which can take advantage of IoT data, improved natural language processing and sophisticated healthcare algorithms to provide quick patient advice for current ailments, as well as to monitor for potential future issues.

Machine learning can assist healthcare by improving patient outcomes, supporting preventative medicine and predicting diagnoses. In the healthcare industry, it is used for everything from reducing patient harm events and hospital-acquired infections right through to more strategic inputs such as revenue cycle management and patient care management. Machine learning in healthcare specifically focuses on long-term and longitudinal questions, and it helps to evaluate patient care through risk-adjusted comparisons. For the healthcare professional, it helps to have simple data visualizations which convey the message of the data, and to deploy models for repeated use to help patients long-term.

There are open source machine learning offerings which are aimed specifically at healthcare. For example, healthcare.ai is accessible to the thousands of healthcare professionals who do not have data science skills, but would like to use machine learning technology to help patients.

Marketing

Machine learning has a wide range of applications in marketing. These range from techniques for understanding existing and potential customers, such as social media monitoring, search engine optimization and quality evaluation, to opportunities to offer excellent customer service, such as tailored customer offers, personalized recommendations and improved cross-selling and up-selling.

There are a few open source marketing tools which use machine learning, such as the Datumbox Machine Learning Framework. Most machine learning tools aimed at marketing, however, are proprietary, such as IBM Watson Marketing.

Key Differentiators

Technology decisions are crucial in developing a forward-thinking machine learning strategy that ensures success throughout the enterprise.

Well-known organizations have effectively used machine learning as a way of demonstrating prominence and technical excellence through flagship achievements. Enterprises will need a cohesive strategy that facilitates adoption of machine learning right across the organization, from delivering machine learning results to business users through to the technical foundations.

In this section, we discuss the five most important vectors contributing to disruption in the industry, which also correspond to factors that are crucial to consider when adopting machine learning as part of an enterprise strategy. The selected disruption vectors focus on an organization's transition towards a successful enterprise strategy for implementing and deploying machine learning technology.

The differentiators identified are the following:

  • Ease of Use
  • Core Architecture
  • Data Integration
  • Monitoring and Management
  • Sophisticated Modelling

Ease of Use

Technology name | Approach | Commentary | Score
--- | --- | --- | ---
R | Open Source | Data scientists | 1
Databricks | Hybrid | Data scientists, but it has interfaces for the business user | 4
Microsoft AzureML | Hybrid | Business analysts through to data scientists | 5
Google Cloud Machine Learning | Proprietary | Data scientists | 3
Google TensorFlow | Open Source | Data scientists | 3

Generally, Machine Learning technology is still primarily aimed at data scientists to deliver and productize the technology, but there is a recognition that other roles can play a part, such as data engineers, devops engineers and business experts.

In this disruption vector, one clear differentiator is that Microsoft places the non-technical business analyst front and center in AzureML, with a drag-and-drop interface and embeddable R; more complex models can be built in R and uploaded to AzureML. Databricks focuses on a more end-to-end solution, which uses different roles, and different tools, at different points in the workflow; the need for a data scientist is balanced by the provision of tools specifically targeted at the business analyst. Both AzureML and Databricks allow business users to consume machine learning models. Google Cloud Machine Learning Engine, Google TensorFlow and open-source R firmly place the development of machine learning models in the data scientist and developer spheres. As Google TensorFlow and R are both open source, this is to be expected.

Google's Cloud Machine Learning Engine combines the managed infrastructure of Google Cloud Platform with the power and flexibility of open-source TensorFlow. It has a clear roadmap for people who start off in R and open-source TensorFlow, and then port those models into Google Cloud Machine Learning Engine. RStudio, an R IDE, allows TensorFlow to be used with R, so R models can be imported into TensorFlow and then into Google Cloud Machine Learning Engine.

Business users can access their data through a variety of integrations with Google, including Informatica, Tableau, Looker, Qlik, SnapLogic and the Google Analytics 360 Suite. This means that machine learning is embedded in the user's choice of interface for the data.

The risk for enterprises is that the multiple Google integration points introduce multiple points of failure, and considerable complexity in putting the different Google pieces of the jigsaw together; this is further exacerbated by the third-party integrations into the tools that users see. This scenario may change, however, as Google puts machine learning at the heart of its data centers and user-oriented applications. The business user is seeing machine learning appear in everyday activities, even in the creation and development of business documents. Google is aimed firmly at the data scientist and the developer, but it does offer its pre-built machine learning intelligence platform. That said, competition is heating up in this space: Google is bringing machine learning to data visualization so that business users can make better use of their business data, and Microsoft is adding machine learning to Microsoft Word for Windows, which now offers intelligent grammar suggestions.

Core Architecture

Machine learning solutions should be resilient and robust, with a clear separation between storage and compute so that models are portable.

There are variations in the core technology which differentiate open source technologies from the large vendors. Embarrassingly parallel workloads split a technical problem into many parallel tasks with no interdependencies between the tasks or the data. Many scientific and data analysis workloads are parallel in this way, yet R does not naturally handle them; packages such as snow, multicore, RHadoop and RHIPE help R to provision embarrassingly parallel workloads.
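
To make the idea concrete, here is a minimal sketch of an embarrassingly parallel workload, written in Python for brevity; in R, the packages named above play the equivalent role of the process pool.

```python
from multiprocessing import Pool

def score_partition(rows):
    # Each partition is processed independently: no shared state and no
    # communication between tasks, hence "embarrassingly parallel".
    return [r * 2.0 for r in rows]  # stand-in for a real scoring function

if __name__ == "__main__":
    partitions = [[1, 2], [3, 4], [5, 6], [7, 8]]
    with Pool(processes=4) as pool:
        results = pool.map(score_partition, partitions)
    print(results)  # [[2.0, 4.0], [6.0, 8.0], [10.0, 12.0], [14.0, 16.0]]
```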

As an open-source technology, R is not resilient or robust to failure. R works in-memory, and it can only hold the data that fits in memory. Its power comes from open-source packages, but these can over-write one another's functions, which can cause problems at the point of execution that are difficult to troubleshoot without real support.

On the other hand, proprietary cloud-based machine learning solutions offer the separation of storage and compute with ease. Google TensorFlow can use the Graphical Processing Units (GPUs) that Google has in place in its data centres, which Google is now rebranding as AI centres. To mitigate technical debt, both Databricks and Google Cloud Machine Learning Engine have a clear trajectory from the open source technology of TensorFlow towards a full enterprise solution, which provides confidence for widespread enterprise adoption. As a further step, Databricks also allows porting models from Apache Spark ML and Keras, as well as from languages such as R, Python or Java. As a further signal of the importance of the open source path in reducing technical debt, Google has released a neural networking package, Sonnet, as a partner to TensorFlow, to reduce friction in model switching during model development.

Technology name | Approach | Commentary | Score
--- | --- | --- | ---
R | Open Source | Not robust; liable to errors | 1
Databricks | Hybrid | Reduces technical debt by being open to most technologies | 4
Microsoft AzureML | Hybrid | R and Python are embedded; the solution is robust, but not open to TensorFlow or other packages | 4
Google TensorFlow | Open Source | APIs are not yet stable | 2
Google Cloud Machine Learning Engine | Proprietary | Cloud architecture with a clear onboarding process for open source technology | 4

Data Integration

Data is more meaningful when it is put with other data. Here are the ways in which the technologies differentiate:

Technology name | Approach | Commentary | Score
--- | --- | --- | ---
R | Not Open | R is one of the spoke programming languages, but it is not a hub in itself | 1
Databricks | Highly Open | Facilitates SQL, R, Python, Scala and Java, as well as machine learning frameworks/libraries such as scikit-learn, Apache Spark ML, TensorFlow and Keras | 5
Microsoft AzureML | Moderately Open | R and Python are embedded; the solution is robust | 3
Google TensorFlow | Open Source | TensorFlow offers a few APIs, but these are not yet stable; Python is considered stable, but C++, Java and Go are not | 3
Google Cloud Machine Learning Engine | Proprietary | APIs are offered through TensorFlow | 3

To leverage machine learning in the cloud without significant rework, solutions should support straightforward data import into machine learning systems. We need to see improved support for databases, time period definition, referential integrity and other enhancements.

The machine learning models run in IaaS and PaaS environments, consuming the APIs and services exposed by the cloud vendors and producing data output that can be interpreted as results. The cloud environment can prevent portability of workloads, and organizations are concerned about vendor lock-in of cloud platforms for machine learning.

The modelling process itself involves taking a substantial amount of inbound data, analyzing it, and determining the best model. The machine learning solution also needs to be robust and to recover gracefully from failures during processing, such as network failures.

The enterprise transition to the cloud for machine learning should not impact the optimization of the machine learning technology, nor the structures used or created by the machine learning process.

Machine learning solutions should be able to ingest data in different formats, such as JSON, XML, Avro and Parquet. The models themselves should be available in a portable format, such as PMML.
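
As a sketch of what multi-format ingestion looks like in practice, the snippet below uses PySpark (the engine underneath Databricks) to read two of those formats and join them; the file paths and the join column are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("ingest-example").getOrCreate()

# Hypothetical paths; Spark reads JSON and Parquet natively.
events = spark.read.json("/data/events.json")
clicks = spark.read.parquet("/data/clicks.parquet")

# Data becomes more meaningful when joined with other data.
joined = events.join(clicks, on="user_id", how="inner")
joined.show(5)
```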

The range of modelling approaches within data science means that data scientists can approach a modelling problem in many ways. For example, data scientists can use scikit-learn, Apache Spark ML, TensorFlow or Keras. Data Scientists can also choose from a number of different languages: SQL, R, Python, Scala or Java, to name a few.

Of all the packages and frameworks, Databricks scored best in terms of data integration. Depending on their skill set, data scientists can use scikit-learn, Apache Spark ML, TensorFlow or Keras, and they can choose from several languages: R, Python, Scala or Java. AzureML and Google Cloud Machine Learning Engine are more restrictive in their onboarding approach. AzureML integrates with the R and Python languages, and it ingests data from Azure data sources and OData. Google Cloud Machine Learning Engine accepts data serialized in a format that TensorFlow can read, normally CSV. Further, like AzureML, Google requires data to be stored in a location where it can be accessed, such as Google BigQuery or Cloud Storage.

AzureML's dependency on R may be a popular choice, but it carries the risk of inheriting issues from R code that is not of high quality. The sheer volume of R packages is often given as a reason to adopt R. However, volume does not mean quality: some packages are small and rarely developed, yet they remain on the R repository, CRAN, with no real distinction between them and the well-maintained, more robust packages. Google TensorFlow, on the other hand, has a large, well-maintained library, which is now extending to Sonnet.

Monitoring and Management

What you don’t monitor, you can’t manage.

Technology name | Score
--- | ---
R | 1
Databricks | 5
Microsoft AzureML | 5
Google Tensorflow | 2
Google Cloud Machine Learning Engine | 5

Machine learning has a rich set of techniques to develop and optimise every aspect of the workflow, from the onerous task of cleaning the inbound data to deploying the final model to production.

As machine learning becomes embedded in the enterprise data estate, it must also be robust. Machine learning models should not be able to run out of space; instead, machine learning solutions should accommodate the elasticity of cloud-based data storage. Since many queries will span clouds and on-premises systems as business requirements for expanded data sources increase, the machine learning solution needs to keep up with the 'data virtualisation' requirement.

By its nature, machine learning modelling and data processing can be time-consuming. If the machine learning solution suffers extensive delays in functions such as repartitioning data, scaling up or down, or copying large data sources around, this adds unnecessary delay to model processing. From the business perspective, a lengthy machine learning process adversely impacts user adoption, introducing an obstacle to business value. The more intervention the machine learning solution needs to support its operation, the less automated and less elastic it becomes; it also means that the business isn't taking advantage of the key benefits of cloud. Systems that are built for the cloud should dynamically adapt to the business need.

If the machine learning solution is built for the cloud, then there should be an efficient separation of compute and storage, so that each can be managed and costed independently. In many cases, the data is stored continuously, but the actual model creation, processing and usage require only bursts of compute. This requirement is perfect for cloud, allowing customers to dynamically adapt the technology to meet the business need.

R has no ability to monitor itself, and issues are resolved by examining logs and restarting the R processes. There is no obvious way to identify where a process started or stopped processing data, so it is often best simply to start again, which means that time can be lost. Both AzureML and Google Cloud Machine Learning provide user-focused management and monitoring via a browser-based option, with Google providing a command line option too.

Sophisticated Modelling

Data modelling is still a fundamental aspect of handling data. The technologies differ in terms of their levels of sophistication in producing models of data.

Technology name | Approach | Commentary | Score
--- | --- | --- | ---
R | Open Source | Liable to errors | 1
Databricks | Hybrid | | 5
Microsoft AzureML | Hybrid | R and Python are embedded; the solution is robust | 4
Google TensorFlow | Open Source | | 5
Google Cloud Machine Learning Engine | Proprietary | |

Google Cloud Machine Learning Engine allows users to create models with TensorFlow, which are then onboarded to produce cloud-based models. TensorFlow can be used via its Python or C++ APIs, while its core functionality is provided by a C++ backend. The TensorFlow library comes with a range of in-built operations, such as matrix multiplications, convolutions, pooling and activation functions, loss functions and optimizers. Once a graph of computations has been defined, TensorFlow executes it efficiently across different platforms.
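
As a hedged sketch of this define-then-run style, using the TensorFlow 1.x Python API of the period:

```python
import tensorflow as tf  # TensorFlow 1.x graph-style API

# Define a graph: a matrix multiplication followed by a ReLU activation,
# two of the in-built operations mentioned above. Nothing runs yet.
x = tf.placeholder(tf.float32, shape=[None, 3], name="x")
W = tf.Variable(tf.random_normal([3, 2]), name="W")
y = tf.nn.relu(tf.matmul(x, W))

# The session executes the graph on whatever device is available (CPU or GPU).
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(y, feed_dict={x: [[1.0, 2.0, 3.0]]}))
```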

The TensorFlow modelling facility is flexible, and it is aimed at the deep learning community. TensorFlow is written in the well-known languages Python and C++, so there is a fast ramp-up of skill sets towards a high level of model sophistication. TensorFlow allows data scientists to roll their own models, but they will need a deep understanding of machine learning and optimization to be effective in generating them.

Azure

Azure ML Studio is a framework for developing machine learning and data science applications. It has an easy-to-use graphical interface that allows you to develop machine learning apps quickly, and it saves a lot of time by simplifying tasks such as data cleaning, feature engineering and testing different ML algorithms. It also allows you to add scripts in Python and R, and it includes deep learning.

Further, AzureML comes ready-prepared with some pre-baked machine learning models to help the business analyst on their way. AzureML offers in-built models, but it is not always clear how the models arrived at their results. As the data visualization capability grows in Power BI and in AzureML Studio, this is a challenge that will need to be handled in the future.

Company Analysis

In the recent history of the industry, Sophos acquired Invincea, Radware bought Seculert and Hewlett Packard bought Niara.

R

R is possibly the most well-known open source data science tool. It was developed in academia, and it has had widespread adoption throughout the academic and business communities. R has a range of machine learning packages, downloadable from the CRAN repository. These include MICE for imputing missing values, rpart and caret for classification and regression models, and party for partitioning models. It is completely open source, and it forms part of the other offerings discussed in this company analysis.

When evaluating R, decision makers should ask: when did R start? Who are the R customers? Is it a leader in terms of installations? What is the average size of its customers? Are there enterprise-scale customers? What are customers' concerns and benefits? Are companies just flirting with R because it's the buzzword of the moment?

Google

Google appeals to enterprises because it addresses many different enterprise technology needs, from infrastructure consolidation and data storage right through to business user-focused solutions in Google office technologies. As an added bonus, enterprise customers can leverage Google's own machine learning technology, which underpins functionality such as Google Photos.

TensorFlow is open source, and it can be used on its own. TensorFlow also appears in Google Cloud Machine Learning, which is a paid solution. Google also has a paid offering that chains together cloud APIs with machine learning and unstructured data sources such as video, images and speech.

Google's TensorFlow has packages aimed at verticals, too, such as finance and cybercrime, and it is also used for social good projects. For example, Google made headlines recently when a self-taught teenager used TensorFlow to help diagnose breast cancer.

Comparison

Now that Databricks is available in Azure, it is well worth a look for streamlined data science and machine learning at scale. It will appeal more to coders, but that brings the benefit of customization. AzureML is a great place to get started, and many Business Intelligence professionals are moving into data science via this route.

Open source tools such as R and Google's TensorFlow are not enterprise tools. They are missing key features, such as the ability to connect with a wide range of APIs, and they lack enterprise features such as security, management, monitoring and optimization. It is projected that TensorFlow will gain more of these enterprise features in the future. Also, organisations do not always want open source used in their production systems.

Despite the emphasis on proprietary solutions, both IBM Watson and Microsoft offer free tiers, limited by size. In this way, both organizations compete with the free, open source offerings while adding the bonus of robust cloud infrastructure to support the efforts of data scientists. Databricks offers a free trial, which converts into a paid offering. The Databricks technology is based on Amazon Web Services, and it distinguishes between data engineering and data analytics workloads. The distinction may not always be clear to business people, however, as it is couched in terms of UIs, APIs and notebooks, terms which will be more meaningful to technical people.

Future Considerations

In the future, there will need to be more reference architectures for common scenarios. In addition, best practice design patterns will become more commonplace as the industry grows in experience.

How do containers impact machine learning? Kubernetes is an open source container cluster orchestration system. Containers allow for the automation of deployment, scaling and operations, and it's possible to envisage machine learning manifested in container clusters. Containers are built for a multi-cloud world: public, private and hybrid. Machine learning is going through a data renaissance, and there may be more to come. From this perspective, the winners will be the organizations that are most open to changes in data, platforms and other systems, but this does not necessarily mean open source. Enterprises will be concerned about issues which are core to other software operations, such as robustness, mitigation of risk and other enterprise dependencies.

Conclusion

As this paper has shown, machine learning is high on the agenda of cloud providers, and it has become the 'secret sauce' in business and consumer facing spheres, from online retail, recommendation systems and fraud detection through to digital personal assistants such as Cortana, Siri and Amazon's Echo. It is highly likely that you have interacted with machine learning in some way today.

From the start of this century, machine learning has grown from belonging to academia and the large organizations that could afford it, to a wide range of options from well-known and trusted vendors, aimed at small and large organizations alike. The vendors are responding to the driving forces behind the increasing demand for machine learning: organizations are inspired by the promise that machine learning offers, by the accessibility afforded by cloud computing, open source machine learning and big data technologies, and by the perceived low cost and ready availability of skills.

In this paper, we have aimed to provide the reader with the tools necessary to select wisely among the range of open source, hybrid and proprietary machine learning technologies to meet their technical needs and deliver business benefit. The report has offered some insights into how the software vendors stack up against one another.

How do you evaluate the performance of a Neural Network? Focus on AzureML

I read the Microsoft blog entitled 'How to evaluate model performance in Azure Machine Learning'. It's a nice piece of work, and it got me thinking. The blog post didn't cover neural network evaluation, so that topic is covered here.

How do you evaluate the performance of a neural network? This blog focuses on neural networks in AzureML, to help you understand what the evaluation results mean.

What are Neural Networks?

Would you like to know how to make predictions from a dataset? Alternatively, would you like to find exceptions, or outliers, that you need to watch out for? Neural networks are used in business to answer exactly these questions: they make predictions from a dataset, and they find unusual patterns. They are best suited to regression and classification business problems.
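
Outside AzureML, the same idea can be sketched in a few lines with scikit-learn's neural network classifier; the synthetic dataset here simply stands in for a real business dataset.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic data standing in for a real business dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small feed-forward neural network for a classification problem.
clf = MLPClassifier(hidden_layer_sizes=(20,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("Test accuracy:", clf.score(X_test, y_test))
```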

What are the different types of Neural Networks?

I’m going to credit the Asimov Institute with this amazing diagram:

[Figure: Neural Network Types, by the Asimov Institute]

In AzureML, we can review the output from a neural network experiment that we created previously. We can see the results by clicking on the Evaluate Model task and then clicking on the Visualize option.

Once we click on Visualize, we see a number of charts, which are described here:

  • Receiver Operating Characteristic (ROC) curve
  • Precision/Recall
  • Lift visualization

The Receiver Operating Characteristic (ROC) Curve

Here is an example:

[Figure: ROC curve]

In our example, we can see that the curve reaches well up into the top left-hand corner of the ROC chart. When we look at the precision and recall curve, we can see that precision and recall are both high, which leads to a high F1 score. This means that the model is precise in how it classifies the data, and that it covers a good proportion of the cases it should have classified correctly.
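
For readers who want to reproduce the curve outside AzureML, here is a minimal scikit-learn sketch with made-up labels and scores; a curve hugging the top left-hand corner corresponds to an AUC near 1.

```python
from sklearn.metrics import roc_curve, roc_auc_score

# Made-up labels and model scores, standing in for a scored dataset.
y_true = [0, 0, 1, 1, 0, 1, 1, 0, 1, 1]
y_score = [0.1, 0.3, 0.8, 0.7, 0.2, 0.9, 0.6, 0.4, 0.85, 0.75]

fpr, tpr, thresholds = roc_curve(y_true, y_score)
print("AUC:", roc_auc_score(y_true, y_score))  # 1.0 for this toy data
```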

Precision and Recall

Precision and recall are very useful for assessing models in terms of business questions. They offer more detail and insights into the model’s performance. Here is an example:

[Figure: Precision and recall]

Precision can be described as the fraction of cases that the model classifies correctly out of those it flags. It can be considered a measure of confirmation: it indicates how often the model is correct when it makes a prediction. Recall is a measure of utility: it indicates how much the model finds of all that there is to find within the search space. The two scores combine to make the F1 score. If either precision or recall is small, then the F1 score will be small.
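
F1 is the harmonic mean of the two: F1 = 2PR / (P + R). A small worked example, using scikit-learn rather than AzureML, with made-up labels:

```python
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 1]  # actual labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 1]  # model predictions

# Precision: of everything the model flagged, how much was correct?
# Recall: of everything it should have found, how much did it find?
p = precision_score(y_true, y_pred)  # 4 TP / (4 TP + 1 FP) = 0.80
r = recall_score(y_true, y_pred)     # 4 TP / (4 TP + 1 FN) = 0.80
f1 = f1_score(y_true, y_pred)        # 2*p*r / (p + r)      = 0.80
print(p, r, f1)
```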

Lift Visualisation

A lift chart visually represents the improvement that a model provides when compared against a random guess; this improvement is called the lift score. With a lift chart, you can compare the accuracy of predictions for models that have the same predictable attribute.

[Figure: Lift visualization]
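
A minimal sketch of a lift calculation, again with illustrative data rather than an AzureML API: take the model's top-scored cases and compare their positive rate with the overall base rate.

```python
import numpy as np

# Illustrative labels and scores only.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 0, 0])
y_score = np.array([0.9, 0.2, 0.8, 0.7, 0.4, 0.3, 0.85, 0.1, 0.5, 0.6])

k = max(1, int(0.2 * len(y_true)))           # size of the top 20%
top_k = np.argsort(y_score)[::-1][:k]        # indices of highest scores
lift = y_true[top_k].mean() / y_true.mean()  # model rate vs. base rate
print("Lift at top 20%:", lift)              # 1.0 / 0.4 = 2.5
```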

Summary

To summarise, we have examined various key metrics for evaluating a neural network in AzureML. These scores also apply to other technologies, such as R.

In my next blog, I'll talk a little about how we can make the neural network perform better.

These criteria can help us to evaluate our models, which, in turn, helps us to answer our business questions. Understanding the numbers helps to drive the business forward, and visualizing these numbers helps to convey their message.

Data Preparation in AzureML – Where and how?

One question that keeps popping up in my customer AzureML projects is 'How do I conduct data preparation on my data?' For example, how can we join the data, clean it, and shape it so that it is ready for analytics? Messy data is a problem for every organisation. If you don't think it is an issue for your organisation, perhaps you haven't looked hard enough.

To answer the question properly, we need to stand back a little and see the problem as part of a larger technology canvas. From the enterprise architecture perspective, it is best to do data preparation as close to the source as possible. The reason is that the cleaned data then acts as a good, consistent source for other systems, and you only have to do the work once: you have cleaned data that you can re-use, rather than re-do for every place where you need to use it.

Let's say you have a data source, and you want to expose the data in different technologies, such as Power BI, Excel and Tableau. Many organisations have a 'cottage industry' style of enterprise architecture, where different departments use different technologies. This makes it difficult to align data and analytics across the business, since the interpretation of the data may be implemented in a manner that is technology-specific rather than business-focused. If you take a 'cottage industry' approach, you have to repeat your data preparation steps across the different technologies.

When we come to AzureML, the data preparation perspective isn't forgotten, but AzureML isn't a strong data preparation tool like Paxata or Datameer, for example. Yes, it brings the democratization of data science to the masses, and I see the value that offers to businesses. But it is meant for machine learning and data science, so you should expect to use it for those purposes. It is not a standalone data preparation tool, although it does take you partway.

AzureML does offer some data preparation facilities. If you have to clean up data in AzureML, my futurology 'dream' scenario is that Microsoft offer weighty data preparation as a task, like other tasks in AzureML. You could click on the task and have roll-your-own data preparation pop up in the browser (all browser based), provided by Microsoft, or perhaps have Paxata or Datameer pop out as a service, hosted in Azure as part of your Azure portal services. Then you would go back to AzureML, all in the browser. In the meantime, you are better off following the principle of cleaning the data as close to the source as possible.

Don't be downhearted if AzureML isn't giving you the data preparation that you need. Look back at the underlying data, and see what you can do; the answer might be as simple as writing a view in SQL Server. AzureML is for operations and machine learning further downstream. If you are having serious data preparation issues, then perhaps you are not ready for the modelling phase of CRISP-DM, so you may want to take some time to think about those issues.