Being a Microsoft Regional Director: faith, trust and pixie dust for good

I’m still learning about being a Microsoft Regional Director and I’m figuring things out. I’d like to thank Microsoft here for this opportunity, and I’d like to thank the great RD team at Microsoft for their seemingly endless patience with my questions!

Here is my opinion. I don’t represent anyone other than myself, and this is in no way official. I am extremely honored to be a Regional Director and an MVP, and I think that the RD role is worth exploring further. This is just an opinion; I’m still learning about the RD role since I am new to it. I might add that I’m still figuring out being an MVP as well. Actually, I’m still wondering what job I want to do when I grow up!

Let’s take an example. Recently, an email popped up in my mailbox from a senior executive and decision maker, who asked for a hiring strategy for Azure team members and a commentary about POs for Azure, including Power BI. So I made a huge impact at that customer site, which was a large organization and a big ship to steer around. In fact, it takes faith, hope and a little dash of pixie dust, as well as joining hands with the team, to make the jump in digital transformation: people, processes and technology. And then I rinse and repeat at other organizations, so that everyone takes that leap of faith in the right direction.

Recently I was on the BBC, talking about a different client where I am helping with a data science for good project, which focuses on homelessness and other aspects of social care. I’ve put the video here, in case you’re interested:

You probably don’t think that one-woman-band projects mean much, but they do. In fact, it’s huge. I have been working with the first client for months, on and off, combining my time with other customers in an ad-hoc way. I am convinced that Azure is the right solution, and the role was born out of the roadmap to Azure that I had worked with them to produce as part of a larger strategy piece; it’s just the first role, and more will be added later.

For the second customer case, the work we are doing, using Microsoft technologies, is going to have a good impact on people’s lives. The data overrides your perceptions. When we think of homeless people, we think of the tramps on the street, right? Wrong. What about victims of domestic violence, who become victims of unexpected homelessness because they are in fear for their lives? What about their children? That’s how hard people have it in their lives, and in the tech world, we are so blessed, often. What are we complaining about, really?

I don’t think that the RD role or the MVP role are sales roles at all. I don’t benefit financially from these recommendations. I am entirely independent and, if I recommend a solution, it’s because I believe that it is the correct solution.

So I think an RD is partially about having that strategic impact that Microsoft can really see and feel, in a good way. There is nothing to tie me to the purchase of Azure at all: I don’t sell Azure, and I made nothing from the sale. I’m an independent consultant, so I get paid for my time, not the fruits of my recommendations. But people will feel the results; the new hires, for example.

So I think an RD is partially about having that strategic impact that other people can really see and feel, in a good way. In these digital transformation pieces, I’m making people’s work easier for them through better processes, great technology, and mentoring, supporting and helping people. For the work I’m doing in data science for good, I’m using Microsoft Data Science technologies as part of an amazing, amazing team who are doing great things and making people’s lives better. I think that is it, really: using your pixie dust to do good things. It’s not about ‘bigger is better’ – bigger business, more GitHub contributions, higher turnover, a larger number of hires, a bigger number of Azure VMs, more forum answers or a bigger profile on Stack Overflow. I think it’s about having the same pixie dust as anyone else, but throwing it liberally on the right things.

Rule your mind or it will rule you – Buddha

I think it’s about personal growth. It’s also about striving to have a maturity of outlook and a cool head, and I am trying very hard to heal and be the clean person I’d like to be. I’m doing my MBA and it’s all about personal growth and development. It’s unlike any other course I’ve done, since it means I get really hard feedback about myself as a person as well as my work. Some of the feedback is great, and other feedback is uncomfortable and provokes cognitive dissonance, but the self-honesty means that I can work on it through reflexive and reflective leadership techniques. For example, I’ve written before about having Imposter Syndrome but now I am learning to watch my thoughts better (mindfulness and my Buddhist journey) which means I’m starting to understand better if it really is Imposter Syndrome, or perhaps it’s a reality check, or perhaps I am just being silly? I have grown so much in the past few years and my Buddhist journey tells me that I have a long way to go.

When others go low, you go high

Kirk D Borne, who is an immensely insightful gentleman, asked me a deceptively simple question: what does this actually mean for you? I’d like to thank Kirk here, because his generous and insightful question provoked me to think for days. I love it when someone challenges me with a wise question, and one that I hadn’t considered before, which was kind of the point! I’ve decided on an answer: what being an RD means for me is the opportunity to network, learn and share with people who are brilliant, mature, optimistic, knowledgeable, willing to share freely with no reward in it, who know when to speak and when to stay silent, and who are experienced in business and in the tech sphere. I’m with a great set of people whom I admire.


Accountability is a very tough thing to learn and it’s something that I ask myself every day: who is accountable? Professionally or personally, you can’t shrug off personal accountability. To lead by example, you have to be accountable, which means that people can have faith and trust in you.

It’s about people you can have faith and trust in, and striving to be that person. The RD program inspires me to work towards being all of these things and to consider accountability.

It also means that I am working to make sure that nobody steals my pixie dust. Michelle Obama inspires me here: when others go low, I go high. Words to live by!

Don’t let anyone steal your Pixie Dust

Following on from accountability, it’s about being an authentic you and striving to be a better you. On my office wall, I have a picture of Tinkerbell.


My onboarding to the RD Program has been incredible and people outside and inside of Microsoft have been amazing. So I’d like to thank everyone who has congratulated me and I can promise that I will do my best.

Modelling your Data in Azure Data Lake

One of my project roles at the moment (I have a few!) is that I am architecting a major Azure implementation for a global brand. I’m also helping with the longer-term ‘vision’ of how that might shape up. I love this part of my job and I’m living my best life doing this piece; I love seeing a project take shape until the end users, whether they are business people or more strategic C-level, get the benefit of the data. At Data Relish, I make your data work for different roles in organizations of every purse and every purpose, and I learn a lot from the variety of consulting pieces that I deliver.

If you’ve had even the slightest look at the Azure Portal, you will know that it has oodles of products that you can use in order to create an end-to-end solution. I selected Azure Data Lake for a number of reasons:

  • I have my eye on the Data Science ‘prize’ of doing advanced analytics later on, probably in Azure Databricks as well as Azure Data Lake. I want to make use of existing Apache Spark skills and Azure Data Lake is a neat solution that will facilitate this option.
  • I need a source that will cater for the shape of the data… or the lack of it…
  • I need a location where the data can be accessed globally since it will be ingesting data from global locations.

In terms of tooling, there are always the Azure Data Lake tools for Visual Studio; you can watch a video on this topic here. But how do you get started with the design approach, and how do I go about designing solutions for Azure Data Lake? There are many different approaches, and I have been implementing Kimball methodologies for years.


With this particular situation, I will be using the Data Vault methodology. I know that there are different schools of thought, but I’ve learned from Dan Lindstedt in particular, who has been very generous in sharing his expertise; you can find Dan’s website here. I have previously delivered this methodology for an organization with billions of USD in turnover, and they are still using the system that I put in place; it was a particularly helpful approach for an acquisition scenario, for example.


Building a Data Vault starts with the modelling process, and this starts with a view of the existing data model of a transactional source system. The purpose of the data vault modelling lifecycle is to deliver solutions to the business faster, at lower cost and with less risk, that also have a clear, supported afterlife once I’ve moved on to another project for another customer.


Data Vault is a database modelling technique where the data is considered to belong to one of three entity types: hubs, links and satellites:

  • Hubs contain the key attributes of business entities (such as geography, products, and customers).
  • Links define the relations between the hubs (for example, customer orders or product categories).
  • Satellites contain all other attributes related to hubs or links. Satellites include all attribute change history.


The result is an Entity Relationship Diagram (ERD), which consists of Hubs, Links and Satellites. Once I’d settled on this methodology, I needed to hunt around for something to use.
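As a sketch of how the three entity types fit together, here is a minimal, hypothetical customer/order model expressed as SQL DDL and run through Python’s sqlite3 for portability. The table and column names are illustrative only, not from the client project; a real Data Vault would follow your own naming standards.

```python
import sqlite3

# Minimal Data Vault sketch for a hypothetical customer/order source:
# hubs hold business keys, links hold relationships between hubs, and
# satellites hold descriptive attributes plus load metadata for history.
ddl = """
CREATE TABLE hub_customer (
    customer_hk   TEXT PRIMARY KEY,      -- hash key of the business key
    customer_bk   TEXT NOT NULL UNIQUE,  -- business key from the source
    load_date     TEXT NOT NULL,
    record_source TEXT NOT NULL
);
CREATE TABLE hub_order (
    order_hk      TEXT PRIMARY KEY,
    order_bk      TEXT NOT NULL UNIQUE,
    load_date     TEXT NOT NULL,
    record_source TEXT NOT NULL
);
CREATE TABLE link_customer_order (
    customer_order_hk TEXT PRIMARY KEY,
    customer_hk       TEXT NOT NULL REFERENCES hub_customer (customer_hk),
    order_hk          TEXT NOT NULL REFERENCES hub_order (order_hk),
    load_date         TEXT NOT NULL,
    record_source     TEXT NOT NULL
);
CREATE TABLE sat_customer_details (
    customer_hk   TEXT NOT NULL REFERENCES hub_customer (customer_hk),
    load_date     TEXT NOT NULL,   -- each load adds a new row: full history
    name          TEXT,
    country       TEXT,
    record_source TEXT NOT NULL,
    PRIMARY KEY (customer_hk, load_date)
);
"""

conn = sqlite3.connect(":memory:")
conn.executescript(ddl)
tables = sorted(
    row[0] for row in conn.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'"
    )
)
print(tables)
```

Note how change history lives entirely in the satellite: the composite primary key of hash key plus load date means a changed customer attribute becomes a new row rather than an update.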

How do you go about designing and using an ERD tool for a Data Vault? I found a few options. For the enterprise, there is WhereScape® Data Vault Express. That looked like a good option, but I had hoped to use something open source so that other people could adopt it across the team. It wasn’t clear how much it would cost and, in general, if I have to ask then I can’t afford it! So far, I’ve settled on SQL Power Architect so that I can get the ‘visuals’ across to the customer and the other technical team, including my technical counterpart at the customer, who picks up when I’m at a conference. This week I’m at the Data and BI Summit in Dublin, so my counterpart is picking up activities during the day, and we are touching base during our virtual stand-ups.

So, I’m still joining dots as I go along.

If you’re interested in getting started with Azure Data Lake, I hope that this gives you some pointers on the design process.

I’ll go into more detail in future blogs, but I need to stop writing this blog and do some work!

Cloud computing as a leveler and an enabler for Diversity and Inclusion

I had the honour and pleasure of meeting a young person with autism recently who is interested in learning about Azure and wanted some advice on extending his knowledge.
It was a great reminder that we can’t always see people who have conditions such as autism. It also extends to disability, particularly those that you can’t see; examples include epilepsy or even Chronic Fatigue Syndrome.

Diversity gives us the opportunity to become more thoughtful, empathetic human beings.



I love cloud because it’s a great leveler for people who want to step into technology. It means that these personal quirks, or differences, or ranges of abilities can be sidestepped since we don’t need to all fit the brogrammer model in order to be great at cloud computing. Since we can do so many things remotely, it means that people can have flexibility to work in ways that suit them.

In my career, I couldn’t lift a piece of Cisco kit to rack it, because I was not strong enough. With cloud, it’s not a problem. The literally heavy lift-and-shift is already done. It really comes down to a willingness to learn and practice. I can also learn in a way that suits me, and that was the main topic of conversation with the autistic youth that I had the pleasure to meet.

I believe that people should be given a chance. Diversity gives us the opportunity to become more thoughtful, empathetic human beings. In this world, there is nothing wrong with wanting more of that humanness.

Getting started in Machine Learning: Google vs Databricks vs AzureML vs R

Executive Summary

Machine learning is high on the agenda of cloud providers. From startups to global companies, technology decision makers are watching the achievements of Google and Amazon’s Alexa with a view to implementing Machine Learning in their own organizations. In fact, as you read this article, it is highly likely that you have interacted with Machine Learning in some way today. Organizations such as Google, Netflix, Amazon, and Microsoft have Machine Learning as part of their services. Machine Learning has become the ‘secret sauce’ in business and consumer-facing spheres, such as online retail, recommendation systems, fraud detection and even digital personal assistants such as Cortana, Siri and Amazon’s Echo.

The goal of this paper is to provide the reader with the tools necessary to select wisely between the range of open source, hybrid and proprietary machine learning technologies to meet the technical needs for providing business benefit. The report will offer support for devising a long-term strategy for machine learning for existing and future implementations. Here, we compare the following technologies:

  • Google Tensorflow
  • R
  • Databricks
  • AzureML
  • Google Cloud Machine Learning


Introduction and Methodology

A major challenge facing most organizations today is the decision whether to go open-source, hybrid, or proprietary with their technology vision and selection process.

Machine learning refers to a series of techniques where a machine is trained to solve a problem. Machine Learning algorithms often do not need to be explicitly programmed; they respond flexibly to the environment after receiving intensive training. Broader experience improves the efficiency and the capabilities of machine learning algorithms. Machine Learning is proving immensely useful in coping with the sheer speed of results required by the business, along with more advanced techniques.

The decision on machine learning technology goes beyond regular technology choice, since it involves a leap of faith that the technology will offer the promised insights. Machine Learning requires creating, collating, refining, modelling, training and evaluating models as an ongoing process. It is also determined by how organizations intend to use machine learning technology.
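That ongoing train-and-evaluate loop can be sketched in miniature. The example below is a deliberately toy illustration (a one-dimensional threshold classifier on synthetic data, not any vendor’s API): fit a model, then re-evaluate it on fresh data, and repeat as new data arrives.

```python
import random

# Toy sketch of the ongoing create -> train -> evaluate loop.
random.seed(0)

def make_data(n):
    # Synthetic 1-D data: the true rule is "label 1 when x > 0.5".
    return [(x, 1 if x > 0.5 else 0) for x in (random.random() for _ in range(n))]

def evaluate(threshold, data):
    # Accuracy of the rule "predict 1 when x > threshold".
    correct = sum((x > threshold) == bool(y) for x, y in data)
    return correct / len(data)

def train(data):
    # Pick the candidate threshold that maximizes training accuracy.
    candidates = [i / 100 for i in range(101)]
    return max(candidates, key=lambda t: evaluate(t, data))

model = train(make_data(200))          # train on one batch
accuracy = evaluate(model, make_data(100))  # evaluate on fresh data
print(round(accuracy, 2))
```

In a real deployment the same cycle continues: new data comes in, the model is retrained, and the evaluation score decides whether the new model replaces the old one.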

Clearly, organizations see Machine Learning as a growing asset for the future, and they are adding the capability now. Machine Learning will increase in adoption in tandem with other opportunities in related technologies, such as Big Data and cloud technologies, and as open source becomes more trusted in organizations. This data takes the form of clickstreams and logs, sensor data from various machines, and images and videos. Business teams will want answers to deceptively simple business questions, where the answer lies in Big Data sources. However, these data sources can be difficult to analyze. Using insights from this data, companies across various industries can improve business outcomes.

What opportunities are missed if it is not used? By adopting ML, enterprises are looking to improve their business, or even radically transform it. Organizations are potentially losing ground against competitors, if they are not working towards automation or machine learning in some way. They are also not making use of their existing historical data, and their data going forward.

In terms of recent developments, Machine Learning has changed to adapt to the new types of structured and unstructured data sources in the industry. It is also being utilized in real-time deployments. Machine Learning has become more popular as organizations are now able to collect more data, including big data sources, cheaply through cloud computing environments.

In this Roadmap, we will examine the options and opportunities available to businesses as they move into Machine Learning, with a focus on whether organizations should use open source, proprietary or hybrid solutions. The Roadmap focuses on one of the most important decisions an organization can make: the choice of technology, and whether this should be open source, proprietary, or a hybrid architecture. The Roadmap also introduces a maturity map to investigate how the choice of machine learning technology can be influenced by the maturity of the organization in delivering machine learning solutions overall.

Evolution of Machine Learning

Though machine learning has existed for a long time, it is the cloud that has made the technology more accessible and usable to businesses of every size. The cloud offers a complete data storage solution with everything that machine learning needs to run, such as tools, libraries, code, runtimes, models and data.

According to Google Trends, the term Machine Learning has increased in popularity six-fold since July 2012.  In response to this interest, established Machine Learning organizations are leading the way by provisioning their technology through open source. For example, Google has TensorFlow, the open source set of machine learning libraries that Google open sourced in 2015.  Amazon has made its Deep Scalable Sparse Tensor Network Engine (DSSTNE – pronounced ‘Destiny’) library available on GitHub under the Apache 2.0 license. Technology innovator and industry legend Elon Musk has ventured out with OpenAI, which bills itself as a ‘non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.’ The technical community has a great deal of Machine Learning energy, evidenced by the fact that Google recently announced its acquisition of online data scientist community Kaggle, which has an established community of data scientists and potential employee pool, as well as one of the largest repositories of datasets that will help train the next generation of machine-learning algorithms.

Evolution of Open Source

Why has Open Source achieved so much prevalence in Machine Learning in recent years? Open Source has been a part of Machine Learning right from its inception, but it has gained attention in recent years due to significant successes. For example, AlphaGo was produced using the Torch software. AlphaGo’s victory over the human Go champion, Lee Sedol, wasn’t simply an achievement for artificial intelligence; it was also a triumph for Open Source software.

Evolution of Hybrid and Proprietary Systems

What problems are hybrid and proprietary systems trying to solve? The overarching theme is that proprietary organizations are aiming themselves at the democratization of data, or the democratization of artificial intelligence. This is fundamentally changing the Machine Learning industry, as organizations are taking Machine Learning away from academic institutions and into the hands of business users to support business decision making.

Proprietary solutions can be deployed quickly, and they are designed to be scalable and work at global scale. Machine Learning is de-coupled from the on-premises solution to a solution that can be easy to manage, administer and cost. Vendors must respond nimbly to these changes as data centers make the transition towards powerhouses for analytics and machine learning at scale.

In the battle for market share, innovation is expended at the cloud level to ensure that standards in governance and compliance are met, with government bodies in mind. As the threat of cybercrime increases, standards of compliance and governance have become a flashpoint for competition.

How are they distinguished? Open source refers to source code that is freely available on the internet, and is created and maintained by the technical community. A well-known example of open source software is Linux. On the other hand, proprietary software could also be known as closed-source software, which is distributed and sold under strict licensing terms and the associated copyright and intellectual property is owned by a specific organization. Hybrid architectures are based on a mix of open source and proprietary software.


For this analysis, we have identified and assessed the relative importance of some key differentiators. These are the key technologies and market forces that will redefine the sector in which technologies will strive to gain an advantage. Technology decision makers can use the Disruption Vector analysis in choosing the approach that aligns with their business requirements.


Here, I assigned a score from 1 to 5 to each company for each vector. The combination of these scores and the relative weighting and importance of the vectors drives the company index across all differentiators.
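As an illustration of how the per-vector scores and weights combine into a company index, here is a minimal sketch. The weights, and the restriction to just two technologies and three vectors, are hypothetical and chosen only to show the arithmetic, not the actual weighting used in this Roadmap.

```python
# Hypothetical weights for three of the disruption vectors (must sum to 1).
weights = {"ease_of_use": 0.3, "core_architecture": 0.4, "data_integration": 0.3}

# Illustrative 1-5 scores per technology per vector.
scores = {
    "Databricks":        {"ease_of_use": 4, "core_architecture": 4, "data_integration": 5},
    "Microsoft AzureML": {"ease_of_use": 5, "core_architecture": 4, "data_integration": 3},
}

def weighted_index(tech_scores, weights):
    # The company index is the weighted sum of its vector scores.
    return sum(tech_scores[vector] * w for vector, w in weights.items())

for tech, s in scores.items():
    print(tech, round(weighted_index(s, weights), 2))
```

Changing the weights changes the ranking, which is the point of the exercise: the relative importance you assign to each vector should reflect your own organization’s priorities.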

Usage Scenarios

Machine learning, regardless of approach, has several common usage scenarios.



One popular use case of machine learning is Finance. As with other areas of Machine Learning, this is a trend perpetuated by more accessible computing power and by machine learning tools and open source packages dedicated to finance. Machine Learning is pervasive in Finance in terms of business and consumer solutions. It is used in activities such as approving loans, risk management, asset management, and currency forecasting.

The term robo-advisor is new; it was unheard of five years ago.

Robo-advisors adjust a financial portfolio to the goals and risk tolerance of the consumer based on factors such as attitude to saving, age, income, and current financial assets. The robo-advisor then adapts these factors to reach the user’s financial goals.

Bots are also used in providing customer service in the Finance industry. For example, they can interact with consumers using natural language processing and speech recognition. The bot capability for language, combined with the robo-advisor capability for predicting factors that meet financial goals, mean that the Finance world is fundamentally impacted by machine learning from the customer perspective, and the finance professionals’ perspective as millennials fuel the uptake of automated financial planning services.


Why should enterprises care about healthcare? Enterprises have an interest in keeping healthcare costs low as the hard cost of healthcare premiums increases, and it can even impact the enterprise’s ability to invest in itself. Employee sickness costs affect productivity, and it is in the interests of the enterprise to invest in employee health. As an industry, U.S. health care costs were $3.2 trillion in 2015, making healthcare one of the country’s largest industries, equal to 17.8 percent of US gross domestic product. Rising healthcare issues, such as diabetes and heart disease, are caused by lifestyle factors.

Machine learning is one of the tools being deployed to reduce healthcare costs. As the use of health apps and smartphones increases, there is increased data from the Internet of Things technology. This is set to drive health virtual assistants, which can take advantage of IoT data, increased natural language processing and sophisticated healthcare algorithms to provide quick patient advice for current health ailments, as well as monitor for future potential issues.

Machine Learning can assist healthcare in improving patient outcomes, preventative medicine and predicting diagnoses. In the healthcare industry, it is used for the reduction of patient harm events, reduction of hospital acquired infections, right through to more strategic inputs such as revenue cycle management, and patient care management. Machine Learning in healthcare specifically focuses on long-term and longitudinal questions, and helps to evaluate patient care through risk-adjusted comparisons. For the healthcare professional, it can help to have simple data visualizations which display the message of the data, and deploy models for repeated use to help patients long-term.

There are open source machine learning offerings aimed specifically at healthcare, accessible to the thousands of healthcare professionals who do not have data science skills but would like to use machine learning technology to help patients.


Machine learning has a wide range of applications in marketing. These include techniques to understand existing and potential customers, such as social media monitoring, search engine optimization and quality evaluation.

There are also opportunities to offer excellent customer service, such as tailored customer experiences, personalized recommendations and improved cross-selling and up-selling opportunities.

There are a few open source marketing tools which use machine learning. These include Datumbox Machine Learning Framework. Most machine learning tools aimed at marketing are proprietary, however, such as IBM Watson Marketing.

Key Differentiators

Machine learning decisions are crucial in developing a forward-thinking machine learning strategy that ensures success throughout the enterprise.

Well-known organizations have effectively used machine learning as a way of demonstrating prominence and technical excellence through flagship achievements. Enterprises will need a cohesive strategy that facilitates adoption of machine learning right across the enterprise, from delivering machine learning results to business users through to the technical foundations.

In this section, we discuss the five most important vectors that contribute to this disruption in the industry, which also correspond to factors that are crucial to consider when adopting machine learning as part of an enterprise strategy. The selected disruption vectors are focused on the transition of an organization towards the generation of a successful enterprise strategy of implementing and deploying machine learning technology.

The differentiators identified are the following:

  • Ease of Use
  • Core Architecture
  • Data Integration
  • Monitoring and Management
  • Sophisticated Modelling

Ease of Use

Technology name | Approach | Commentary | Score
R | Open Source | Data Scientists | 1
Databricks | Hybrid | Data Scientists, but it has interfaces for the business user | 4
Microsoft AzureML | Hybrid | Business Analysts through to Data Scientists | 5
Google Cloud Machine Learning | Proprietary | Data Scientists | 3
Google Tensorflow | Open Source | Data Scientists | 3

Generally, Machine Learning technology is still primarily aimed at data scientists to deliver and productize the technology, but there is a recognition that other roles can play a part, such as data engineers, devops engineers and business experts.

In this disruption vector, one clear differentiator between Microsoft and the other technologies is that Microsoft places the non-technical business analyst front and center of the AzureML technology. AzureML has a drag-and-drop interface and embeddable R, and more complex models can be built in R and loaded up to AzureML. Databricks focus on a more end-to-end solution, which utilizes different roles at different points in the workflow, with different tools for different parts of the process. The need for a data scientist is balanced out by the provision of tools specifically targeted at the business analyst. Both AzureML and Databricks allow for the consumption of machine learning models by the business user. Google Cloud Machine Learning Engine, Google Tensorflow and the open-source R have firmly placed the development of machine learning models in the data scientist and developer spheres. As Google Tensorflow and R are both open source, this is to be expected.

Google’s Cloud Machine Learning Engine combines the managed infrastructure of Google Cloud Platform with the power and flexibility of open-source Google TensorFlow. Google Cloud Machine Learning Engine has a clear roadmap in terms of people starting off in R and Google TensorFlow open source, and then porting those models into Google Cloud Machine Learning. RStudio, an R IDE, allows Tensorflow to be used with R, so this enables R models to be imported into Google Tensorflow, and then into Google Cloud Machine Learning Engine.

Business users can access their data through a variety of integrations with Google, including Informatica, Tableau, Looker, Qlik, SnapLogic and the Google Analytics 360 Suite. This means that Machine Learning is embedded in the user’s choice of interface for the data.

The risk for enterprises is that the multiple Google integration points introduce multiple points of failure and complexity in putting the different Google pieces of the jigsaw together, which is further exacerbated by the third-party integrations into tools that are visible from the user perspective. In the future, this scenario may change, however, as Google puts Machine Learning at the heart of its data centers and user-oriented applications. The business user is seeing an increasing presence of machine learning in their everyday activities, even including the creation and development of business documents. Google is aimed firmly at the data scientist and the developer, but it does offer its pre-built machine learning intelligence platform. That said, the competition is heating up in this space, as Google are now bringing machine learning to improve data visualization for business users so that they can make better use of their business data. Microsoft are also adding machine learning into Microsoft Word for Windows, so that it now offers intelligent grammar suggestions.

Core Architecture

Machine Learning solutions should be resilient and robust, and have a clear separation between storage and compute so that models are portable.

There are variations in the core technology which differentiate open source technologies from the large vendors. Embarrassingly parallel workloads separate a technical problem into many tasks that run in parallel, with no interdependencies between the tasks or data. R does not naturally handle embarrassingly parallel workloads. Many scientific and data analysis workloads require parallelism, and packages such as Snow, Multicore, RHadoop and RHIPE can help R to provision embarrassingly parallel workloads.
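To illustrate the pattern itself (sketched here in Python rather than R, purely for illustration): because each task below shares no state with the others, the work can be fanned out across worker processes with no coordination beyond collecting the results.

```python
from concurrent.futures import ProcessPoolExecutor

def score_chunk(chunk):
    # Each chunk is processed independently: no shared state and no
    # ordering constraints between tasks, hence "embarrassingly parallel".
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    # Split one large job into four independent chunks.
    chunks = [list(range(i, i + 1000)) for i in range(0, 4000, 1000)]

    # Serial baseline.
    serial = [score_chunk(c) for c in chunks]

    # The same work fanned out across worker processes; map preserves
    # task order, so the results line up with the serial run.
    with ProcessPoolExecutor(max_workers=4) as pool:
        parallel = list(pool.map(score_chunk, chunks))

    print(parallel == serial)
```

The R packages mentioned above (Snow, Multicore and friends) provide the same fan-out-and-collect shape for R code.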

As an open-source technology, R is not resilient or robust to failure. R works in-memory, and it is only able to hold the data that resides in memory. Its power comes from open-source packages, but these can over-write one another’s functions. This can be an issue because it can cause problems at the point of execution, which can be difficult to troubleshoot without real support.

On the other hand, proprietary cloud-based machine learning solutions offer the separation between storage and compute with ease. Google Tensorflow can use the Graphics Processing Units (GPUs) that Google has in place in its data centres, which Google are now rebranding as AI centres. To mitigate against technical debt, both Databricks and Google Cloud Machine Learning Engine have a clear trajectory from the open source technology of Google Tensorflow towards a full enterprise solution, which provides confidence for widespread enterprise adoption. As a further step, Databricks also allow porting models from Apache Spark ML and Keras, as well as other languages such as R, Python or Java. As a further signal of the importance of the open source path with a view to reducing technical debt, Google have released a neural networking package, Sonnet, as a partner to Tensorflow to reduce friction in model switching during model development.

Technology name | Approach | Commentary | Score
R | Open Source | Not resilient; liable to errors | 1
Databricks | Hybrid | Reduces technical debt by being open to most technologies | 4
Microsoft AzureML | Hybrid | R and Python are embedded. Solution is robust. Not open to TensorFlow or other packages | 4
Google TensorFlow | Open Source | APIs are not yet stable | 2
Google Cloud Machine Learning Engine | Proprietary | Cloud architecture with a clear onboarding process for open source technology | 4

Data Integration

Data is more meaningful when it is combined with other data. Here are the ways in which the technologies differentiate:

Technology name | Approach | Commentary | Score
R | Not Open | R is one of the spoke programming languages, but it is not a hub in itself. | 1
Databricks | Highly Open | Databricks is highly open, facilitating SQL, R, Python, Scala and Java. It also facilitates machine learning frameworks/libraries such as scikit-learn, Apache Spark ML, TensorFlow and Keras. | 5
Microsoft AzureML | Moderately Open | R and Python are embedded. Solution is robust. | 3
Google TensorFlow | Open Source | TensorFlow offers a few APIs, but these are not yet stable. Python is considered stable; C++, Java and Go are not. | 3
Google Cloud Machine Learning Engine | Proprietary | APIs are offered through TensorFlow. | 3

To leverage Machine Learning on the cloud without significant rework, solutions should support data import to machine learning systems. We need to see improved support for databases, time period definition, referential integrity and other enhancements.

The machine learning models run in IaaS and PaaS environments, which consume the APIs and services exposed by the cloud vendors and produce an output of data, which can be interpreted as the results. The cloud environment can hinder portability of workloads, and organizations are concerned about vendor lock-in of cloud platforms for machine learning.

The modelling process itself involves taking a substantial amount of inbound data, analyzing it, and determining the best model. The machine learning solution also needs to be robust and to recover gracefully from failures during processing, such as network failures.

In terms of the enterprise transition to the cloud for machine learning, the move should not impact the optimization of the machine learning technology, and it should not impact the structures used or created by the machine learning process.

Machine learning solutions should be able to ingest data in different formats, such as JSON, XML, Avro and Parquet. The models themselves should be available in a portable format, such as PMML.
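As a small sketch of the ingestion point, using only the Python standard library (Avro and Parquet would need third-party packages such as fastavro or pyarrow), records from JSON and CSV sources can be normalised to one common shape before modelling:

```python
import csv
import io
import json

def ingest_json(text):
    """Parse a JSON array of records into a list of dicts."""
    return json.loads(text)

def ingest_csv(text):
    """Parse CSV text (with a header row) into a list of dicts."""
    return [dict(row) for row in csv.DictReader(io.StringIO(text))]

json_src = '[{"id": "1", "score": "0.9"}]'
csv_src = "id,score\n1,0.9\n"

# Both sources normalise to the same record shape for the model to consume.
print(ingest_json(json_src) == ingest_csv(csv_src))  # True
```

The point is that the format should be a detail of the ingestion layer, not something the model itself has to care about.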

The range of modelling approaches within data science means that data scientists can approach a modelling problem in many ways. For example, data scientists can use scikit-learn, Apache Spark ML, TensorFlow or Keras. Data Scientists can also choose from a number of different languages: SQL, R, Python, Scala or Java, to name a few.

Of all the packages and frameworks, Databricks scored the best in terms of data integration. Depending on the skill set, data scientists can use scikit-learn, Apache Spark ML, TensorFlow or Keras. Data scientists can also choose from several different languages: R, Python, Scala or Java. AzureML and Google Cloud Machine Learning Engine are more restrictive in terms of their onboarding approach. AzureML will integrate with the R and Python languages, and it will ingest data from Azure data sources and OData. Google Cloud Machine Learning Engine will accept data that is serialized in a format that TensorFlow can accept, normally CSV format. Further, like AzureML, Google data has to be stored in a location where the data can be accessed, such as Google BigQuery or Cloud Storage.

AzureML’s dependency on R may be the popular choice, but there is a risk of inheriting issues from R code that is not of high quality. The sheer volume of R packages is often given as a reason to adopt R. However, this does not mean that the packages are of high quality; some of them are small and rarely developed, yet they remain on the R repository, CRAN, with no real distinction between them and the well-maintained, more robust packages. Google TensorFlow, on the other hand, has a large, well-maintained library, which is being extended with Sonnet.

Monitoring and Management

What you don’t monitor, you can’t manage.

Technology name | Score
R | 1
Databricks | 5
Microsoft AzureML | 5
Google TensorFlow | 2
Google Cloud Machine Learning Engine | 5

Machine Learning offers a rich set of techniques for developing and optimising every stage of the workflow, from the onerous task of cleaning the inbound data through to deploying the final model to production.

As Machine Learning becomes embedded in the enterprise data estate, it should also be robust. Machine learning models should not run out of space; instead, machine learning solutions should accommodate the elasticity of cloud-based data storage. Since many queries will span clouds and on-premises systems as business requirements expand to new data sources, the machine learning solution needs to keep up with this ‘data virtualisation’ requirement.

As a result of its normal operation, machine learning modelling and data processing can be time-consuming. If the machine learning algorithm incurs extensive delay conducting functions such as repartitioning data, scaling up or down, or copying large data sources around, this adds unnecessary delay to the model processing. From the business perspective, a lengthy machine learning process will adversely impact user adoption, subsequently introducing an obstacle to business value. The more intervention the machine learning solution needs to support its operation, the less automated and less elastic the solution becomes. It also means that the business isn’t taking advantage of the key benefits of cloud: systems that are built for the cloud should dynamically adapt to the business need.

If the machine learning solution is built for the cloud, then there should be efficient separation of compute and storage. This means that the storage and the compute can be managed and costed independently. In many cases, the data will be stored, but the actual machine learning model creation and processing will require bursts of processing and model usage. This requirement is perfect for cloud, allowing customers to take advantage of cloud computing so that they can dynamically adapt the technology to meet the business need.

R has no ability to monitor itself, and issues are resolved by examining logs and restarting the R processes. There is no obvious way to identify where a process started or stopped processing data, so it is best simply to start again, which means that time can be lost. Both AzureML and Google Cloud Machine Learning provide user-focused management and monitoring via a browser-based option, with Google providing a command line option too.

Sophisticated Modelling

Data modelling is still a fundamental aspect of handling data. The technologies differ in terms of their levels of sophistication in producing models of data.

Technology name | Approach | Commentary | Score
R | Open Source | Liable to errors | 1
Databricks | Hybrid | | 5
Microsoft AzureML | Hybrid | R and Python are embedded. Solution is robust. | 4
Google TensorFlow | Open Source | | 5
Google Cloud Machine Learning Engine | Proprietary | |

Google Cloud Machine Learning Engine allows users to create models with TensorFlow, which are then onboarded to produce cloud-based models. TensorFlow can be used via Python or C++ APIs, while its core functionality is provided by a C++ backend. The TensorFlow library comes with a range of built-in operations, such as matrix multiplications, convolutions, pooling and activation functions, loss functions and optimizers. Once a graph of computations has been defined, TensorFlow executes it efficiently across different platforms.
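The "define a graph first, then execute it" style can be illustrated with a toy Python sketch. This is an analogy only, not TensorFlow's actual API: operations are recorded when the graph is built, and nothing is computed until the graph is run:

```python
class Node:
    """A toy computation-graph node: construction records the op, run() executes it."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def run(self):
        # Recursively evaluate inputs, then apply this node's operation.
        vals = [n.run() if isinstance(n, Node) else n for n in self.inputs]
        if self.op == "const":
            return vals[0]
        if self.op == "add":
            return vals[0] + vals[1]
        if self.op == "mul":
            return vals[0] * vals[1]
        raise ValueError("unknown op: " + self.op)

# Define the graph first (nothing is computed yet)...
a = Node("const", 2)
b = Node("const", 3)
y = Node("add", Node("mul", a, b), Node("const", 1))

# ...then execute it. In TensorFlow, this separation is what lets the same
# graph run efficiently on CPUs, GPUs and other platforms.
print(y.run())  # 7
```

The deferred execution is the key point: because the whole computation is known before it runs, the engine can optimise and place it on whatever hardware is available.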

The TensorFlow modelling facility is flexible, and it is aimed at the deep learning community. TensorFlow is exposed through the well-known languages Python and C++, so there is a fast ramp-up of skill sets to reach a high level of TensorFlow model sophistication. TensorFlow allows data scientists to roll their own models, but they will need a deep understanding of machine learning and optimization in order to be effective in generating models.


Azure ML Studio is a framework for developing machine learning and Data Science applications. It has an easy-to-use graphical interface that allows you to develop machine learning apps quickly, and it saves a lot of time by making it easier to do tasks like data cleaning, feature engineering and testing different ML algorithms. It also allows you to add scripts in Python and R, and it includes deep learning.

Further, AzureML comes with some pre-baked machine learning models to help the business analyst on their way. AzureML offers built-in models, but it is not always clear how the models arrived at their results. As the data visualization capability grows in Power BI and in AzureML Studio, this is a challenge which will be addressed in the future.

Company Analysis

In the recent history of the industry, Sophos acquired Invincea, Radware bought Seculert, and Hewlett Packard bought Niara.


R is possibly the most well-known open source data science tool. It was developed in academia, and it has had widespread adoption throughout the academic and business communities. R has a range of machine learning packages, downloadable from the CRAN repository. These include mice for imputing values, rpart and caret for classification and regression models, and party for partitioning models. It is completely open source, and it forms part of the offerings discussed in this company analysis.

Open questions remain about R’s position in the market: when did R start, and who are its customers? Is it a leader in terms of installations? What is the average size of its customers, and does it have enterprise-scale customers? What are customers’ concerns and benefits? Are companies just flirting with R because it is the buzzword of the moment?


Google is appealing to enterprises because it addresses many different enterprise technology needs, from infrastructure consolidation and data storage right through to business user-focused solutions in Google office technologies. As an added bonus, enterprise customers can leverage Google’s own machine learning technology, which underpins functionality such as Google Photos.

TensorFlow is open source, and it can be used on its own. TensorFlow also appears in Google Cloud Machine Learning capabilities, a paid solution that chains together cloud APIs with machine learning and with unstructured data sources such as video, images and speech.

Google’s TensorFlow has packages aimed at verticals too, such as finance and cybercrime. It is also used for social good projects; for example, Google made the headlines recently when a self-taught teenager used TensorFlow to diagnose breast cancer.


Now that Databricks is in Azure, it is well worth a look for streamlined data science and machine learning at scale. It will appeal more to coders, but this brings the benefit of customization. AzureML is a great place to get started, and many Business Intelligence professionals are moving into Data Science via this route.

Open source tools such as R and Google’s TensorFlow are not enterprise tools. They are missing key features, such as the ability to connect with a wide range of APIs, and they lack key enterprise features such as security, management, monitoring and optimization. It is projected that TensorFlow will gain more of these enterprise features in the future. Also, organisations do not always want open source used in their production systems.

Despite an emphasis on being proprietary, both IBM Watson and Microsoft offer free tiers of their solutions, limited by size. In this way, both organizations compete with the open source, free offerings, with the bonus of robust cloud infrastructure to support the efforts of data scientists. Databricks offers a free trial, which converts into a paid offering. The Databricks technology is based on Amazon, and it distinguishes between data engineering and data analytics workloads. The distinction may not always be clear to business people, however, as it is couched in terms of UIs, APIs and notebooks, and these terms will be more meaningful to technical people.

Future Considerations

In the future, there will need to be more reference architecture for common scenarios. In addition, best practice design patterns will become more commonplace as the industry grows in terms of experience.

How do containers impact machine learning? Kubernetes is an open source container cluster orchestration system. Containers allow for automated deployment and scaling of operations, and it is possible to envisage machine learning manifested in container clusters. Containers are built for a multi-cloud world: public, private and hybrid. Machine Learning is going through a data renaissance, and there may be more to come. From this perspective, the winners will be the organizations that are most open to changes in data, platforms and other systems, but this does not necessarily mean open source. Enterprises will be concerned about issues which are core to other software operations, such as robustness, mitigation of risk and other enterprise dependencies.


Machine learning is high on the agenda of cloud providers. From startups to global companies, technology decision makers are watching the achievements of Google and Amazon Alexa with a view to implementing Machine Learning in their own organizations. In fact, as you read this article, it is highly likely that you have interacted with Machine learning in some way today. Organizations such as Google, Netflix, Amazon, and Microsoft have Machine learning as part of their services.  Machine Learning has become the ‘secret sauce’ in business and consumer facing spheres, such as online retail, recommendation systems, fraud detection and even Digital Personal Assistants such as Cortana, Siri and Amazon’s Echo.

From the start of this century, machine learning has grown from belonging to academia and large organizations who could afford it, to a wide range of options from well-known and trusted vendors who propose a range of solutions aimed at small and large organizations alike. The vendors are responding to the driving forces behind the increasing demand for machine learning solutions as organizations are inspired by the promise that Machine Learning offers, the accessibility offered by cloud computing, open source machine learning and big data technologies as well as perceived low cost and easily-accessible skills.

In this paper, we have provided the tools necessary to select wisely between the range of open source, hybrid and proprietary machine learning technologies to meet the technical needs for providing business benefit. The report has offered some insights into how the software vendors stack up against one another.

Dynamic Data Masking in Azure SQL Data Warehouse

I’m leading a project which is using Azure SQL Data Warehouse, and I’m pretty excited to be involved. I love watching the data take shape, and, for the customer’s requirements, Azure SQL Data Warehouse is perfect.

Note that my customer details are confidential and that’s why I never give details away such as the customer name and so on. I gain – and retain – my customers based on trust, and, by giving me their data, they are entrusting me with detailed information about their business.

One question they raised was with respect to dynamic data masking, which is present in Azure SQL Database. How does it manifest itself in Azure SQL Data Warehouse? What are the options regarding the management of personally identifiable information?


As we move ever closer to the implementation of GDPR, more and more people will be asking these questions. With that in mind, I did some research and found there are a number of options, which are listed here. Thank you to the Microsoft people who helped me to come up with some options.

1. Create an Azure SQL Database spoke as part of a hub and spoke architecture.

The Azure SQL Database spoke can create external tables over Azure SQL Data Warehouse tables in order to move data into the spoke. One note of warning: it isn’t possible to use DDM over an external table, so the data would have to move into Azure SQL Database.
2. Embed masking logic in views and restrict access.

This is achievable but it is a manual process.
3. Mask the data through the ETL processes creating a second, masked, column.

This depends on the need to query the data. Here, you may need to limit access through stored procs.
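As a sketch of option 3 in Python (the masking rule here is illustrative only; a real ETL step would follow the organisation’s own PII policy), the ETL emits a second, masked column alongside the original:

```python
def mask_email(value):
    """Keep the first character and the domain; mask the rest, e.g. a****@example.com."""
    local, _, domain = value.partition("@")
    return local[:1] + "****@" + domain

def add_masked_column(rows):
    """ETL step: emit a second, masked column alongside the original email."""
    return [{**row, "email_masked": mask_email(row["email"])} for row in rows]

rows = [{"id": 1, "email": "alice@example.com"}]
print(add_masked_column(rows))
# [{'id': 1, 'email': 'alice@example.com', 'email_masked': 'a****@example.com'}]
```

Downstream consumers can then be granted access to the masked column only, for example through a view or stored procedure, while the unmasked column stays restricted.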
On balance, the simplest method overall is to use views to restrict access to certain columns. That said, I am holding a workshop with the customer in the near future to see their preferred options. However, I thought that this might help someone else in the meantime. I hope that you find something that will help you to manage your particular scenario.

How do you evaluate the performance of a Neural Network? Focus on AzureML

I read the Microsoft blog entitled ‘How to evaluate model performance in Azure Machine Learning’. It’s a nice piece of work, and it got me thinking. I didn’t see anything in the blog post about neural network evaluation, so that topic is covered here.

How do you evaluate the performance of a neural network? This blog focuses on neural networks in AzureML, in order to help you to understand what the evaluation metrics mean.

What are Neural Networks?

Would you like to know how to make predictions from a dataset? Alternatively, would you like to find exceptions, or outliers, that you need to watch out for? Neural networks are used in business to answer exactly these questions: they make predictions from a dataset, or find unusual patterns, and they are best used for regression or classification business problems.

What are the different types of Neural Networks?

I’m going to credit the Asimov Institute with this amazing diagram:

Neural Network Types

In AzureML, we can review the output from a neural network experiment that we created previously. We can see the results by clicking on the Evaluation Model task, and clicking on the Visualise option.

Once we click on Visualise, we can see a number of charts, which are described here:

  • Receiver Operating Curve
  • Precision / Recall
  • Lift visualization

The Receiver Operating Curve

Here is an example:

ROC Curve

In our example, we can see that the curve sits well up into the top left-hand corner of the ROC chart. When we look at the precision and recall curve, we can see that precision and recall are both high, and this leads to a high F1 score. This means that the model is effective in terms of how precisely it classifies the data, and that it covers a good proportion of the cases that it should have classified correctly.

Precision and Recall

Precision and recall are very useful for assessing models in terms of business questions. They offer more detail and insights into the model’s performance. Here is an example:

Precision can be described as the fraction of cases that the model classifies correctly out of all the cases it flags. It can be considered as a measure of confirmation, and it indicates how often the model is correct. Recall is a measure of utility: it identifies how much the model finds of all that there is to find within the search space. The two combine to make the F1 score, which is the harmonic mean of precision and recall; if either precision or recall is small, then the F1 score will be small.
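These definitions reduce to simple formulae over the confusion-matrix counts (true positives, false positives, false negatives); a minimal sketch in Python, with assumed toy counts:

```python
def precision(tp, fp):
    """Of everything the model flagged, how much was correct (confirmation)."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Of everything there was to find, how much the model found (utility)."""
    return tp / (tp + fn)

def f1(p, r):
    """Harmonic mean of precision and recall; small if either input is small."""
    return 2 * p * r / (p + r)

# Toy counts: 80 true positives, 20 false positives, 20 false negatives.
p, r = precision(tp=80, fp=20), recall(tp=80, fn=20)
print(p, r)                # 0.8 0.8
print(round(f1(p, r), 6))  # 0.8
```

Note how the harmonic mean punishes imbalance: a model with precision 1.0 but recall 0.1 only scores an F1 of about 0.18, which is why both curves need to be high for a good F1.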

Lift Visualisation

A lift chart visually represents the improvement that a model provides when compared against a random guess; this improvement is called a lift score. With a lift chart, you can compare the accuracy of predictions for models that have the same predictable attribute.
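The lift score itself is straightforward to compute: it is the model’s positive rate in a scored segment divided by the baseline positive rate that a random guess would achieve. A sketch with assumed toy numbers:

```python
def lift(segment_rate, baseline_rate):
    """Lift: how many times better the model does in a segment than random guessing."""
    return segment_rate / baseline_rate

# Toy numbers: the model's top-scored segment contains 30% positives,
# versus a 15% positive rate across the whole population.
print(lift(0.30, 0.15))  # 2.0 -- twice as good as chance in that segment
```

A lift of 1.0 means the model is no better than guessing; the further above 1.0 the curve sits, the more value the model adds.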


In my next blog, I’ll talk a little about how we can make the Neural Network perform better.

To summarise, we have examined various key metrics in evaluating a neural network in AzureML. These scores also apply to other technologies, such as R.

These criteria can help us to evaluate our models, which, in turn, can help us to fundamentally evaluate our business questions. Understanding the numbers helps to drive the business forward, and visualizing these numbers helps to convey the message of the numbers.

Data Preparation in AzureML – Where and how?

One question that keeps popping up in my customer AzureML projects is ‘How do I conduct data preparation on my data?’ For example, how can we join the data, clean it, and shape it so that it is ready for analytics? Messy data is a problem for every organisation. If you don’t think it is an issue for your organisation, perhaps you haven’t looked hard enough.

To answer the question properly, we need to stand back a little and see the problem as part of a larger technology canvas. From the enterprise architecture perspective, it is best to do data preparation as close to the source as possible. The reason is that the cleaned data then acts as a good, consistent source for other systems, and you only have to do the work once. You have cleaned data that you can re-use, rather than re-do for every place where you need to use the data.

Let’s say you have a data source, and you want to expose the data in different technologies, such as Power BI, Excel and Tableau. Many organisations have a ‘cottage industry’ style of enterprise architecture, where they have different departments using different technologies. It is difficult to align data and analytics across the business, since the interpretation of the data may be implemented in a manner that is technology-specific rather than business-focused. If you take a ‘cottage industry’ approach, you would have to repeat your data preparation steps across different technologies.


When we come to AzureML, the data preparation perspective isn’t forgotten, but it isn’t a strong data preparation tool like Paxata or Datameer, for example. It’s the democratization of data for the masses, yes, and I see the value it brings to businesses. But it’s meant for machine learning and data science, so you should expect to use it for those purposes. It’s not a standalone data preparation tool, although it does help you part of the way.

The data preparation facilities in AzureML can be found here. If you have to clean up the data in AzureML, my futurology ‘dream’ scenario is that Microsoft offer weighty data preparation as a task, like other tasks in AzureML. You could click on the task and have roll-your-own data preparation pop up in the browser (all browser-based), provided by Microsoft, or perhaps have Paxata or Datameer pop out as a service, hosted in Azure as part of your Azure portal services. Then you would go back to AzureML, all in the browser. In the meantime, you would be better off trying to follow the principle of cleaning data up close to the source.

Don’t be downhearted if AzureML isn’t giving you the data preparation that you need. Look back to the underlying data, and see what you can do; the answer might be as simple as writing a view in SQL Server. AzureML is for operations and machine learning further downstream. If you are having serious data preparation issues, then perhaps you are not ready for the modelling phase of CRISP-DM, so you may want to take some time to think about those issues.