Advice from a tech Mom on getting teens to learn to code

Like many teens, my son is interested in gaming and YouTube. As a tech mom, I was keen to encourage him to learn to code, but he didn't really understand why I was so passionate about tech, data and working hard at it. Over the past month, though, we've gone from that initial reluctance to me having to drag him away from coding tutorials so he can get to bed at a reasonable time on a school night. I thought I'd share how I managed to turn it around, and I hope it helps.

I recognize that, as a parent, I can influence my teenager, but the reality is that his peers are just as influential, if not more so. So I decided to use that reality to my advantage.

Try a Hackathon. No, really. Try it.

You don’t need to code to attend a hackathon. People are friendly and happy to pitch in and show you what they are doing. It’s a great community way to learn.

As a first step, I decided that I'd take my teen to a hackathon so he could see other kids coding. He has seen me code since he was born, and that wasn't enough to switch him over to learning the skill himself; I worked out that he needed to see his peers coding. I took him to a Teens in AI event in London and it was amazing. There will be other hackathons, and they are easy to find on the Microsoft events website or even Eventbrite. My son was inspired to learn more about coding simply from seeing other teens code, and from seeing how these programming teens were at the nexus of each and every project. In other words, the rest of the hackathon centred around these teens, and it was inspiring for him and also for me. At the end of the day, he was determined to learn to code and we agreed that I'd buy him a book on Python.

Books or online material?

I chose a book so my teen could learn to program. The reality is that online courses are great, but being online is a great distraction. I noticed that he would research Python online and count this as 'working', but I wasn't sure he was making the switch to actually writing code. So I bought a Python book, which really helped him to concentrate and feel a sense of achievement as he progressed through the pages. There are many coding books you can buy on Amazon or eBay, or you can look in your local library.

Which language should you choose?

We chose Python because it has a focus on data and maths, plus it has good visualization libraries so he can see the end result. The maths angle has the potential to improve his maths skills, and I believe that's crucial for kids throughout the school years.
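To give a flavour of that 'see the end result' point, here is a minimal sketch of the kind of first exercise a teen might try with Python's matplotlib library (the pocket-money scenario is just an illustration):

```python
import matplotlib.pyplot as plt

# How quickly do savings grow if you put aside £5 a week for a year?
weeks = list(range(1, 53))
savings = [5 * week for week in weeks]

plt.plot(weeks, savings)
plt.xlabel("Week")
plt.ylabel("Savings (£)")
plt.title("Saving £5 a week")
plt.show()
```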

There are other languages, too. Another good place to start is HTML, CSS and JavaScript, since new programmers can get results quickly.

Office Skills

I’d also recommend that teens learn Office really well. Everyone thinks that they are an Excel expert but there is so much scope in Excel to do different things. Microsoft offer good tutorials for beginners and that’s a good place to start.

My teen is learning to touch type and that’s helping, too.

I hope that helps.

Why I wish I’d never backed the Gemini PDA campaign

In December, I had my first experience with Indiegogo, and I backed a campaign to purchase a Gemini PDA. To date, the campaign has generated $2,294,143, which is 284% of its target. So there's plenty of money sloshing around.

As of now, it is May 2018 and, despite promises and a product tracker that seems to be consistent only in its incorrectness, I still do not have my device. This is despite the fact that the Gemini PDA is now available to buy, with delivery promised for mid-June. Initially I was happy to wait until I understood that the device worked, but now I'm concerned that I've been bypassed. So here's a lesson in customer service:

In this world where we live in a culture of ‘now’ and constant updates, the silence is disconcerting.

There seems to be a pattern where they answer emails reactively when asked about the device. There's nothing proactive. There is a facility on Indiegogo whereby companies can send updates, but these are coming less and less frequently; only one in May. I'd rather see companies hire a temp or an admin person to look after this and send out proactive emails to update customers. Marketing isn't difficult, and there are plenty of good, cheap SaaS offerings for it.

I have had no email communication since January about when I will get my order; on Indiegogo, it still shows as 'Order Placed', which means that it isn't ready to be shipped yet. I was relying on a Facebook page to find out when I'd get my device. I'm writing this on 29th May, which means it is not likely to get here by the end of May, officially two days away. I feel fobbed off with a Facebook post that said devices would be released, but nothing individual. As a backer, I expect to be treated like an individual; and what about people who aren't on the Facebook page? I expect more communication than this, particularly in this tech-savvy, 'share everything' world we live in.

I was an Admin on the Facebook page, but I have taken the decision to remove myself from it. I've been as constructive as I can in sending feedback, but my goodwill hasn't been reciprocated. Now the device is on general release with a mid-June date and I still do not have mine. The website says 'Available now'.
I am disappointed by the lack of communication and I can't continue to be associated with this situation. Being admin of the Facebook group for Gemini PDA makes me part of it. It feels like I'm endorsing it by continuing to be admin, and I'm absolutely not. People shouldn't have to join a Facebook group to get information about their orders, particularly when hundreds of pounds are involved. It's not a small amount of money. I think I'm being treated in this way because I'm letting them, and I hope that this sends a message back to them regarding customer care and how backers are treated.
As for my device, if it ever arrives, I’ll probably stick it on eBay, unopened.
It's not just the poor service that's made me write this post. I saw a post entitled The Gemini PDA, the Perfect PC for People on Low Incomes? No, no, and no again. Planet haven't proved themselves worthy yet. Their solution is only just out on general release.

I was poor – achingly poor – growing up in a rough part of Scotland. Being poor is a hundred thousand humiliations, and I have suffered many of those; I remember what it's like to be hungry and cold. I still buy second-hand clothes to this day, and I think about every penny. The PDA isn't cheap. For the poor, being recommended a device which costs hundreds of pounds isn't going to lift people out of poverty.

I lifted myself out of poverty by educating myself, and that meant better access to good libraries, with great facilities and long opening hours. It also helped with the loneliness of poverty; you can't just go and sit in a posh cafe and mingle, for example, so a free reading group where you weren't expected to part with cash was a great way to spend an evening, and certainly better than living in a cold flat. Having had that experience, it pains me that the New York Times reported yesterday that libraries have had their funding cut by a third, a short-term decision which does not harness the opportunities offered by bringing educational facilities to the poorest.

I don't believe that Gemini PDAs are a good option for the impoverished, at that price range, and certainly not given the 'start up' phase that the company is in, if you want to call it that. There are other, more robust, long-term solutions that are proven, have earned trust, and can help Britain's vulnerable.

As for me, I’m not sure if I will keep the device or put it, unopened, on eBay. I regret having taken the decision to buy it on Indiegogo and I should probably have got myself an expensive Bluetooth keyboard for my Google Pixel instead. That would really allow me to ‘Type and create on the move.’


Artificial Intelligence Mentoring with Teens in AI

After listening to Satya Nadella's BUILD keynote this year, I was inspired to do even more with my background in Artificial Intelligence. As a Microsoft Regional Director, I relish sharing the positive, forward-looking vision that Microsoft presents, because I do think that they are changing the future in many ways. If you want to see the highlights of Nadella's keynote, head over to the official Microsoft YouTube channel.

What is Teens In AI? It’s close to my heart because it combines my two loves: technology and diversity. The objective is to increase diversity and inclusion in artificial intelligence.
Teens in AI aims to democratise AI and create pipelines for underrepresented talent through a combination of expert mentoring, talks, workshops, hackathons, accelerators, company tours and networking opportunities that give young people aged 12-18 early exposure to AI for social good. The vision is for AI to be developed by a diverse group of thinkers and doers advancing AI for humanity’s benefit. So…..

I’m excited to be an Artificial Intelligence mentor at @TeensInAI’s Artificial Intelligence Bootcamp & Hackathon.  For more information, visit the Teens in AI website.

There are still places left at the #Hackathon: the code ACORNFSM is free for kids on free school meals, and ACORN80@ gives 80% off. Come learn about AI with top industry mentors @MSFTReactor, 2-3 June.

I hope to see you there!

Managing activities and productivity as a consultant

I have a hard time keeping track of my activities. It can be hard to track my availability, and my days tend to disappear in a flash. I have tried many different digital ways of doing this, and now I’m going with a mix of digital and analog.

You don’t get what you want, you get what you work for

To lead anyone, you have to have a healthy degree of self-awareness. I find that this is one quality which I don't see very often, and it's very hard to cultivate. As a first step, it's good to measure how you spend your time, because that shows your priorities more clearly, and in a way that you can measure.

I use Trello and Plus for Trello to log my activities over a period of time. The results showed that I spend a lot of time in email, so I worked towards getting it down to Inbox Zero. It took 30 hours of solid email writing to do it, and I did it on planes across the Atlantic to the US, and on planes across to the East. Inbox Zero doesn’t stay for very long though, so I used my last flight to Singapore to try and clear down as many as possible, and I’m down to the last 300 emails. I’m sitting in a cafe in Watford on a Sunday morning, while my dog is being groomed, to clear these down.

My Trello reports showed that I regularly do 15-16 hours of work a day. I work at every available pocket of time, with downtime only for food and for spending time with my son. Sleep gets squeezed. All of this work means that I am leaving a trail of things behind me, and that means it is difficult to unpick when it comes to invoicing and expense time.

Too busy to pick up the $50 notes that you drop as you go

I have hired a Virtual Assistant and she has been helping me a lot; it's been worth the time invested in training her in my various home-grown systems, and I'm hoping to get some time back. I was getting to the point where I was dropping things like expense claims, so, basically, I was dropping $50 notes behind me as I sped along my way. Having the VA on board means I have someone to help me pick up those $50 notes as I go, and it's worth investing the time.

We are designed to have ideas, not hold them in our heads

I bought the Getting Things Done book and something really spoke to me: humans are designed to have ideas, not hold them. That's true.

I wrote down every single idea that I had. Truthfully, we forget our ideas. I didn't bother evaluating whether each one was a good idea or a bad idea; I just wrote it down. I then saw that I could group these ideas, so I started to split them out. One of my headings was 'Ideas for blog posts'. Very shortly, I had 36 blog post ideas down on the page, collated over the period of a week or so. This is blog post #36. I haven't lost the other 35; I can pencil them in my planning journal. So let's dive in and take a look at the system.

Bullet Journalling in a Traveler’s Notebook

I started to use the bullet journal system and I've heavily adapted it. It was worth investing a couple of hours to understand it and get it set up; the official Bullet Journal website explains the system. After having tried various electronic and digital systems for the past twenty or so years, this is the only one I've found that works for me.

I have a Traveler's Notebook, which is a refillable notebook that you can customize yourself. Here's a photo of my Traveler's Notebook, taken whilst I was out in the Philippines:

[Photo: my Traveler's Notebook]

The Traveler’s Notebook itself is customizable. I have the following sections:

  • Task List
  • Monthly Tracker with Weekly next to it
  • Daily Tracker
  • Mindsweeper (a brain dump, basically) for blog ideas, things to remember, quotes I like, tentative activities, addresses I need temporarily etc.
  • Slot for holding cards
  • Card envelopes for holding things such as train tickets and boarding passes
  • Zippable wallet for holding passport

Here are some ideas to get you going.

Tracking over a Month

I use a monthly spread, which uses a vertical format. On the left hand side, I record anything that is personal. On the right hand side, I record work activities. I don't split the page evenly, since I have fewer personal activities than work activities.

Here is my example below. The blue stickers are simply to cover up customer names. I took this shot half-way through my planning session, so you could see the structure before it got confusing with a lot of dates in it.

[Photo: my monthly spread]

Using my background in data visualization, I try to keep the data-ink ratio in mind. I am trying to simplify my life and declutter what my brain has to take in quickly, so I find that it isn't necessary to repeat the customer name for every day. Instead, I can just draw a vertical line that has two purposes: to point to a label, and to denote the length of the activity. In other words, it forms a pointer to the label, which shows the customer name, while the length of the line covers the number of days that the activity lasts. So, if the line is four boxes long, the activity lasts for four days.

Occasionally, I’ve got excited because I think I have some free days to book myself out for an unexpected request. Then, I realize it’s because I haven’t marked out the weekends. I work weekends too, and the main reason I mark weekends separately is because my customers don’t work weekends, usually. Therefore, I colour weekends in so that I can easily categorize the days as being part of a weekend or a normal working day. I have also done this for Bank Holidays because I tend to work then, too.

My personal things go on the left, and my work things go on the right.

Tracking over a Week

I put the weeks on the right hand side of the page, and this is where I combine time, tasks and scheduling. I use the grid in order to mark out the days, and, on the same line, I mark out the Task. Then, I can put a tick in the day when I have pencilled in the task itself. You’ll note that I have marked a seven day week. You can see it at the right hand side of the photo.

If I don't manage to complete an activity that day, I just add another tick mark on the next day so that it gets tracked then.

I don't eliminate tasks that I haven't managed to complete that week. Instead, I just carry them over to the next week.

In my weekly list, I don't cross things off when I've done them. I found that it created unnecessary clutter, and I didn't want to bring into focus the activities that I'd already done. I'm more interested in what I have to do next. I leave that to my daily list, and I will come onto that next.

Tracking over a Day

For the Day, I use an A6 size notebook, and I use a two-page format. On the left hand side, I go back to the vertical representation of time; this time, the day is chunked into hours. As before, anything personal or non-work-related goes on the left hand side, and my work activities for the day go on the right hand side. That may include stand-up meetings, retrospectives or whatever I am doing that day.

I have added in a slot for lunch. I don’t normally take lunch but I need to make sure that I eat something. It is easy for the day to slip by, and I only notice it’s lunchtime because people are not around and the office has gone a bit quieter.

On the right hand side, there is also a mini-brain-dump of activities or thoughts that occur as I proceed throughout the day. It can also form a memo pad of things that I need, such as a phone number, which I jot down before I add it to my contacts. This usually gets filled during the day. It is a messy space, a place to unload.

Some of these thoughts are important but they are not urgent. I can later move these items into the less transient Mindsweeper; the daily page just gives me a place to hold them temporarily while I assess their urgency.

The idea of having ideas, rather than holding them in my head, was a revelation to me. I'd been worrying about my memory, and about forgetting things. If you forget something, then you lose a part of yourself and you don't get it back. I set myself memory tests, such as remembering the name of a painting, or a quote of some sort. When I start to forget things, then I know I am starting to have problems. The reason I started to do this is that I watched someone else's memory start to go a little, as he forgot things such as who he ate dinner with; simple things like that. It made me very sad, and I realized that our memories make up so much of who we are.

It can be tremendously liberating to divest ourselves of our responsibility of trying to remember everything and to focus on the things that matter. It frees your mind to have more ideas, rather than focus effort on holding ideas, which is harder for your mind to do.

Cloud computing as a leveler and an enabler for Diversity and Inclusion

I had the honour and pleasure of meeting a young person with autism recently who is interested in learning about Azure and wanted some advice on extending his knowledge.
It was a great reminder that we can't always see people who have conditions such as autism. The same goes for disabilities, particularly those that you can't see; examples include epilepsy or even Chronic Fatigue Syndrome.

Diversity gives us the opportunity to become more thoughtful, empathetic human beings.

Credit: https://pixabay.com/en/users/geralt-9301/

I love cloud because it’s a great leveler for people who want to step into technology. It means that these personal quirks, or differences, or ranges of abilities can be sidestepped since we don’t need to all fit the brogrammer model in order to be great at cloud computing. Since we can do so many things remotely, it means that people can have flexibility to work in ways that suit them.

In my career, I couldn’t lift a piece of Cisco kit to rack it, because I was not strong enough. With cloud, it’s not a problem. The literally heavy lift-and-shift is already done. It really comes down to a willingness to learn and practice. I can also learn in a way that suits me, and that was the main topic of conversation with the autistic youth that I had the pleasure to meet.

I believe that people should be given a chance. Diversity gives us the opportunity to become more thoughtful, empathetic human beings. In this world, there is nothing wrong with wanting more of that humanness.

Getting started in Machine Learning: Google vs Databricks vs AzureML vs R

Executive Summary

Machine learning is high on the agenda of cloud providers. From startups to global companies, technology decision makers are watching the achievements of Google and Amazon Alexa with a view to implementing Machine Learning in their own organizations. In fact, as you read this article, it is highly likely that you have interacted with Machine learning in some way today. Organizations such as Google, Netflix, Amazon, and Microsoft have Machine learning as part of their services. Machine Learning has become the 'secret sauce' in business and consumer facing spheres, such as online retail, recommendation systems, fraud detection and even Digital Personal Assistants such as Cortana, Siri and Amazon's Echo.

The goal of this paper is to provide the reader with the tools necessary to select wisely between the range of open source, hybrid and proprietary machine learning technologies, so as to meet technical needs while providing business benefit. The report will offer support for devising a long-term strategy for machine learning for existing and future implementations. Here, we compare the following technologies:

  • Google Tensorflow
  • R
  • Databricks
  • AzureML
  • Google Cloud Machine Learning


Introduction and Methodology

A major challenge facing most organizations today is the decision whether to go open-source, hybrid, or proprietary with their technology vision and selection process.

Machine learning refers to a series of techniques whereby a machine is trained how to solve a problem. Machine Learning algorithms often do not need to be explicitly programmed; instead, they respond flexibly to the environment after receiving intensive training. Broader experience improves the efficiency and the capabilities of machine learning algorithms. Machine Learning is proving immensely useful in helping to cope with the sheer speed of results required by the business, along with more advanced techniques.

The decision on machine learning technology goes beyond a regular technology choice, since it involves a leap of faith that the technology will offer the promised insights. Machine Learning requires a process of creating, collating, refining, modelling, training and evaluating models on an ongoing basis. It is also determined by how organizations intend to use machine learning technology.

Clearly, organizations see Machine Learning as a growing asset for the future, and they are adding the capability now. Machine Learning will increase in adoption in tandem with other opportunities in related technologies, such as Big Data and cloud, and as open source becomes more trusted in organizations. The data takes the form of clickstreams and logs, sensor data from various machines, images and videos. Business teams will want to know more about deceptively simple business questions, where the answer lies in Big Data sources. However, these data sources can be difficult to analyze. Using insights from this data, companies across various industries can improve business outcomes.

What opportunities are missed if it is not used? By adopting ML, enterprises are looking to improve their business, or even radically transform it. Organizations are potentially losing ground against competitors, if they are not working towards automation or machine learning in some way. They are also not making use of their existing historical data, and their data going forward.

In terms of recent developments, Machine Learning has changed to adapt to the new types of structured and unstructured data sources in the industry. It is also being utilized in real-time deployments. Machine Learning has become more popular as organizations are now able to collect more data, including big data sources, cheaply through cloud computing environments.

In this Roadmap, we will examine the options and opportunities available to businesses as they move forward into Machine Learning, with a focus on whether organizations should use open source, proprietary or hybrid solutions. The Roadmap focuses on one of the most important decisions the organization can make: the choice of technology, and whether this should be open source, proprietary, or a hybrid architecture. The Roadmap also introduces a maturity map to investigate how the choice of machine learning technology can be influenced by the maturity of the organization in delivering machine learning solutions overall.

Evolution of Machine Learning

Though machine learning has existed for a long time, it is the cloud that has made the technology more accessible and usable for businesses of every size. The cloud offers a complete data storage solution with everything that machine learning needs to run, such as tools, libraries, code, runtime, models and data.

According to Google Trends, the term Machine Learning has increased in popularity six-fold since July 2012.  In response to this interest, established Machine Learning organizations are leading the way by provisioning their technology through open source. For example, Google has TensorFlow, the open source set of machine learning libraries that Google open sourced in 2015.  Amazon has made its Deep Scalable Sparse Tensor Network Engine (DSSTNE – pronounced ‘Destiny’) library available on GitHub under the Apache 2.0 license. Technology innovator and industry legend Elon Musk has ventured out with OpenAI, which bills itself as a ‘non-profit AI research company, discovering and enacting the path to safe artificial general intelligence.’ The technical community has a great deal of Machine Learning energy, evidenced by the fact that Google recently announced its acquisition of online data scientist community Kaggle, which has an established community of data scientists and potential employee pool, as well as one of the largest repositories of datasets that will help train the next generation of machine-learning algorithms.

Evolution of Open Source

Why has Open Source achieved so much prevalence in Machine Learning in recent years? Open Source has been a part of Machine Learning right from its inception, but it has gained attention in recent years due to significant successes. For example, AlphaGo was produced using the Torch software. AlphaGo's victory over the human Go champion, Lee Sedol, wasn't simply an achievement for artificial intelligence; it was also a triumph for Open Source software.

Evolution of Hybrid and Proprietary Systems

What problems are hybrid and proprietary systems trying to solve? The overarching theme is that proprietary organizations are aiming themselves at the democratization of data, or the democratization of artificial intelligence. This is fundamentally changing the Machine Learning industry, as organizations are taking Machine Learning away from academic institutions and into the hands of business users to support business decision making.

Proprietary solutions can be deployed quickly, and they are designed to be scalable and work at global scale. Machine Learning is de-coupled from the on-premises solution to a solution that can be easy to manage, administer and cost. Vendors must respond nimbly to these changes as data centers make the transition towards powerhouses for analytics and machine learning at scale.

In the battle for market share, innovation is expended at the cloud level to ensure that standards in governance and compliance are met, with government bodies in mind. As the threat of cybercrime increases, standards of compliance and governance have become a flashpoint for competition.

How are they distinguished? Open source refers to source code that is freely available on the internet, and is created and maintained by the technical community. A well-known example of open source software is Linux. On the other hand, proprietary software could also be known as closed-source software, which is distributed and sold under strict licensing terms and the associated copyright and intellectual property is owned by a specific organization. Hybrid architectures are based on a mix of open source and proprietary software.

Methodology

For this analysis, we have identified and assessed the relative importance of some key differentiators. These are the key technologies and market forces that will redefine the sector, and in which technologies will strive to gain an advantage. Technology decision makers can use the Disruption Vector analysis to support them in choosing the approach that best aligns with their business requirements.

Here, I assigned a score from 1 – 5 to each company for each vector. The combination of these scores and the relative weighting and importance of the vectors drives the company index across all differentiators.

Usage Scenarios

Machine learning, regardless of approach, has several common usage scenarios.


Finance

One popular use case of machine learning is Finance. As with other areas of Machine Learning, this is a trend perpetuated by more accessible computing power, more accessible machine learning tools, and open source packages dedicated to finance. Machine Learning is pervasive in Finance in terms of business and consumer solutions. It is used in activities such as approving loans, risk management, asset management, and currency forecasting.

The term 'robo-advisor' was unheard of five years ago.

Robo-advisors are used to adjust a financial portfolio to the goals and risk tolerance of the consumer based on factors such as attitude to saving, age, income, and current financial assets. The robo-advisor then adapts these factors to reach the user's financial goals.
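As a toy illustration only (not any vendor's actual advisory algorithm), the core idea can be sketched in a few lines of Python, using the common 'hundred minus age' rule of thumb tilted by a risk-tolerance score:

```python
# A toy sketch of how a robo-advisor might map a consumer profile to an
# asset allocation. The rule and the weighting are illustrative assumptions.
def allocate_portfolio(age: int, risk_tolerance: float) -> dict:
    """risk_tolerance runs from 0.0 (very cautious) to 1.0 (adventurous)."""
    base_equity = (100 - age) / 100.0            # 'hundred minus age' heuristic
    equity = base_equity * (0.5 + risk_tolerance / 2)
    equity = max(0.0, min(1.0, equity))          # keep the split between 0 and 1
    return {"equities": round(equity, 2), "bonds": round(1 - equity, 2)}

print(allocate_portfolio(age=30, risk_tolerance=0.8))  # younger, adventurous
print(allocate_portfolio(age=60, risk_tolerance=0.2))  # older, cautious
```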

Bots are also used to provide customer service in the Finance industry; for example, they can interact with consumers using natural language processing and speech recognition. The bot capability for language, combined with the robo-advisor capability for predicting the factors that meet financial goals, means that the Finance world is fundamentally affected by machine learning from both the customer's perspective and the finance professional's perspective, as millennials fuel the uptake of automated financial planning services.

Healthcare

Why should enterprises care about healthcare? Enterprises have an interest in keeping healthcare costs low as the hard cost of healthcare premiums increases, and these costs can even impact an enterprise's ability to invest in itself. Employee sickness affects productivity, so it is in the interests of the enterprise to invest in employee health. As an industry, U.S. health care costs were $3.2 trillion in 2015, making healthcare one of the country's largest industries, equal to 17.8 percent of US gross domestic product. Rising healthcare issues, such as diabetes and heart disease, are caused by lifestyle factors.

Machine learning is one of the tools being deployed to reduce healthcare costs. As the use of health apps and smartphones increases, there is increased data from the Internet of Things technology. This is set to drive health virtual assistants, which can take advantage of IoT data, increased natural language processing and sophisticated healthcare algorithms to provide quick patient advice for current health ailments, as well as monitor for future potential issues.

Machine Learning can assist healthcare in improving patient outcomes, preventative medicine and predicting diagnoses. In the healthcare industry, it is used for everything from the reduction of patient harm events and hospital-acquired infections right through to more strategic inputs such as revenue cycle management and patient care management. Machine Learning in healthcare specifically focuses on long-term and longitudinal questions, and helps to evaluate patient care through risk-adjusted comparisons. For the healthcare professional, it can help to have simple data visualizations which display the message of the data, and to deploy models for repeated use to help patients long-term.

There are open source machine learning offerings which are aimed specifically at healthcare. For example, healthcare.ai is accessible to the thousands of healthcare professionals who do not have data science skills, but would like to use machine learning technology to help patients.

Marketing

Machine learning has a wide range of applications in marketing. These range from techniques to understand existing and potential customers, such as social media monitoring, search engine optimization and quality evaluation, through to opportunities to offer excellent customer service, such as tailored customer experiences, personalized recommendations and improved cross-selling and up-selling.

There are a few open source marketing tools which use machine learning. These include Datumbox Machine Learning Framework. Most machine learning tools aimed at marketing are proprietary, however, such as IBM Watson Marketing.

Key Differentiators

Machine learning decisions are crucial in developing a forward-thinking machine learning strategy that ensures success throughout the enterprise.

Well-known organizations have effectively used machine learning as a way of demonstrating prominence and technical excellence through flagship achievements. Enterprises will need a cohesive strategy that facilitates adoption of machine learning right across the enterprise, from delivering machine learning results to business users through to the technical foundations.

In this section, we discuss the five most important vectors that contribute to this disruption in the industry, which also correspond to factors that are crucial to consider when adopting machine learning as part of an enterprise strategy. The selected disruption vectors are focused on the transition of an organization towards the generation of a successful enterprise strategy of implementing and deploying machine learning technology.

The differentiators identified are the following:

  • Ease of Use
  • Core Architecture
  • Data Integration
  • Monitoring and Management
  • Sophisticated Modelling

Ease of Use

Technology name | Approach | Commentary | Score
R | Open Source | Data Scientists | 1
Databricks | Hybrid | Data Scientists, but it has interfaces for the business user | 4
Microsoft AzureML | Hybrid | Business Analysts through to Data Scientists | 5
Google Cloud Machine Learning | Proprietary | Data Scientists | 3
Google Tensorflow | Open Source | Data Scientists | 3

Generally, Machine Learning technology is still primarily aimed at data scientists to deliver and productize the technology, but there is a recognition that other roles can play a part, such as data engineers, devops engineers and business experts.

In this disruption vector, one clear differentiator between Microsoft and the other technologies is that Microsoft puts the non-technical business analyst front and centre of the AzureML technology, with a drag-and-drop interface and embeddable R; more complex models can be built in R and loaded up to AzureML. Databricks focus on a more end-to-end solution, which utilizes different roles at different points in the workflow, and they have different tools for different parts of the process. The need for a data scientist is balanced out by the provision of tools specifically targeted at the business analyst. Both AzureML and Databricks allow for the consumption of machine learning models by the business user. Google Cloud Machine Learning Engine, Google Tensorflow and the open-source R have firmly placed the development of machine learning models in the data scientist and developer spheres. As Google Tensorflow and R are both open-source, this is to be expected.

Google’s Cloud Machine Learning Engine combines the managed infrastructure of Google Cloud Platform with the power and flexibility of open-source Google TensorFlow. Google Cloud Machine Learning Engine has a clear roadmap in terms of people starting off in R and Google TensorFlow open source, and then porting those models into Google Cloud Machine Learning. RStudio, an R IDE, allows Tensorflow to be used with R, so this enables R models to be imported into Google Tensorflow, and then into Google Cloud Machine Learning Engine.

Business users can access their data through a variety of integrations with Google, including Informatica, Tableau, Looker, Qlik, SnapLogic and the Google Analytics 360 Suite. This means that Machine Learning is embedded in the user's choice of interface for the data.

The risk for enterprises is that the multiple Google integration points introduce multiple points of failure and numerous points of complexity in putting the different Google pieces of the jigsaw together, which is further exacerbated by the presence of third-party integrations into tools which are visible from the user perspective. In the future, this scenario may change, however, as Google put Machine Learning at the heart of their data centers and user-oriented applications. The business user is seeing an increasing presence of machine learning in their everyday activities, even including the creation and development of business documents. Google is aimed firmly at the data scientist and the developer, but it does offer its pre-built machine learning intelligence platform. That said, the competition is heating up in this space as Google are now bringing machine learning to improve data visualization for business users so that they can make better use of their business data. Microsoft are also adding some machine learning into Microsoft Word for Windows, so that it now offers intelligent grammar suggestions.

Core Architecture

Machine Learning solutions should be resilient and robust, and should have a clear separation between storage and compute so that models are portable.

There are variations in the core technology which differentiate open source technologies from the large vendors. Embarrassingly parallel workloads separate a technical problem into many parallel tasks with ease; the tasks run in parallel, with no interdependencies between the tasks or data. R does not naturally handle embarrassingly parallel workloads. Many scientific and data analysis tasks require parallel workloads, and packages such as snow, multicore, RHadoop and RHIPE can help R to provision embarrassingly parallel workloads.
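To make the 'embarrassingly parallel' idea concrete, here is a minimal sketch in Python (the scoring function and data are invented for illustration): because no task depends on any other, the records can simply be fanned out across worker processes.

```python
from multiprocessing import Pool

def score(record):
    # Stand-in for applying a trained model to one record.
    # There is no shared state, so every record is independent.
    return record["value"] * 2

if __name__ == "__main__":
    records = [{"id": i, "value": i} for i in range(1000)]
    with Pool(processes=4) as pool:          # fan the work out over 4 workers
        results = pool.map(score, records)   # no interdependencies between tasks
    print(results[:5])
```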

As an open-source technology, R is not resilient or robust to failure. R works in-memory, so it can only handle data that fits in memory. Its power comes from open-source packages, but these can overwrite one another's functions. This can cause problems at the point of execution, which can be difficult to troubleshoot without real support.

On the other hand, proprietary cloud-based machine learning solutions offer the separation between storage and compute with ease. Google Tensorflow can use the Graphics Processing Units (GPUs) that Google has in place in its data centres, which Google are now rebranding as AI centres. To mitigate against technical debt, both Databricks and Google Cloud Machine Learning Engine have a clear trajectory from the open source technology of Google Tensorflow towards a full enterprise solution, which provides confidence for widespread enterprise adoption. As a further step, Databricks also allow porting models from Apache Spark ML and Keras, as well as from other languages such as R, Python or Java. As a further signal of the importance of the open source path with a view to reducing technical debt, Google have released a neural networking package, Sonnet, as a partner to Tensorflow, to reduce friction when switching models during development.

Technology name | Approach | Commentary | Score
R | Open Source | No; liable to errors | 1
Databricks | Hybrid | Reduces technical debt by being open to most technologies | 4
Microsoft AzureML | Hybrid | R and Python are embedded. Solution is robust. Not open to Tensorflow or other packages | 4
Google Tensorflow | Open Source | APIs are not yet stable | 2
Google Cloud Machine Learning Engine | Proprietary | Cloud architecture with a clear onboarding process for open source technology | 4

Data Integration

Data is more meaningful when it is put with other data. Here are the ways in which the technologies differ:

Technology name | Approach | Commentary | Score
R | Not Open | R is one of the spoke programming languages, but it is not a hub in itself. | 1
Databricks | Highly Open | Databricks is highly open, facilitating SQL, R, Python, Scala and Java. It also facilitates machine learning frameworks/libraries such as Scikit-learn, Apache Spark ML, TensorFlow and Keras. | 5
Microsoft AzureML | Moderately Open | R and Python are embedded. Solution is robust. | 3
Google Tensorflow | Open Source | Tensorflow offers a few APIs but these are not yet stable. Python is considered stable, but the others (C++, Java and Go) are not. | 3
Google Cloud Machine Learning Engine | Proprietary | APIs are offered through Tensorflow. | 3

To leverage Machine Learning on the cloud without significant rework, solutions should support data import to machine learning systems. We need to see improved support for databases, time period definition, referential integrity and other enhancements.

The Machine Learning models run in IaaS and PaaS environments, which consume the APIs and services exposed by the cloud vendors and produce an output of data, which can be interpreted as the results. The cloud environment can limit portability of workloads, and organizations are concerned about vendor lock-in of cloud platforms for machine learning.

The modelling process itself involves taking a substantial amount of inbound data, analyzing it, and determining the best model. The machine learning also needs to be robust and recover gracefully from failures during processing, such as network failures.

In terms of the enterprise transition to the cloud for machine learning, it should not impact the optimization of the machine learning technology, and it should not impact the structures used or created by the machine learning process.

Machine learning solutions should be able to ingest data in different formats, such as JSON, XML, Avro and Parquet. The models themselves should be exportable in a portable format, such as PMML.
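As a small, hedged illustration of the format point, here is a sketch in Python using pandas (the file names and the join key are assumptions; Parquet support requires the pyarrow or fastparquet package):

```python
import pandas as pd

# Two of the formats a machine learning pipeline commonly has to ingest.
events = pd.read_json("events.json")          # hypothetical JSON source
history = pd.read_parquet("history.parquet")  # hypothetical Parquet source

# Join the sources on a shared key before handing the frame to a model.
combined = events.merge(history, on="customer_id", how="left")
print(combined.head())
```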

The range of modelling approaches within data science means that data scientists can approach a modelling problem in many ways. For example, data scientists can use scikit-learn, Apache Spark ML, TensorFlow or Keras. Data Scientists can also choose from a number of different languages: SQL, R, Python, Scala or Java, to name a few.
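To illustrate just one of those routes, here is a minimal scikit-learn sketch of the kind of model a data scientist might build in any of these environments; the dataset and algorithm are chosen purely for brevity:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Fit a simple classifier on a small public dataset.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```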

Of all the packages and frameworks, Databricks scored the best in terms of data integration. Depending on their skill set, data scientists can use scikit-learn, Apache Spark ML, TensorFlow or Keras, and they can choose from several different languages: R, Python, Scala or Java. AzureML and Google Cloud Machine Learning Engine are more restrictive in terms of their onboarding approach. AzureML will integrate with the R and Python languages, and it will ingest data from Azure data sources and OData. Google Cloud Machine Learning Engine will accept data that is serialized in a format that Tensorflow can accept, normally CSV. Further, like AzureML, Google data has to be stored in a location where it can be accessed, such as Google BigQuery or Cloud Storage.

AzureML's dependency on R may be the popular choice, but there is a risk of inheriting issues from R code that is not high quality. The sheer volume of R packages is often given as a reason to adopt R. However, this does not mean that the packages are of high quality; some of these packages are small and rarely developed, but they are still maintained on the R repository, CRAN, with no real distinction between them and the well-maintained, more robust packages. Google Tensorflow, on the other hand, has a large, well-maintained library which is extending to Sonnet.

Monitoring and Management

What you don’t monitor, you can’t manage.

Technology name | Score
R | 1
Databricks | 5
Microsoft AzureML | 5
Google Tensorflow | 2
Google Cloud Machine Learning Engine | 5

Machine Learning platforms offer a rich set of techniques to develop and optimise all aspects of the workflow, from the onerous task of cleaning the inbound data to deploying the final model to production.

As Machine Learning becomes embedded in the enterprise data estate, it should also be robust. Machine learning solutions should not run out of space; instead, they should accommodate the elasticity of cloud-based data storage. Since many queries will span clouds and on-premises systems as the business requirement for expanded data sources increases, the machine learning solution needs to keep up with the 'data virtualisation' requirement.

As a result of its natural operation, machine learning modelling and data processing can be time-consuming. If the machine learning platform incurs extensive delays in functions such as repartitioning data, scaling up or down, or copying large data sources around, then this adds unnecessary delay to the model processing. From the business perspective, a lengthy machine learning process will adversely impact user adoption, subsequently introducing an obstacle to business value. The more intervention the machine learning solution needs to support its operation, the less automated and less elastic it becomes. It will also mean that the business isn't taking advantage of the key benefits of cloud: systems that are built for the cloud should dynamically adapt to the business need.

If the machine learning solution is built for the cloud, then there should be efficient separation of compute and storage. This means that the storage and the compute can be managed and costed independently. In many cases, the data will be stored, but the actual machine learning model creation and processing will require bursts of processing and model usage. This requirement is perfect for cloud, allowing customers to take advantage of cloud computing so that they can dynamically adapt the technology to meet the business need.

R has no ability to monitor itself, and issues are resolved by examining logs and restarting the R processes. There is no obvious way to identify where the process started or stopped processing data so it is best to simply start again, which means that time can be lost. Both AzureML and Google Cloud Machine Learning provide user-focused management and monitoring via a browser based option, with Google providing a command line option too.

Sophisticated Modelling

Data modelling is still a fundamental aspect of handling data. The technologies differ in terms of their levels of sophistication in producing models of data.

Technology name | Approach | Commentary | Score
R | Open Source | No; liable to errors | 1
Databricks | Hybrid | | 5
Microsoft AzureML | Hybrid | R and Python are embedded. Solution is robust | 4
Google Tensorflow | Open Source | | 5
Google Cloud Machine Learning Engine | Proprietary | |

Google Cloud Machine Learning Engine allows users to create models with Tensorflow, which are then onboarded to produce cloud-based models. TensorFlow can be used via Python or C++ APIs, while its core functionality is provided by a C++ backend. The Tensorflow library comes with a range of in-built operations, such as matrix multiplications, convolutions, pooling and activation functions, loss functions and optimizers. Once a graph of computations has been defined, TensorFlow executes it efficiently across different platforms.
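As a minimal sketch of that workflow, assuming the TensorFlow 1.x graph API that was current at the time of writing, a graph is first defined and then executed in a session:

```python
import tensorflow as tf

# Define a small computation graph: a matrix multiplication followed by an
# activation function, two of the built-in operations mentioned above.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]], name="a")
b = tf.constant([[5.0, 6.0], [7.0, 8.0]], name="b")
product = tf.matmul(a, b)        # matrix multiplication node
activated = tf.nn.relu(product)  # activation function node

# Execute the graph; TensorFlow decides how to place the ops (CPU or GPU).
with tf.Session() as sess:
    print(sess.run(activated))
```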

The Tensorflow modelling facility is flexible, and it is aimed at the deep learning community. Tensorflow is created in well-known languages of Python and C++ so there is a fast ramp-up of skill sets to reach a high level of Tensorflow model sophistication. Tensorflow allows data scientists to roll their own models but they will need a deep understanding of machine learning and optimization in order to be effective in generating models.

Azure

Azure ML Studio is a framework for developing machine learning and Data Science applications. It has an easy-to-use graphical interface that allows you to quickly develop machine learning apps, and it saves a lot of time by making it easier to do tasks like data cleaning and feature engineering and to test different ML algorithms. It also allows you to add scripts in Python and R, and it includes deep learning.
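For the scripting route, the classic Azure ML Studio 'Execute Python Script' module expects a function along the lines of the sketch below; the column name and the transformation are invented purely for illustration:

```python
import numpy as np

# Azure ML Studio wires the module's input ports into dataframe1/dataframe2
# (pandas DataFrames) and uses the returned tuple as the module's output.
def azureml_main(dataframe1=None, dataframe2=None):
    # Illustrative feature-engineering step: add a log-scaled copy of an
    # assumed 'amount' column before the data flows on to a model.
    dataframe1["amount_log"] = np.log1p(dataframe1["amount"].clip(lower=0))
    return dataframe1,
```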

Further, AzureML comes ready-prepared with some pre-baked machine learning models to help the business analyst on their way. AzureML offers in-built models, but it is not always very clear how the models got to their results. As the data visualization aspect grows in Power BI and in the AzureML Studio, this is a challenge which will be handled in the future.

Company Analysis

In the recent history of the industry, Sophos acquired Invincea, Radware bought Seculert and Hewlett Packard bought Niara.

R

R is possibly the most well-known data science open source tool. It was developed in academia, and has had widespread adoption throughout the academic and business community.  R has a range of machine learning packages, which are downloadable from the CRAN repository. These include MICE for imputing values, rpart and caret for classification and regression models, and PARTY for partitioning models. It is completely open source, and it forms part of the offerings discussed in this company analysis.

When did R start? Who are the R customers? Are they a leader, in terms of installations? Average size of customers? Enterprise scale customers? What are customers’ concerns and benefits? Are companies just flirting with R because it’s the buzzword at the moment?

Google

Google is appealing to enterprises because it addresses many different enterprise technology needs, from infrastructure consolidation and data storage right through to business user-focused solutions in the Google office technologies. As an added bonus, enterprise customers can leverage Google's own machine learning technology, which underpins functionality such as Google Photos.

Tensorflow is open source, and it can be used on its own. Tensorflow appears in Google Cloud Machine Learning capabilities, which is a paid solution. Google also has a paid offering, which clearly chains together cloud APIs with machine learning, and unstructured data sources such as video, images and speech.

Google’s Tensorflow has packages aimed at verticals, too, such as Finance and cybercrime. It is also used for social good projects. For example, Google made the headlines recently when a self-taught teenager used Google Tensorflow to diagnose breast cancer.

Comparison

Now that Databricks is in Azure, it is well worth a look for streamlined data science and machine learning at scale. It will appeal more to coders, but that brings the benefit of customization. AzureML is a great place to get started, and many Business Intelligence professionals are moving into Data Science via this route.

Open Source tools such as R and Google's Tensorflow are not enterprise tools. They are missing key features, such as the ability to connect with a wide range of APIs, and they lack key enterprise features such as security, management, monitoring and optimization. It is projected that Tensorflow will gain more of these enterprise features in the future. Also, organisations do not always want open source used in their production systems.

Despite an emphasis on being proprietary, both IBM Watson and Microsoft offer free tiers of their solutions, which are limited by size. In this way, both organizations compete with the open source, free offerings with the bonus of robust cloud infrastructure to support the efforts of the data scientists. Databricks offer a free trial, which converts into a paid offering. The Databricks technology is based on Amazon, and they distinguish between data engineering and data analytics workloads. The distinction may not always be clear to business people, however, as it is couched in terms of UIs, APIs and notebooks and these terms will be more meaningful to technical people.

Future Considerations

In the future, there will need to be more reference architecture for common scenarios. In addition, best practice design patterns will become more commonplace as the industry grows in terms of experience.

How do containers impact machine learning? Kubernetes is an open source container cluster orchestration system. Containers allow for the automation of deployment, scaling and operations, and it's possible to envisage machine learning manifested in container clusters. Containers are built for a multi-cloud world: public, private, hybrid. Machine Learning is going through a data renaissance, and there may be more to come. From this perspective, the winners will be the organizations which are most open to changes in data, platform and other systems, but this does not necessarily mean open source. Enterprises will be concerned about issues which are core to other software operations, such as robustness, mitigation of risk and other enterprise dependencies.

Conclusion

As we noted at the outset, machine learning is high on the agenda of cloud providers, and technology decision makers from startups to global companies are watching the achievements of Google and Amazon Alexa with a view to implementing Machine Learning in their own organizations. Machine Learning has become the 'secret sauce' in business and consumer facing spheres, such as online retail, recommendation systems, fraud detection and Digital Personal Assistants such as Cortana, Siri and Amazon's Echo.

From the start of this century, machine learning has grown from belonging to academia and large organizations who could afford it, to a wide range of options from well-known and trusted vendors who propose a range of solutions aimed at small and large organizations alike. The vendors are responding to the driving forces behind the increasing demand for machine learning solutions as organizations are inspired by the promise that Machine Learning offers, the accessibility offered by cloud computing, open source machine learning and big data technologies as well as perceived low cost and easily-accessible skills.

In this paper, we have aimed to provide the reader with the tools necessary to select wisely between the range of open source, hybrid and proprietary machine learning technologies, so as to meet technical needs while providing business benefit. The report has offered some insights into how the software vendors stack up against one another.

Dynamic Data Masking in Azure SQL Datawarehouse

I’m leading a project which is using Azure SQL Datawarehouse, and I’m pretty excited to be involved.  I love watching the data take shape, and, for the customer requirements, Azure SQL Datawarehouse is perfect.

Note that my customer details are confidential and that's why I never give details away such as the customer name and so on. I gain – and retain – my customers based on trust, and, by giving me their data, they are entrusting me with detailed information about their business.

One question they raised was in respect to dynamic data masking, which is present in Azure SQL Database. How does it manifest itself in Azure SQL Datawarehouse? What are the options regarding the management of personally identifiable information?


As we move ever closer to the implementation of GDPR, more and more people will be asking these questions. With that in mind, I did some research and found there are a number of options, which are listed here. Thank you to the Microsoft people who helped me to come up with some options.

1. Create an Azure SQL Database spoke as part of a hub and spoke architecture.

The Azure SQL Database spoke can create external tables over Azure SQL Datawarehouse tables in order to move data into the spoke. One note of warning: it isn't possible to use DDM over an external table, so the data would have to be moved into Azure SQL Database itself.
2. Embed masking logic in views and restrict access.

This is achievable but it is a manual process.
3. Mask the data through the ETL processes creating a second, masked, column.

This depends on the need to query the data. Here, you may need to limit access through stored procs. A rough sketch of the masking step is shown below.
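As a hedged illustration only (the table, column names and masking rule are assumptions, not the customer's actual schema), an ETL step in Python might keep the original column for privileged pipelines and add a masked copy for general reporting:

```python
import pandas as pd

# Toy customer data standing in for rows extracted from the warehouse.
customers = pd.DataFrame({
    "customer_id": [1, 2],
    "email": ["alice@example.com", "bob@example.com"],
})

def mask_email(value: str) -> str:
    # Keep the first character and the domain; hide the rest of the local part.
    name, _, domain = value.partition("@")
    return name[:1] + "***@" + domain

# Second, masked column written alongside the original during the ETL load.
customers["email_masked"] = customers["email"].apply(mask_email)
print(customers)
```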
On balance, the simplest method overall is to use views to restrict access to certain columns. That said, I am holding a workshop with the customer in the near future in order to see their preferred options. However, I thought that this might help someone else in the meantime. I hope that you find something that will help you to manage your particular scenario.