Want to learn how to skill up from #BusinessIntelligence to #ArtificialIntelligence? Join me at #Live360 in Florida

I’ll be speaking at Live! 360 Orlando, December 2-7. Surrounded by your fellow industry professionals, you’ll get immediately usable training and education that will keep you relevant in the workforce.

SPECIAL OFFER: As a speaker, I can extend $500 savings on the 5-day package. Register here: http://bit.ly/OSPK88Reg

Priority code: OSPK88

Workshop: Moving from BI to AI: Artificial Intelligence Skills for Business Intelligence Professionals

Are you interested in leveraging your existing BI skillset to add Artificial Intelligence to your toolkit? Is your organisation interested in applying AI at a practical level? If so, this precon is for you.

In this precon, you will leverage your existing BI skillset to learn Microsoft’s latest AI technologies. You will learn AzureML by drawing on your existing SSIS expertise. You will also learn R and Python, using your SQL Server knowledge to gain a working knowledge of these languages and their use in AI. You will use your conceptual knowledge of Business Intelligence to learn how to choose the right model for your AI work, and how to assess its value and validity.

Join this session to add AI to your organization’s technical capability, springboarding from skills that you already possess.

Workshop Agenda

Moving from BI to AI: Artificial Intelligence skills for Business Intelligence professionals

9.00am AI for the Enterprise

  • What is AI? Terminology that you need to know
  • Blueprint for AI in the Enterprise
  • Technology overview: how do you choose the best tools to provide business value?

In this section, we will set the scene for AI in the enterprise. There is a very wide range of technologies in the AI space; this section introduces the key players, how they compare with one another, and clear explanations of how they are best used. The session will also propose a blueprint for delivering successful AI projects from the business perspective.


10.00am Get started with AI in Azure

  • Introduction to AzureML
  • Build simple machine learning models with Azure Machine Learning

In this section, you will get hands-on experience building a machine learning model end-to-end using AzureML, which is intended to consolidate the knowledge you have gained so far. You will ingest data, select a model, train and test it, and make it production-ready. Then you can visualize your results in Power BI.

11.30am Selecting your model in AI

  • An exploration of models in AI
  • Selecting models in AI
  • Evaluating models in AI

In this section, we will cover AI models in detail. We will look at the models themselves, their differences and similarities, and how to choose between the models. We will also look at ways of evaluating models.

12.30pm – 1.30pm Lunch

1.30pm Working with Microsoft ML Server and R

  • Fundamentals of R
  • Microsoft ML Server
  • Using R with Microsoft ML Server

In this section, we will cover the fundamentals of R and how we can use it to create robust production models using Microsoft ML Server. R is a first-class citizen in Microsoft’s Data Platform offerings, and it touches other technologies such as AzureML, SQL Server and Power BI. We will cover its use in Microsoft Machine Learning Server to help provision a flexible enterprise platform for analyzing data at scale, building intelligent apps, and discovering valuable insights across your business. Machine Learning Server meets the needs of everyone in the process – from data engineers and data scientists to line-of-business programmers and IT professionals. It offers a choice of languages, and its algorithmic innovation brings the best of the open-source and proprietary worlds together.

2.45pm Break

3.00pm Python Data Science Notebooks and Labs

Python is an important skill in data analysis, data science and artificial intelligence. In this final segment, you will learn about Python, how to use it, and how to use notebooks to work with your code.

5.00pm Wrap-up and Q&A

Fracking For the Rest of Us: Exploring with #PowerBI

Having heard some of the news about fracking on the BBC, I decided to try to understand it better. I thought I’d analyse some data and post my findings here.

What is Fracking?

Hydraulic fracturing, or fracking, is a technique to extract gas and oil from the earth. Liquid is pumped underground at high pressure to fracture shale rock and release the gas or oil within. Here is a diagram, courtesy of the BBC:

[Image: shale gas extraction diagram]

Image Credit: BBC

What does Fracking have to do with Earthquakes?

A 2011 study found it was “highly probable” that test drilling for shale gas caused the tremors recorded at that time. Despite the findings, the UK Government and Cuadrilla decided to proceed.

People are concerned that fracking will cause more earthquakes than expected in their region. The company responsible for the fracking is Cuadrilla, whose chief executive, Francis Egan, commented that he expected more incidents to be recorded because of the sensitivity of the company’s recording equipment.

This irked me, because it pushed the blame onto the equipment rather than the fracking itself. So I thought I’d take a look at the data, to see whether earthquakes were more prevalent in the Lancashire area.

Let’s look at the earthquake data from the British Geological Survey for the last 100 days. I noted only one error in the data: one record gives the region as Blackpool.Lancashire, with a full stop, while the others report Blackpool,Lancashire, with a comma. It was trivial to fix, so I just did it.
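For readers who prefer to script this sort of clean-up rather than do it by hand, here is a minimal sketch in pandas. The column names and toy values are my assumptions; the original fix was made directly in Power BI.

```python
import pandas as pd

# Toy stand-in for the BGS "last 100 days" extract; column names are assumed,
# since the original fix was made in Power BI rather than in code.
quakes = pd.DataFrame({
    "Region": ["Blackpool,Lancashire", "Blackpool.Lancashire", "Oban,Argyll And Bute"],
    "Magnitude": [0.8, 0.3, 1.1],
})

# Normalise the single inconsistent record (full stop instead of comma).
quakes["Region"] = quakes["Region"].str.replace(
    "Blackpool.Lancashire", "Blackpool,Lancashire", regex=False
)

print(quakes["Region"].value_counts())
```

With the real extract you would load the CSV instead of building the frame inline; the string replacement is the same one-liner.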

How many earthquakes occurred in the Lancashire area over the last 100 days compared to other areas?

This chart, created in Microsoft’s Power BI, is fairly compelling: the largest number of earthquakes over the last 100 days took place in the Lancashire region:

1 Fracking Earthquake Count

This chart only shows the top eight regions, so let’s take a look at Lancashire versus the rest of the UK:

2 Fracking Lancashire vs non Lancashire

OK, so over the last one hundred days, one third of UK earthquakes occurred in Lancashire. From the data visualization perspective, yes, it is a pie chart, but pie charts are useful and impactful when you have only a few slices and you want to make an impact.

Over time, what does this really mean?

The chart below shows a huge jump in the number of earthquakes in Lancashire in October 2018, the month the fracking began. Lancashire earthquakes are denoted in blue:

3 Lancashire vs Non Lancashire last 100 days

So, the chart shows that there were a total of 23 earthquakes in the Lancashire area in October. But when did they actually take place?

Fracking resumed at Little Plumpton on 15 October, making it the first UK shale site to operate since the process was halted in 2011. How many of the 23 earthquakes took place from that date onwards? Lancashire earthquakes are shown in dark teal blue; other areas in lighter blue. Some dates are missing on the X axis because no earthquakes occurred on those dates. You can see that they start in earnest in the second half of the month.

4 Number of Earthquakes in October 2018 from 1 to 27 Oct

We can see from this simple chart that all 23 earthquakes in Lancashire took place from October 18th onwards. Note that the fracking in Lancashire started on 15th October 2018.
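The date filter behind this chart can be sketched in code. This is a minimal illustration with toy values; the column names are assumptions, and the real analysis was done in Power BI against the full BGS extract.

```python
import pandas as pd

# Toy dates and regions standing in for the BGS extract (values are made up).
quakes = pd.DataFrame({
    "Date": pd.to_datetime(["2018-10-10", "2018-10-18", "2018-10-19", "2018-10-26"]),
    "Region": ["Oban,Argyll And Bute", "Blackpool,Lancashire",
               "Blackpool,Lancashire", "Blackpool,Lancashire"],
})

# Keep only Lancashire events on or after fracking resumed (15 October 2018).
lancashire = quakes[quakes["Region"].str.contains("Lancashire")]
since_fracking = lancashire[lancashire["Date"] >= "2018-10-15"]
print(len(since_fracking))
```

On the real data, the same filter would return the 23 Lancashire events from 18 October onwards.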

Where did these earthquakes occur? Let’s look at the Lancashire earthquakes, with Lancashire highlighted in teal blue:

5 UK Map of Fracking

Let’s zoom in on Lancashire. According to the Cuadrilla site, the fracking takes place at Preston New Road.

According to the data, the earthquakes around Blackpool in October happened at this site. I plotted the latitude and longitude of the earthquakes from the British Geological Survey data on a map in Power BI, using Bing for the background maps. Note that this is inexact, because the data supplies only rounded latitude and longitude values. It is a graphical representation, just to give the reader an idea of where the centre is recorded according to the British Geological Survey data. The dataset is titled ‘Earthquakes around the British Isles in the last 100 days’ and records seismic activity from the very small to the very large. This map shows only location, not severity.

7 Map of Lancashire earthquakes

There are fewer bubbles than 23 earthquakes in total because some earthquakes occurred repeatedly at the same place, according to the rounded latitude and longitude data. The rounding introduces inaccuracies, but the map is given here as an illustration.
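The way repeated rounded coordinates collapse into single bubbles can be sketched as follows. Column names and coordinate values are assumptions for illustration; the real map was built in Power BI.

```python
import pandas as pd

# Toy rounded coordinates in the style of the BGS data (values are made up).
quakes = pd.DataFrame({
    "Lat": [53.8, 53.8, 53.8, 53.9],
    "Long": [-2.9, -2.9, -2.9, -3.0],
})

# Events sharing the same rounded coordinates collapse into one map bubble;
# counting events per location lets you size each bubble by its count instead.
bubbles = quakes.groupby(["Lat", "Long"]).size().reset_index(name="Count")
print(bubbles)
```

Here four events collapse into two bubbles, one of which represents three earthquakes; sizing bubbles by `Count` keeps that information visible on the map.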

Caveat: this map isn’t totally accurate. Further work would involve more detailed mapping with something like ArcGIS, using full latitude/longitude data rather than rounded values. Here is what the data looks like, and you can get it here:

The table also shows the magnitude; the movements range from tiny events (0.0) to larger ones (0.8).

8 Data Sample for earthquakes

On Friday 26th October, a Cuadrilla spokesman said: “Micro seismic events such as these result in tiny movements that are way below anything that would be felt at surface, much less cause any harm or damage.”

How does my data compare to the BBC’s? The BBC have generously provided a map, which matches mine at a high level. Mine is more detailed because I used the latitude and longitude data from the British Geological Survey. Here is the BBC version:

[Image: BBC fracking map]

Power BI made it easy to go through the data, fix an error, and then produce simple graphs, charts and maps to tell a story. I’m not a geological scientist, but even a simple view of the data should raise serious concerns about the human activity there and what we are doing to the area.

Lancashire isn’t supposed to have 23 earthquakes in the space of ten days.

I’m not reassured by the Cuadrilla message at all; my argument is that these ‘tiny movements’ should not be taking place at all, and it’s obvious that human activity is causing them.

The data shows that these events wouldn’t be happening if the fracking wasn’t taking place. In Lancashire, 23 earthquakes in the space of less than ten days should be a call to everyone to wake up and stop this activity permanently.

Why Lancashire?

The Lancashire County Council’s decision process on fracking can be found here. Note that the Council have forecast a medium-term funding gap of £144.084m by the end of the four-year period (2018/19–2021/22). Cuadrilla have invested £10m in Lancashire, but this is nowhere near the funding gap the County Council have projected.

The Government is still looking at ways to replace the money that Lancashire will lose as a result of leaving the EU, in the form of the Shared Prosperity Fund, but, to date, nothing has been settled for Lancashire. My hunch – and it is a pure guess – is that Lancashire’s finances need to be sorted somehow, and that’s why fracking has been pushed through. Lancashire is running out of options.

In Scotland, there is investment in green energy and you don’t have to look far to see windmills. I don’t see why the good people of Lancashire can’t be afforded the same investment in green energy, with jobs, reskilling and opportunities in the things that we should be great at: community, science, tech and energy. Britain needs jobs and opportunities more than ever, and the North needs to be regarded just as carefully as the South of the UK.

Opinion

My opinion is that between the Brexit debacle and fracking, we are shooting ourselves in the foot in Britain. Any hearkening back to a better age of Britain is just nonsense. We can’t sort ourselves out.

Whether you agree with Brexit or not, we still haven’t managed to get an agreement sorted and we should have done that by now. Hell, we can’t even get our rail and roads sorted. And now we are creating environmental earthquakes because we need the money. That smells of desperation and I’m sure that the stench of it will attract even more of this type of behaviour until something really bad happens.

Get Hygge with it – Your 5 Step Guide to #ESPC18

[Image: ESPC18 speaker graphic]

I’m excited to be presenting at the European SharePoint, Office 365 and Azure Conference in Copenhagen on 26th–29th November. The conference has great content delivered by Microsoft MVPs, Regional Directors and Microsoft team members. It’s a great place to get all of your Microsoft content and news, to help you with your current technical estates as well as plan for the future. I’d recommend attending: it is THE European conference for these topics, and thousands of attendees are going along to learn, so why aren’t you?

BOOK YOUR TICKET today and use coupon code ESPC18SPK to receive a €100 discount.

I’m delighted to be giving the Artificial Intelligence keynote: Artificial Intelligence: Winning the Red Queen’s Race.

To help you to navigate the conference, here’s a set of handy tips. I hope to see you there!

PLAN Your Sessions

Explore the ESPC18 SCHEDULE to identify the sessions you’d like to attend and create a schedule suited to you. If you are travelling with colleagues, divide and conquer by attending different sessions, then swap notes afterwards to increase your learning.
Don’t have time to delve through the 120+ session schedule? Dedicated pages have been created to save you time: check out the ESPC conference schedule pages for IT PRO, DEV, BDM, AZURE or MICROSOFT SESSIONS.

CONNECT with Your Peers

Join the ESPC18 App (login details will be announced in the coming weeks) and follow the hashtag #ESPC18 to find out who’s attending. Why not make your presence known by sending out a tweet introducing yourself? Arrange to meet new people at ESPC – you never know what you might discover! Helpful tip: carry some business cards with you to share with the new people you meet.

LEARN, Learn, Learn

Before arriving at ESPC, take some time to identify what questions you want answered, and prepare them in advance to ask during a session Q&A. Why not take a break from sessions and advance your skills at the ESPC labs, or check out the Ask the Experts session? Remember, it’s good practice to note three key takeaways from each session.

Have Fun

With 2,000 delegates expected to attend, ESPC offers a host of exciting day and night experiences. Check out the WOMEN IN TECHNOLOGY LUNCH, the jam-packed EXPO HALL, or challenge other delegates and the ESPC team to some fun games in the Community Area.
Not enough? Be sure to join the EXPO drinks on Tuesday evening or enjoy a magical night at the sell-out ESPC18 PARTY: A NIGHT AT THE CIRCUS.

SHARE with your Team

Schedule a meeting post conference to share your key learnings with your fellow co-workers. Impress colleagues with your advanced knowledge while maximizing value for your company. Share your inspiration and make plans to implement what you learned.

Still undecided about attending the leading European SharePoint, Office 365 & Azure Conference? Visit 10 REASONS TO ATTEND ESPC18 to see why you need to be there.

BOOK YOUR TICKET today and use coupon code ESPC18SPK to receive a €100 discount.

Microsoft Ignite interview with Kevin Farlee on Azure SQL Database Hyperscale

Azure SQL Database is introducing two new features to cost-effectively migrate workloads to the cloud. SQL Database Hyperscale for single databases, available in preview, is a highly scalable service tier that adapts on demand to workload needs. It auto-scales up to 100 TB per database to significantly expand potential for app growth.
What does this mean? It’s one of the most fundamental changes to SQL Server storage since SQL Server 7.0. So this is big: big news, and very big data stores. I am very lucky because I got to interview Kevin Farlee of the SQL Server team about the latest news, and you can find the video below.

I am sorry about the sound quality, so I have blogged the details here to make the message clear. When I find the Ignite sessions published, I will add a link as well.
What problem are the SQL Server team solving with Hyperscale? The fundamental problem is how to deal with very large databases (VLDBs) in the cloud. The problems with VLDBs all stem from the sheer size of the data: operations such as backups, restores, maintenance and scaling can sometimes take days, and the business will not wait for that downtime. If you are talking tens of terabytes, Microsoft ultimately needed a new way to protect data in VLDBs. The SQL team did something really smart and rethought storage very creatively, in order to take care of the issues with VLDBs in the cloud.
So, the Azure SQL Database team did something completely in line with one of the main benefits and key features of cloud architecture: they split the storage engine out from the relational engine. The storage implementation was completely rethought and rebuilt from the ground up. They asked: how would you architect, design and build these solutions in the cloud if you were starting from scratch?
The team did a smart thing: Azure SQL Database Hyperscale uses microservices to handle VLDBs.
The compute engine is one microservice taking care of its role; another microservice takes care of the logging; and a series of microservices handle the data. These are called page servers, and they interface at the page level, hosting and maintaining the data files. Each page server handles about a terabyte of data pages, and you can add as many as you need.
Ultimately, compute and storage are decoupled, so you can scale compute without moving the data. This means you can keep adding more and more data without having to deal with data movement – moving terabytes and terabytes of data around isn’t a trivial task. Each page server holds about a terabyte of data and has about a terabyte’s worth of SSD cache.
The ultimate storage layer is Azure Blob Storage, which is multiply redundant and has features like snapshots. This means backups can be taken simultaneously by snapshotting across all of the blobs, with no impact on the workload.
Restores
Restores simply instantiate a new set of writeable disks from a set of snapshots, with the page servers and the compute engine working in concert to take care of it. Since you’re not moving the data, it is faster.
I’m personally very impressed with the work the team has done, and I’d like to thank Kevin Farlee for his time. Kevin explains things exceptionally well.
It’s worth watching the video to understand it. As well as the video here, Kevin goes into detail in his Microsoft Ignite sessions, and I will publish more links when I have them.
Community
One advantage of doing the Microsoft Community Reporter role is that I get to learn from the experts, and I enjoyed learning from Kevin throughout the video.
It seems to me that the Azure SQL Database team have really heard the voice of their technical audience, and they’ve worked hard and passionately to tackle these real-life issues. I don’t know if it is always clear that Microsoft is listening, but I wanted to blog about it, since I can see how much the teams take on board the technical ‘voice’ of the people who care about their solutions, and who care enough to share their opinions and thoughts so that Microsoft can improve them.
From the Azure architecture perspective, it works perfectly with the cloud computing concept of decoupling the compute and the storage. I love watching the data story unfold for Azure and I’m excited by this news.

Microsoft Ignite interview with Executive Team on #ArtificialIntelligence, #Data, #OpenSource and #Cloud

There were a number of announcements across Azure Data + AI at Microsoft Ignite, and I’m delighted to say that I had the opportunity to interview Rohan Kumar, Corporate Vice President, Azure Data at Microsoft, and Eric Boyd, Corporate Vice President, AI at Microsoft.

In the interview, Rohan Kumar and Eric Boyd give their opinions and thoughts to myself and Cathrine Wilhelmsen on the big picture across Data & AI. I was super excited, since it was the first time these Microsoft executives had been interviewed together, and I was particularly interested to see how Rohan and Eric cross-referenced each other’s areas. It’s clear they work in concert as a team, and I’m glad to see that, because data and Artificial Intelligence impact one another so much.

Rohan and Eric talk about the announcements that excited them both, and there was also a good discussion on the role of Open Source at Microsoft, and what role it plays in Microsoft’s Data and Artificial Intelligence story.

There was a great discussion of Eric and Rohan’s thoughts on Microsoft’s role in making Artificial Intelligence and insight-driven analysis real for organizations. Every organization on the planet has data, and Microsoft is carving a path for the organizations that want to make use of it.

I’m personally interested in Amara’s Law, which states that “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” So I was interested in Rohan and Eric’s perspectives on what aspects of AI are real versus hype. What is Microsoft doing to make AI real and actionable for customers?

[Image: Amara’s Law]

We wrapped up with a great conversation on the Microsoft and Facebook collaboration, which I personally find interesting.

It was a real achievement for me to participate in the Microsoft Ignite Community Reporter team and to interview the Microsoft executives. I’d like to thank Rohan Kumar and Eric Boyd for their time and for sharing their wisdom and insights.

I was also glad to be on board with Cathrine Wilhelmsen. Cathrine was a wonderful friend and support throughout the week; she’s not just an expert in her domain, but also a very giving person in terms of her friendship and support. The interview holds special meaning for me because I had the opportunity to work with her.