Fixing ‘Could not create Directory’ in Azure WordPress installation of Elementor Pro

I have ported my Data Relish company website over to Azure. It isn’t finished yet, and I’m making the final touches. I’m using WordPress on Azure and it’s been a good experience in understanding how the moving parts of Azure, Office 365 and WordPress all hang together.

In trying to update Elementor Pro, I got the following error message:

An error occurred while updating Elementor Pro: Could not create directory. elementor-pro/modules/assets-manager/asset-types/fonts

After some digging around, it turns out that this is due to the length of the directory path. Windows limits a full path to 260 characters (the MAX_PATH limit), and Elementor Pro’s deeply nested sub-directories pushed the full path past that limit, so the directories simply could not be created.
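If you want to check whether you are anywhere near the limit, you can measure the longest paths under the plugin directory yourself. Here is a minimal Python sketch, assuming a hypothetical plugin path (D:\home\site\wwwroot is the usual web root on Azure App Service for Windows, but check your own instance):

    import os

    # Hypothetical plugin path on an Azure App Service (Windows) instance.
    SITE_ROOT = r"D:\home\site\wwwroot\wp-content\plugins\elementor-pro"
    MAX_PATH = 260  # the classic Windows path-length limit

    # Walk the plugin tree and report any full path that breaches the limit.
    for dirpath, dirnames, filenames in os.walk(SITE_ROOT):
        for name in dirnames + filenames:
            full_path = os.path.join(dirpath, name)
            if len(full_path) >= MAX_PATH:
                print(f"{len(full_path)} chars: {full_path}")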

In order to sort the issue, you have to use the Windows Compatibility Fix.

Once the Fix is applied, it’s possible to create directories, and the upgrade to the latest edition of Elementor Pro should proceed as expected.

I hope that helps you to fly farther with WordPress!

Being a Microsoft Regional Director: faith, trust and pixie dust for good

I’m still learning about being a Microsoft Regional Director and I’m figuring things out. I’d like to thank Microsoft here for this opportunity, and I’d like to thank the great RD team at Microsoft for their seemingly endless patience with my questions!

Here is my opinion. I don’t represent anyone other than myself, and this is in no way official. I am extremely honored to be a Regional Director and an MVP, and I think that the RD role is worth exploring further. This is just an opinion; I’m still learning about the RD role since I am new to it. I might add that I’m still figuring out being an MVP as well. Actually, I’m still wondering what job I want to do when I grow up!

Let’s take an example. Recently, an email popped up in my mailbox from a senior executive and decision maker, who asked for a hiring strategy for Azure team members and a commentary about POs for Azure, including Power BI. That request showed me that I had made a huge impact at that customer site, which was a large organization and a big ship to steer around. In fact, it takes faith, hope and a little dash of pixie dust, as well as joining hands with the team, to make the jump in digital transformation: people, processes and technology. And then I rinse and repeat at other organizations, so that everyone can take a good leap of faith in the right direction.

Recently I was on the BBC, talking about a different client where I am helping with a data science for good project, which focuses on homelessness and other aspects of social care. I’ve put the video here, in case you’re interested:

You probably think that one-woman-band projects don’t mean much, but they do. In fact, it’s huge. I have been working with the first client for months, on and off, combining my time with other customers in an ad-hoc way. I am convinced that Azure is the right solution, and the role was born out of the roadmap to Azure that I had worked with them to produce as part of a larger strategy piece; it’s just the first role, and more will be added later.

For the second customer case, the work we are doing, using Microsoft technologies, is going to have a good impact on people’s lives. The data overrides your perceptions. When we think of homeless people, we think of the tramps on the street, right? Wrong. What about victims of domestic violence, who become victims of unexpected homelessness because they are in fear for their lives? What about their children? That’s how hard some people have it in their lives; in the tech world, we are often so blessed. What are we complaining about, really?

I don’t think that the RD role or the MVP role are sales roles at all. I don’t benefit financially from these recommendations. I am entirely independent and, if I recommend a solution, it’s because I believe that it is the correct solution.

So I think an RD is partially about having that strategic impact that Microsoft can really see and feel, in a good way. Nothing ties me to the purchase of Azure at all: I don’t sell Azure, and I made nothing from the sale. I’m an independent consultant, so I get paid for my time, not the fruits of my recommendations. But people will feel the results; the new hires, for example.

So I think an RD is partially about having that strategic impact that other people can really see and feel, in a good way. In these digital transformation pieces, I’m making people’s work easier for them through better processes, great technology, and mentoring, supporting and helping people. For the work I’m doing in data science for good, I’m using Microsoft Data Science technologies as part of an amazing, amazing team who are doing great things and making people’s lives better. I think that is it, really: it’s about using your pixie dust to do good things. It’s not about ‘bigger is better’ – bigger business, more GitHub contributions, higher turnover, a larger number of hires, a bigger number of Azure VMs, more forum answers or a bigger profile on Stack Overflow. I think it’s about having the same pixie dust as anyone else, but throwing it liberally on the right things.

Rule your mind or it will rule you – Buddha

I think it’s about personal growth. It’s also about striving to have a maturity of outlook and a cool head, and I am trying very hard to heal and be the person I’d like to be. I’m doing my MBA, and it’s all about personal growth and development. It’s unlike any other course I’ve done, since it means I get really hard feedback about myself as a person as well as my work. Some of the feedback is great, and other feedback is uncomfortable and provokes cognitive dissonance, but the self-honesty means that I can work on it through reflexive and reflective leadership techniques. For example, I’ve written before about having Imposter Syndrome, but now I am learning to watch my thoughts better (mindfulness and my Buddhist journey), which means I’m starting to understand better whether it really is Imposter Syndrome, or perhaps a reality check, or perhaps I am just being silly. I have grown so much in the past few years, and my Buddhist journey tells me that I have a long way to go.

When others go low, you go high

Kirk D Borne, who is an immensely insightful gentleman, asked me a deceptively simple question: what does this actually mean for you? I’d like to thank Kirk here, because his generous and insightful question provoked me to think about it for days. I love it when someone challenges me with a wise question, and one that I hadn’t considered before, which was kind of the point! I’ve decided on an answer: what being an RD means for me is the opportunity to network, learn and share with people who are brilliant, mature, optimistic and knowledgeable; who share freely, with no reward in it; who know when to speak and when to stay silent; and who are experienced in business and in the tech sphere. I’m with a great set of people whom I admire.

Accountability

Accountability is a very tough thing to learn, and it prompts a question that I ask myself every day: who is accountable? Professionally or personally, you can’t shrug off personal accountability. To lead by example, you have to be accountable, which means that people can have faith and trust in you.

It’s about people you can have faith and trust in, and striving to be that person. The RD program inspires me to work towards being all of these things and to consider accountability.

It also means that I am working to make sure that nobody steals my pixie dust. Michelle Obama inspires me here: when others go low, I go high. Words to live by!

Don’t let anyone steal your Pixie Dust

Following on from accountability, it’s about being an authentic you, and striving to be a better you. On my office wall, I have a picture of Tinkerbell.

My onboarding to the RD Program has been incredible and people outside and inside of Microsoft have been amazing. So I’d like to thank everyone who has congratulated me and I can promise that I will do my best.

Modelling your Data in Azure Data Lake

One of my project roles at the moment (I have a few!) is that I am architecting a major Azure implementation for a global brand. I’m also helping with the longer-term ‘vision’ of how that might shape up. I love this part of my job, and I’m living my best life doing this piece; I love seeing a project take shape until the end users, whether they are business people or more strategic C-level, get the benefit of the data. At Data Relish, I make your data work for different roles in organizations of every purse and every purpose, and I learn a lot from the variety of consulting pieces that I deliver.

If you’ve had even the slightest look at the Azure Portal, you will know that it has oodles of products that you can use in order to create an end-to-end solution. I selected Azure Data Lake for a number of reasons:

  • I have my eye on the Data Science ‘prize’ of doing advanced analytics later on, probably in Azure Databricks as well as Azure Data Lake. I want to make use of existing Apache Spark skills, and Azure Data Lake is a neat solution that will facilitate this option (see the sketch after this list).
  • I need a source that will cater for the shape of the data… or the lack of it…
  • I need a location where the data can be accessed globally, since the solution will be ingesting data from global locations.
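As an illustration of the first point, here is a minimal PySpark sketch of the kind of read I have in mind for later, assuming a hypothetical ADLS Gen2 account (“datarelishlake”) and container (“raw”); the path scheme and authentication will depend on your own environment:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("adls-read-sketch").getOrCreate()

    # Hypothetical ADLS Gen2 location: account "datarelishlake", container "raw".
    path = "abfss://raw@datarelishlake.dfs.core.windows.net/sales/2018/*.csv"

    # Read the landing data with the schema inferred, since the shape of
    # the data (or the lack of it) varies by source.
    df = spark.read.option("header", "true").option("inferSchema", "true").csv(path)
    df.printSchema()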

In terms of tooling, there are always the Azure Data Lake Tools for Visual Studio; you can watch a video on this topic here. But how do you get started with the design approach? How do I go about designing solutions for Azure Data Lake? There are many different approaches, and I have been implementing Kimball methodologies for years.

With this particular situation, I will be using the Data Vault methodology. I know that there are different schools of thought, but I’ve learned from Dan Linstedt in particular, who has been very generous in sharing his expertise; his website is here. I have previously delivered this methodology for an organization with a turnover of billions of US dollars, and they are still using the system that I put in place; it was a particularly helpful approach for an acquisition scenario, for example.

Building a Data Vault starts with the modelling process, and this starts with a view of the existing data model of a transactional source system. The purpose of the Data Vault modelling lifecycle is to deliver solutions to the business faster, at lower cost and with less risk, that also have a clearly supported afterlife once I’ve moved on to another project for another customer.

Data Vault is a database modelling technique where the data is considered to belong to one of three entity types: hubs, links, and satellites:

  • Hubs contain the key attributes of business entities (such as geography, products, and customers).
  • Links define the relations between the hubs (for example, customer orders or product categories).
  • Satellites contain all other attributes related to hubs or links, including all attribute change history.

The result is an Entity Relationship Diagram (ERD), which consists of Hubs, Links and Satellites. Once I’d settled on this methodology, I needed to hunt around for something to use.
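To make the three entity types concrete before opening an ERD tool, here is a minimal sketch of what a hub, a link and a satellite might look like as tables, using hypothetical Customer and Order entities; the hash keys, load dates and record sources are the usual Data Vault bookkeeping columns:

    # A sketch of Data Vault DDL for hypothetical Customer/Order entities.
    DDL = [
        """CREATE TABLE hub_customer (
            customer_hk   CHAR(32)     NOT NULL PRIMARY KEY, -- hash of business key
            customer_id   VARCHAR(50)  NOT NULL,             -- the business key
            load_date     DATETIME2    NOT NULL,
            record_source VARCHAR(100) NOT NULL
        )""",
        """CREATE TABLE link_customer_order (
            customer_order_hk CHAR(32)     NOT NULL PRIMARY KEY, -- hash of both keys
            customer_hk       CHAR(32)     NOT NULL,             -- refers to hub_customer
            order_hk          CHAR(32)     NOT NULL,             -- refers to hub_order
            load_date         DATETIME2    NOT NULL,
            record_source     VARCHAR(100) NOT NULL
        )""",
        """CREATE TABLE sat_customer_details (
            customer_hk   CHAR(32)     NOT NULL,  -- refers to hub_customer
            load_date     DATETIME2    NOT NULL,  -- gives the change history
            name          VARCHAR(200) NULL,      -- descriptive attributes
            country       VARCHAR(100) NULL,
            record_source VARCHAR(100) NOT NULL,
            PRIMARY KEY (customer_hk, load_date)
        )""",
    ]

    for statement in DDL:
        print(statement + ";")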

How do you go about designing and using an ERD tool for a Data Vault? I found a few options. For the enterprise, I found WhereScape® Data Vault Express. That looked like a good option, but I had hoped to use something open-source so that other people could adopt it across the team. It wasn’t clear how much it would cost, and, in general, if I have to ask then I can’t afford it! So far, I’ve settled on SQL Power Architect, so that I can get the ‘visuals’ across to the customer and the other technical team, including my technical counterpart at the customer, who picks up when I’m at a conference. This week I’m at the Data and BI Summit in Dublin, so my counterpart is picking up activities during the day, and we are touching base during our virtual stand-ups.

So, I’m still joining the dots as I go along.

If you’re interested in getting started with Azure Data Lake, I hope that this gives you some pointers on the design process.

I’ll go into more detail in future blogs, but I need to stop writing this blog and do some work!

Cloud computing as a leveler and an enabler for Diversity and Inclusion

I had the honour and pleasure of meeting a young person with autism recently, who is interested in learning about Azure and wanted some advice on extending his knowledge.
It was a great reminder that we can’t always see that people have conditions such as autism. The same goes for disabilities, particularly those that you can’t see; examples include epilepsy or even Chronic Fatigue Syndrome.

Diversity gives us the opportunity to become more thoughtful, empathetic human beings.

Credit: https://pixabay.com/en/users/geralt-9301/

I love cloud because it’s a great leveler for people who want to step into technology. It means that these personal quirks, or differences, or ranges of abilities can be sidestepped, since we don’t all need to fit the brogrammer model in order to be great at cloud computing. Since we can do so many things remotely, people have the flexibility to work in ways that suit them.

In my career, I couldn’t lift a piece of Cisco kit to rack it, because I was not strong enough. With cloud, it’s not a problem: the literally heavy lift-and-shift is already done. It really comes down to a willingness to learn and practice. I can also learn in a way that suits me, and that was the main topic of conversation with the young person with autism whom I had the pleasure to meet.

I believe that people should be given a chance. Diversity gives us the opportunity to become more thoughtful, empathetic human beings. In this world, there is nothing wrong with wanting more of that humanness.

Dynamic Data Masking in Azure SQL Data Warehouse

I’m leading a project which is using Azure SQL Data Warehouse, and I’m pretty excited to be involved. I love watching the data take shape and, for the customer requirements, Azure SQL Data Warehouse is perfect.

Note that my customer details are confidential and that’s why I never give details away such as the customer name and so on. I gain – and retain – my customers based on trust, and, by giving me their data, they are entrusting me with detailed information about their business.

One question they raised was in respect of dynamic data masking (DDM), which is present in Azure SQL Database. How does it manifest itself in Azure SQL Data Warehouse? What are the options regarding the management of personally identifiable information?

As we move ever closer to the implementation of GDPR, more and more people will be asking these questions. With that in mind, I did some research and found there are a number of options, which are listed here. Thank you to the Microsoft people who helped me to come up with some options.

1. Create an Azure SQL Database spoke as part of a hub and spoke architecture.

The Azure SQL Database spoke can create external tables over Azure SQL Data Warehouse tables in order to move data into the spoke. One note of warning: it isn’t possible to use DDM over an external table, so the data would have to be moved into Azure SQL Database first.
2. Embed masking logic in views and restrict access.

This is achievable but it is a manual process.
3. Mask the data through the ETL processes, creating a second, masked column.

This depends on the need to query the data. Here, you may need to limit access through stored procedures.
On balance, the simplest method overall is to use views to restrict access to certain columns. That said, I am holding a workshop with the customer in the near future in order to see their preferred options. In the meantime, I thought that this might help someone else. I hope that you find something here that will help you to manage your particular scenario.
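As a flavour of the view-based option, here is a minimal sketch of a masking view over a hypothetical customer table, with access granted to the view rather than the base table; the table, columns, roles and masking rules are illustrative only:

    # A sketch of option 2: masking logic embedded in a view, with access
    # restricted to the view. Table, columns and roles are hypothetical.
    MASKING_VIEW = """
    CREATE VIEW dbo.v_customer_masked AS
    SELECT customer_id,
           LEFT(email, 2) + '*****'          AS email,        -- partial mask
           'XXX-XX-' + RIGHT(national_id, 4) AS national_id,  -- keep last 4 digits
           country                                            -- not sensitive
    FROM dbo.customer;
    """

    GRANTS = """
    DENY  SELECT ON dbo.customer          TO reporting_role;
    GRANT SELECT ON dbo.v_customer_masked TO reporting_role;
    """

    print(MASKING_VIEW + GRANTS)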

How do you evaluate the performance of a Neural Network? Focus on AzureML

I read the Microsoft blog entitled ‘How to evaluate model performance in Azure Machine Learning’. It’s a nice piece of work, and it got me thinking. The blog post didn’t contain anything about neural network evaluation, so that topic is covered here.

How do you evaluate the performance of a neural network? This blog focuses on neural networks in AzureML, to help you understand what the evaluation results mean.

What are Neural Networks?

Would you like to know how to make predictions from a dataset? Alternatively, would you like to find exceptions, or outliers, that you need to watch out for? Neural networks are used in business to answer these questions: to make predictions from a dataset, or to find unusual patterns. They are best suited to regression or classification problems.
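AzureML itself is drag-and-drop, so there is no code to show for the experiment, but as a rough stand-in for readers who prefer code, here is what the same idea looks like in scikit-learn, training a small neural network on a toy classification question:

    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.preprocessing import StandardScaler

    # A toy classification question: predict the class from the features.
    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # Neural networks are sensitive to feature scale, so standardise first.
    scaler = StandardScaler().fit(X_train)
    model = MLPClassifier(hidden_layer_sizes=(20,), max_iter=1000, random_state=42)
    model.fit(scaler.transform(X_train), y_train)

    print("Accuracy:", model.score(scaler.transform(X_test), y_test))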

What are the different types of Neural Networks?

I’m going to credit the Asimov Institute with this amazing diagram:

Neural Network Types

In AzureML, we can review the output from a neural network experiment that we created previously. We can see the results by clicking on the Evaluate Model task, and then clicking on the Visualise option.

Once we click on Visualise, we can see a number of charts, which are described here:

  • Receiver Operating Characteristic (ROC) Curve
  • Precision / Recall
  • Lift Visualisation

The Receiver Operating Characteristic (ROC) Curve

Here is an example:

ROC Curve

In our example, we can see that the curve bows well up into the top left-hand corner of the ROC chart, which is what we want to see. When we look at the precision and recall curve, we can see that precision and recall are both high, and this leads to a high F1 score. This means that the model is effective in terms of how precisely it classifies the data, and that it covers a good proportion of the cases that it should have classified correctly.
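If you would like to reproduce the same chart outside AzureML, here is a minimal sketch with scikit-learn, using hypothetical labels and scores:

    from sklearn.metrics import roc_auc_score, roc_curve

    # Hypothetical true labels and model scores for ten cases.
    y_true  = [0, 0, 1, 1, 0, 1, 0, 1, 1, 0]
    y_score = [0.1, 0.3, 0.8, 0.7, 0.2, 0.9, 0.4, 0.6, 0.85, 0.15]

    # The ROC curve plots the true positive rate against the false positive
    # rate at every threshold; a curve hugging the top left corner is good.
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    print("AUC:", roc_auc_score(y_true, y_score))
    for f, t in zip(fpr, tpr):
        print(f"FPR={f:.2f}  TPR={t:.2f}")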

Precision and Recall

Precision and recall are very useful for assessing models in terms of business questions. They offer more detail and insights into the model’s performance. Here is an example:

Precision can be described as the fraction of the cases flagged by the model that are classified correctly. It can be considered a measure of confirmation, and it indicates how often the model is correct. Recall is a measure of utility: it identifies how much the model finds of all that there is to find within the search space. The two scores combine to make the F1 score, which is the harmonic mean of precision and recall: F1 = 2 × (precision × recall) / (precision + recall). If either precision or recall is small, then the F1 score will be small.
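As a minimal sketch of how the three scores relate, again using hypothetical labels and hard predictions:

    from sklearn.metrics import f1_score, precision_score, recall_score

    # Hypothetical true labels and the model's hard predictions.
    y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]

    p  = precision_score(y_true, y_pred)  # how often a flagged case is correct
    r  = recall_score(y_true, y_pred)     # how much of what exists is found
    f1 = f1_score(y_true, y_pred)         # harmonic mean: 2*p*r / (p + r)
    print(f"precision={p:.2f} recall={r:.2f} F1={f1:.2f}")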

Lift Visualisation

A lift chart visually represents the improvement that a model provides when compared against a random guess; this is called a lift score. With a lift chart, you can compare the accuracy of predictions for models that have the same predictable attribute.
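The underlying idea is easy to compute yourself; here is a minimal sketch of a lift score, with hypothetical labels and scores:

    import numpy as np

    def lift_at(y_true, y_score, fraction=0.1):
        """Lift in the top `fraction` of cases ranked by model score,
        relative to the overall positive rate (i.e. a random guess)."""
        y_true = np.asarray(y_true)
        n_top = max(1, int(len(y_score) * fraction))
        order = np.argsort(y_score)[::-1]        # highest scores first
        top_rate = y_true[order[:n_top]].mean()  # hit rate in the top slice
        return top_rate / y_true.mean()          # versus the baseline hit rate

    # Hypothetical scores: 1,000 cases, roughly 10% positives.
    rng = np.random.default_rng(0)
    y_true = rng.random(1000) < 0.1
    y_score = y_true * 0.3 + rng.random(1000)    # positives score higher on average
    print("Lift in the top decile:", round(lift_at(y_true, y_score), 2))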

Summary

To summarise, we have examined various key metrics for evaluating a neural network in AzureML. These metrics also apply to other technologies, such as R.

These criteria can help us to evaluate our models, which, in turn, can help us to evaluate our business questions. Understanding the numbers helps to drive the business forward, and visualising these numbers helps to convey their message.

In my next blog, I’ll talk a little about how we can make a neural network perform better.

Data Preparation in AzureML – Where and how?

One question that keeps popping up in my customer AzureML projects is ‘How do I conduct data preparation on my data?’ For example, how can we join the data, clean it, and shape it so that it is ready for analytics? Messy data is a problem for every organisation. If you don’t think it is an issue for your organisation, perhaps you haven’t looked hard enough.

To answer the question properly, we need to stand back a little and see the problem as part of a larger technology canvas. From the enterprise architecture perspective, it is best to do data preparation as close to the source as possible. The reason is that the cleaned data then acts as a good, consistent source for other systems, and you only have to do the work once: you have cleaned data that you can re-use, rather than re-do, for every place where you need to use it.
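To make the ‘do it once, close to the source’ idea concrete, here is a minimal pandas sketch of the join-clean-shape step, with hypothetical files and column names; the cleaned output then becomes the single source that every downstream tool reads:

    import pandas as pd

    # Hypothetical raw extracts from the source system.
    orders = pd.read_csv("orders.csv", parse_dates=["order_date"])
    customers = pd.read_csv("customers.csv")

    # Join, clean, and shape once, close to the source.
    clean = (
        orders.merge(customers, on="customer_id", how="left")
              .dropna(subset=["customer_id"])     # drop orphaned orders
              .assign(country=lambda d: d["country"].str.strip().str.title())
              .drop_duplicates(subset=["order_id"])
    )

    # One consistent output for AzureML, Power BI, Excel and Tableau alike.
    clean.to_csv("orders_clean.csv", index=False)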

Let’s say you have a data source, and you want to expose the data in different technologies, such as Power BI, Excel and Tableau. Many organisations have a ‘cottage industry’ style of enterprise architecture, where they have different departments using different technologies. It is difficult to align data and analytics across the business, since the interpretation of the data may be implemented in a manner that is technology-specific rather than business-focused. If you take a ‘cottage industry’ approach, you would have to repeat your data preparation steps across different technologies.

When we come to AzureML, the data preparation perspective isn’t forgotten, but AzureML isn’t a strong data preparation tool like Paxata or Datameer, for example. It’s the democratization of data for the masses, yes, and I see the value it brings to businesses. However, it’s meant for machine learning and data science, so you should expect to use it for those purposes. It’s not a standalone data preparation tool, although it does take you part of the way.

The data preparation facilities in AzureML can be found here. If you have to clean up data in AzureML, my futurology ‘dream’ scenario is that Microsoft offers weighty data preparation as a task, like other tasks in AzureML. You could click on the task, and roll-your-own data preparation would pop up in the browser (all browser-based), provided by Microsoft, or perhaps Paxata or Datameer would pop out as a service, hosted in Azure as part of your Azure portal services. Then you would go back to AzureML, all in the browser. In the meantime, you would be better off trying to follow the principle of cleaning the data up as close to the source as possible.

Don’t be downhearted if AzureML isn’t giving you the data preparation that you need. Look back to the underlying data, and see what you can do; the answer might be as simple as writing a view in SQL Server. AzureML is for operations and machine learning further downstream. If you are having serious data preparation issues, then perhaps you are not ready for the modelling phase of CRISP-DM, so you may want to take some time to think about those issues first.