Computer Vision in Microsoft Azure


#TheMVPChallenge continues as I journey through MS Learn modules in Azure. This week I'm working through the topic MS Learn: Computer Vision in Microsoft Azure

To complete the hands on exercises in this unit, you'll need to choose an Azure resource: 

  • Custom Vision: A dedicated resource for the Custom Vision service, which can be a training resource, a prediction resource, or both.
  • Cognitive Services: A general cognitive services resource that includes Custom Vision along with many other cognitive services. You can use this type of resource for training, prediction, or both.

Tips for Hands-On Labs

This lab uses a virtual machine run through LabOnDemand. I love this as an instructor, because it enables me to see students' screens and troubleshoot when doing virtual or blended courses. However, as an end user who has never used it before, it can be a bit overwhelming. 

  • You can use the same launch instance to run multiple modules (until you reach the 1hr time limit)
  • Use the icons at top left to: 
    • Expand the screen to full screen (Computer icon)
    • Paste clipboard text (Lightning bolt icon)
screenshot of LabOnDemand

Analyze images with the Computer Vision service

This module brings back memories of science class and classifying plants and animals into Kingdom, ... , Family, Genus, Species. Apparently, Azure Computer Vision classifies images into an 86-category taxonomy.

Features

Computer Vision enables you to analyze images and return many 'features'. In this exercise we get to look at: 

  • Description
  • Tags
  • Adult
  • Objects
  • Faces

The image below is the result of one of the Computer Vision predictions that you will see during this unit's hands-on learning. Can you identify the results that match each of the features in the list above? 

Screenshot of Computer Vision result
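
If you'd rather call the service from code than use the portal, here's a minimal Python sketch of the same analysis. It assumes the azure-cognitiveservices-vision-computervision package; the key, endpoint, and image URL are placeholders you'd swap for your own:

```python
from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import VisualFeatureTypes
from msrest.authentication import CognitiveServicesCredentials

# Placeholders - use the key and endpoint from your own Azure resource.
key = "<your-cognitive-services-key>"
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"

client = ComputerVisionClient(endpoint, CognitiveServicesCredentials(key))

# Request the same five features the exercise explores.
analysis = client.analyze_image(
    "https://<your-storage>/store-camera-1.jpg",  # placeholder image URL
    visual_features=[
        VisualFeatureTypes.description,
        VisualFeatureTypes.tags,
        VisualFeatureTypes.adult,
        VisualFeatureTypes.objects,
        VisualFeatureTypes.faces,
    ],
)

print(analysis.description.captions[0].text)  # e.g. 'a person standing in a store'
print([tag.name for tag in analysis.tags])    # each tag also carries a confidence score
```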

Classify images with the Custom Vision service

This module starts to get into more specifics on how we might train and use the Custom Vision service to develop our own models for everyday use - without needing to be experts in data science or machine learning! Pretty neat. 

What will you use Custom Vision for? 

  • Classifying products
  • Identifying key structures (power lines, bridges, skyscrapers)
  • Other?

One key takeaway that stood out to me was the importance of providing the model with enough of the right data to train it properly: 

"One of the key considerations when using images for classification, is to ensure that you have sufficient images of the objects in question and those images should be of the object from many different angles."

Again, classification is one of my favorites, and this module was lots of fun and very hands-on. I even managed to classify my images in Spanish - well, at least 'naranja'. 

screenshot Classification Custom Vision

Detect objects in images with the Custom Vision service

Training vs Predicting

Custom Vision and Cognitive Services resources in Azure both let you train and predict, but to do both with Custom Vision you need TWO Custom Vision resources: one for training and one for prediction. While there are reasons to keep these two separate, sometimes you want a simple solution with a single key and endpoint for BOTH training and prediction; in that case, use a Cognitive Services resource, which allows training and prediction in the same resource.
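
In code, that split looks something like the sketch below - it assumes the azure-cognitiveservices-vision-customvision Python package, with placeholder keys and endpoints. Notice the training and prediction clients authenticate separately; with a Cognitive Services resource, the same key and endpoint would work for both:

```python
from azure.cognitiveservices.vision.customvision.training import CustomVisionTrainingClient
from azure.cognitiveservices.vision.customvision.prediction import CustomVisionPredictionClient
from msrest.authentication import ApiKeyCredentials

# With dedicated Custom Vision resources these are TWO different keys/endpoints;
# with a single Cognitive Services resource, one key and endpoint serve both roles.
training_credentials = ApiKeyCredentials(in_headers={"Training-key": "<training-key>"})
trainer = CustomVisionTrainingClient("<training-endpoint>", training_credentials)

prediction_credentials = ApiKeyCredentials(in_headers={"Prediction-key": "<prediction-key>"})
predictor = CustomVisionPredictionClient("<prediction-endpoint>", prediction_credentials)
```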

Tagging

This was a laborious process - you must manually tag each image before you can train the model. 

I tried to train the model without doing the (very manual) tagging process on all the images but got an error: 

Screenshot of error: Your project can't be trained just yet. Make sure you have at least 15 images for every tag.

It would be great if we could somehow use the classification model to help with the tagging process for object detection. 

Ultimately, it's still faster than having to create your own object detection model from scratch.

Human Error and Accuracy

I made at least 2 mistakes when tagging the fruits, and didn't bother to correct them (on purpose - I wanted to see how this would impact my results). My model was still able to accurately detect the apple and orange, but it also detected a phantom banana. Does yours do the same?

screenshot of orange apple detect results

How would you use this in the real world? 

  • Evaluating the safety of a building by looking for fire extinguishers or other emergency equipment.
  • Creating software for self-driving cars or vehicles with lane assist capabilities.
  • Medical imaging such as an MRI or x-rays that can detect known objects for medical diagnosis.

Detect and analyze faces with the Face service

Most of us have used facial recognition software before, such as: 

  • Tagging photos of friends and family (on social media or cloud photo storage)
  • Device security (to unlock phone or computer)

This technology is well advanced - the lab exercise for this module is fairly easy, and I didn't feel like I had much influence or input in what was happening. We're merely utilizing tried-and-tested models to detect faces, pick out common features, and identify people. 

What impressed me was that, when provided with a single profile view of a shopper from the left-hand side, the Azure Face service was able to detect that a profile view from the right-hand side was the same shopper. Facial recognition is a really cool science that fascinates me.
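
For a rough idea of what the service is doing under the hood, here's a hedged Python sketch assuming the azure-cognitiveservices-vision-face package (all URLs and keys are placeholders): detect a face in each profile view, then ask the service whether the two faces belong to the same person.

```python
from azure.cognitiveservices.vision.face import FaceClient
from msrest.authentication import CognitiveServicesCredentials

face_client = FaceClient("https://<your-resource>.cognitiveservices.azure.com/",
                         CognitiveServicesCredentials("<your-key>"))

# Detect one face in each profile view (placeholder image URLs).
left = face_client.face.detect_with_url("https://<your-storage>/shopper-left.jpg")
right = face_client.face.detect_with_url("https://<your-storage>/shopper-right.jpg")

# Ask the service whether the two detected faces are the same person.
result = face_client.face.verify_face_to_face(left[0].face_id, right[0].face_id)
print(result.is_identical, result.confidence)
```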

As a side note, did you know that there's such a thing as 'face blindness', or prosopagnosia? I'm interested to see how the research around this disorder develops, and whether we can use the same technology from the Face service to help people with prosopagnosia. 

Read text with the Computer Vision service

OCR, or Optical Character Recognition, is a term you may have heard before. This module gives some background on where OCR started and shows off the Azure capability. However, you don't need Azure to take advantage of OCR. One of my favorite apps that is FANTASTIC at OCR is Microsoft OneNote.

The key takeaway from this module is the difference between the Read API and the OCR API. 
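
The difference shows up in code: OCR is a single synchronous call suited to small amounts of text, while Read is asynchronous - you submit the image, then poll for the result. Here's a minimal sketch of the Read pattern, assuming the azure-cognitiveservices-vision-computervision package with placeholder values:

```python
import time

from azure.cognitiveservices.vision.computervision import ComputerVisionClient
from azure.cognitiveservices.vision.computervision.models import OperationStatusCodes
from msrest.authentication import CognitiveServicesCredentials

client = ComputerVisionClient("https://<your-resource>.cognitiveservices.azure.com/",
                              CognitiveServicesCredentials("<your-key>"))

# Submit the image, then poll the async operation until it completes.
operation = client.read("https://<your-storage>/document.jpg", raw=True)
operation_id = operation.headers["Operation-Location"].split("/")[-1]

result = client.get_read_result(operation_id)
while result.status in (OperationStatusCodes.running, OperationStatusCodes.not_started):
    time.sleep(1)
    result = client.get_read_result(operation_id)

for page in result.analyze_result.read_results:
    for line in page.lines:
        print(line.text)
```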

Bonus Activity: Try OCR in Microsoft OneNote

  1. Open Microsoft OneNote (download OneNote for free)
  2. With this blog post visible, press [Windows Key] [Shift] [S] simultaneously on your keyboard.
  3. Drag a box around a portion of text in this blog - try not to cut off the left and right.
  4. Return to OneNote and paste the screenshot.
  5. Right-click on the image, then choose Copy Text from Picture
  6. Click somewhere else in the OneNote page and Ctrl V to paste the text.

Analyze receipts with the Form Recognizer service

This exercise is fairly basic, but I wonder if we could use it to speed up the processing of expense claims? It says it identifies taxes paid on US-based receipts - could it still identify GST on NZ-based receipts? 
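
It would be a cheap experiment. A minimal sketch with the azure-ai-formrecognizer package (placeholder endpoint, key, and receipt URL) shows how little code an expense-claim prototype would need - pointing it at an NZ receipt and checking the Tax field is exactly the kind of thing worth trying:

```python
from azure.ai.formrecognizer import FormRecognizerClient
from azure.core.credentials import AzureKeyCredential

client = FormRecognizerClient("https://<your-resource>.cognitiveservices.azure.com/",
                              AzureKeyCredential("<your-key>"))

# Prebuilt receipt model - trained on US receipts, so tax fields may not map to GST.
poller = client.begin_recognize_receipts_from_url("https://<your-storage>/receipt.jpg")
receipt = poller.result()[0]

for name in ("MerchantName", "TransactionDate", "Tax", "Total"):
    field = receipt.fields.get(name)
    if field:
        print(name, field.value, f"(confidence {field.confidence:.2f})")
```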

Azure Machine Learning


I'm continuing on #TheMVPChallenge and working through some MS Learn modules. This week I'm working my way through the unit: Create no-code predictive models with Azure Machine Learning

I'll update this post throughout the week as I progress through the modules.

Data scientists spend a lot of time working with regression models and manipulating large datasets to predict future values. In order to understand this hands on lab, it's helpful to have a basic understanding of data science and machine learning. 

A few terms you might want to get familiar with: 

  • Machine Learning
  • Label
  • Feature
  • Train
  • Regression

You may have some down time while waiting for the magic to happen in Azure, so use that to review the Introductions for each module and study up on some of the vocab above.

Tips for Learning

  • Set aside enough time (1-2 hours per module)
  • Save time for Clean up

These modules are hands-on. You get to work directly in Azure to train and test an Automated Machine Learning model. Set aside at least 1-2 hours to work through each module - more if you're like me and like to explore and try other things. You'll need to complete each module in one sitting so that you can clean up your Azure resource groups and save on cost when you're done.

Datasets

This unit uses a few web sample datasets that are pretty cool, including one of penguins in Antarctica, a continent I'd love to visit someday. 

Use automated machine learning in Azure Machine Learning

Why reinvent the wheel? In this module you get to explore how Azure Machine Learning can automatically provide you with insights into your data. No need for you to pick and choose data, spend time cleaning or studying, or worrying about which algorithms to use. Let Azure figure it all out for you!

If you've used the AI visuals in Power BI, you may love this. I love exploring datasets with the Power BI Key Influencers visual, but in Power BI I need to select the fields and columns that I think are relevant. In Azure Machine Learning, we can see which columns have the greatest influence over a value. All you need to do is provide a dataset and identify the single column whose values you'd like to analyze. 

As you might expect, working day and average temp have a high correlation to bicycle rentals, whereas humidity and season have a much lower correlation and influence.

screenshot of Azure Machine Learning studio Global importance graph


Create a Regression Model with Azure Machine Learning designer

Step 1: Clean and Transform Data

Tips for cleaning data for regression models

  • Normalize all numeric attributes (speed, volume, temperature) - see the sketch after this list 
  • Use text for categorical attributes that don't make sense to normalize (number of doors: two, four; not 2, 4)
  • Remove rows with missing / null values
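
As a concrete illustration of the first tip, here's a short scikit-learn sketch with made-up column names - min-max normalization rescales each numeric attribute to the 0-1 range so no single attribute dominates the model:

```python
import pandas as pd
from sklearn.preprocessing import MinMaxScaler

# Made-up sample data for illustration.
df = pd.DataFrame({
    "speed": [45, 80, 120],
    "volume": [1.2, 2.0, 3.5],
    "temperature": [15, 22, 30],
    "num_of_doors": ["two", "four", "two"],  # categorical: keep as text
})

# Rescale only the numeric attributes to the 0-1 range.
numeric_cols = ["speed", "volume", "temperature"]
df[numeric_cols] = MinMaxScaler().fit_transform(df[numeric_cols])
print(df)
```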

Overview

Going through this module gave me a much greater appreciation of Power BI and all that it does to help us clean and transform our data. It's all about what you know, so I find Power Query very intuitive. Logging into the Azure Machine Learning studio is a step I'm not familiar with, but much of the Power Query functionality appears to be there. The trick is understanding the terminology.

For example, here are a few Azure Machine Learning data transformations and what they are called in Power Query: 

  • Azure Machine Learning | Power Query
  • Add Columns | Home > Merge Queries
  • Add Rows | Home > Append Queries
  • Apply Math Operation | Transform > Standard
  • Clean Missing Data | Home > Remove Rows 

Step 2: Create Training Pipeline 

What is a training pipeline? 

In order to evaluate the effectiveness of our model, we need to split the actual data into two groups: Group A, which we'll use to 'train' and create the model, and Group B, which we'll use to score and evaluate the model.

Step 2A: Split actual data into two groups

  • Use about a 70/30 random split for training/scoring (sketched in code below) 
  • Use a date-based split when evaluating forecasting 
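
In the designer this is the Split Data module; as a sketch of the code equivalent, scikit-learn's train_test_split does the same job (the dataset below is hypothetical):

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Hypothetical prepared dataset with a 'rentals' label column.
df = pd.DataFrame({"temp": range(100), "rentals": range(100)})

# Group A (train) / Group B (score) - roughly a 70/30 random split.
train, score = train_test_split(df, test_size=0.3, random_state=42)
print(len(train), len(score))  # 70 30
```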

Step 2B: Train Group A

Choose from the 19 algorithms available in the Azure Machine Learning designer to train your model.

Step 2C: Score 

Bring back Group B and 'score' your model. Basically, this is the process of calculating expected results for the Group B subset of data using the trained model developed from Group A. At the end of this process, you'll have two columns for your 'label': the Actual and the Score (aka the expected value).

Overview

The drag-and-drop user interface has already become intuitive and I'm finding it really easy to use. What a great visual for seeing what's actually happening in the model - much more transparent than Automated Machine Learning, and I love that you have more control over the actual design of the model.

Step 3: Evaluate Training Pipeline

There are a few key metrics that Azure Machine Learning Evaluate Model can provide to give you an indication of how accurate your trained model is (sketched in code after this list): 

  • Mean Absolute Error (MAE)
  • Root Mean Squared Error (RMSE)
  • Relative Squared Error (RSE)
  • Relative Absolute Error (RAE)
  • Coefficient of Determination (R2)
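
If you're curious what Evaluate Model is computing, here's a plain numpy sketch of the five metrics (the inputs are made up):

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """The five metrics Evaluate Model reports for a regression model."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    err = y_true - y_pred         # residuals: Actual minus Score
    dev = y_true - y_true.mean()  # deviation of actuals from their mean
    rse = (err ** 2).sum() / (dev ** 2).sum()
    return {
        "MAE": np.abs(err).mean(),
        "RMSE": np.sqrt((err ** 2).mean()),
        "RSE": rse,
        "RAE": np.abs(err).sum() / np.abs(dev).sum(),
        "R2": 1 - rse,  # coefficient of determination
    }

print(regression_metrics([10, 20, 30, 40], [12, 18, 33, 41]))
```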

Again, the drag-and-drop interface allows us to evaluate the model and get these results automatically!

Take the time to explore! How does the Decision Forest Regression compare to the Linear Regression evaluation?

Step 4: Create Inference Pipeline

Now that you've built and trained the model, we're ready to use it to predict label values on new data. 

Select the best model from your evaluation step above, and create an inference pipeline in Azure Machine Learning designer. You will need to edit the inputs so that the new data does NOT contain the label values, then run the pipeline. 

Step 5: Deploy

This is all pretty cool, but let's be honest: the people who need access to this information are not going to log in to Azure Machine Learning designer to run the inference pipeline we just created, so the final step is deploying and publishing it as an external service. You'll notice that Web Service Input and Output were part of the inference pipeline we created; now we'll get to test them in a web service environment. 

Again, explore the data. What happens if you change a 'two' door alfa-romero to 'four' doors? How does the 'predicted price' change?
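
Once deployed, any application can call the endpoint over REST. Here's a hedged sketch of what testing it from Python might look like - the endpoint URL, key, and input columns are placeholders that depend entirely on your own deployment:

```python
import json
import urllib.request

# Placeholders from your deployed real-time endpoint.
endpoint = "http://<your-service>.azurecontainer.io/score"
api_key = "<your-service-key>"

# Designer web services typically wrap rows in an 'Inputs' payload;
# the exact column names come from your own dataset.
payload = {
    "Inputs": {
        "WebServiceInput0": [
            {"make": "alfa-romero", "num-of-doors": "two"}  # try changing to "four"
        ]
    },
    "GlobalParameters": {},
}

request = urllib.request.Request(
    endpoint,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer " + api_key},
)
print(urllib.request.urlopen(request).read().decode("utf-8"))
```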

Create a classification model with Azure Machine Learning designer

By now you should be getting pretty familiar with the Azure Machine Learning designer interface, so you don't need to set aside quite as much time for this module. That being said, take advantage of the time to explore and try new things!

This module follows the same steps as the regression model, but for a classification label rather than a numeric label. In this case, we're classifying data as 'Yes' or 'No', 1 or 0. Is the patient diabetic?

Evaluate

Confusion Matrix

Because we're working with classification, the Evaluation will result in a 'Confusion Matrix', which includes key metrics: 

  • Accuracy
  • Precision
  • Recall
  • F1 Score
  • AUC

Threshold

The default threshold is 0.5, but if you want to err on the side of caution and predict that more people have diabetes, you might adjust this to something lower, like 0.3. A threshold of 0.3 will give you more false positives, but fewer false negatives. What threshold gives the greatest accuracy?
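
To make the trade-off concrete, here's a small numpy sketch (with made-up probabilities) showing how lowering the threshold trades false negatives for false positives:

```python
import numpy as np

def confusion_metrics(y_true, probs, threshold=0.5):
    """Apply a threshold to predicted probabilities, then compute the key metrics."""
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(probs) >= threshold).astype(int)
    tp = int(((y_pred == 1) & (y_true == 1)).sum())  # true positives
    fp = int(((y_pred == 1) & (y_true == 0)).sum())  # false positives
    fn = int(((y_pred == 0) & (y_true == 1)).sum())  # false negatives
    tn = int(((y_pred == 0) & (y_true == 0)).sum())  # true negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": precision,
        "recall": recall,
        "F1": 2 * precision * recall / (precision + recall) if precision + recall else 0.0,
    }

y_true = [1, 0, 1, 1, 0, 0, 1, 0]
probs = [0.9, 0.2, 0.45, 0.6, 0.35, 0.1, 0.4, 0.55]
print(confusion_metrics(y_true, probs, threshold=0.5))
print(confusion_metrics(y_true, probs, threshold=0.3))  # recall rises, precision falls
```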

Create a clustering model with Azure Machine Learning designer

Again, the steps here are similar to those in the Regression model, except instead of 'scoring' the model, we need to 'assign data to clusters'. I LOVE hands-on learning. I'm already getting to the point where I can predict the steps before reading the instructions.

*DISCLAIMER: This often results in me breaking something, doing the wrong step, or losing my place, but it's the way I learn. Figure out the style that works for you and let me know how you go. 

Clustering is one of my favorites - it's such a cool process that happens in the background. Take the time while your model is running to read up on the maths behind the clustering.

Try testing this with your own sample data. It's the best way to show that you've learned something and reinforce the skills. An easy one to try is snow vs rain - give Azure Machine Learning designer a dataset of weather (use your favorite weather app) that includes temperature, wind chill, humidity and precipitation amount, but remove the precipitation type data (snow/rain) and see how well the clustering model works. 
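
If I remember the module correctly, the algorithm behind it is K-Means. For a peek at the maths while your model runs, here's a scikit-learn sketch of the same idea on made-up weather readings (the snow vs rain experiment from above):

```python
import numpy as np
from sklearn.cluster import KMeans

# Made-up weather observations: temperature (C), humidity (%), precipitation (mm).
weather = np.array([
    [-2, 85, 5], [-1, 90, 8], [0, 80, 3],     # snowy-looking days
    [12, 70, 10], [15, 65, 12], [11, 75, 7],  # rainy-looking days
])

# K-Means repeatedly assigns points to the nearest centroid, then moves
# each centroid to the mean of its assigned points until stable.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(weather)
print(model.labels_)           # which cluster each day was assigned to
print(model.cluster_centers_)  # the learned cluster centroids
```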

AI on Azure


Wow! Technology has come such a long way since I was 10. I've just had a quick test run of the Seeing AI app by Microsoft. It's currently only available on iPhone or iPad, but 10 year old me is totally geeking out.

We were asked to invent something new by our 5th grade teacher, and I invented a scanning device that you could install over your trash or recycles to help you generate a shopping list as you consume items from your fridge or pantry - just scan them on the way into the trash and they'll be added to your shopping list. There are so many apps out there now that are capable of this invention and so much more. 

Seeing AI

The Seeing AI app can scan product barcodes, read documents (both printed and handwritten), identify objects in your surroundings, and identify people. In my initial testing, it works really well, and had a surprisingly high percentage of New Zealand food products in its scanning database. Unfortunately, it cannot yet ID New Zealand currency - only Brazilian Real, British Pounds, Canadian Dollars, Euros, Indian Rupees, Japanese Yen, Turkish Lira and US Dollars. It also thought I was 2 years older than I am - how rude! I guess I'll forgive it just this once, but only because 10 year old me is so impressed at how close it got. 

Responsible AI

As artificial intelligence becomes more prevalent and advances, we need to ensure it does so responsibly. This can be something as simple as using the word 'probably' in front of each AI prediction. The Seeing AI app prefaces each photo analysis with 'probably': 

  • Probably a large glass ceiling
Grand Palais ceiling, Paris France

MS Learn Modules

Follow along with my learnings by completing the Get started with AI on Azure MS Learn module. It has a great interactive activity on Responsible AI and also gives some really cool examples of AI (such as the Seeing AI app) that you can check out.

Power BI Goals


 If you've been joining the Microsoft Business Applications Summit, you'll have heard the new announcements about:

  • Power BI Automated Insights
  • Power Automate Integration with Power Automate Visual
  • Power BI Streaming Dataflows
  • Power BI Spatial Anchoring
  • Power BI Goals

I'm going to spend some time focusing on Power BI Goals as they're in public preview and available now! 

Power BI Goals

These will be available to Premium capacity, including Premium Per User, and can dramatically improve the way you track your KPIs. 

Power BI Goals are:

  • Data driven
  • Built for teams
  • AI powered
  • Automated

I'll come back to update this post with more feedback once I've had some time to play with and use the new features, but in the meantime, read the Power BI Goals blog post.

Power BI Streaming Dataflows

Watch this space - these are on the Roadmap and are going to make it much easier to get that real-time data you need. 

Power BI Spatial Anchoring 

Hololens Moving to Mobile Phone

The Hololens app for Power BI has been around for a while, and it's not going anywhere. I don't have a Hololens so can't test it first hand, but the demos are COOL! 

Well, guess what?! Soon, you won't need to have a Hololens to get this functionality. All you'll need is a mobile phone. This will be available later this year (2021).

Other Announcements

Don't forget about these cool features you should check out too:

  • Power BI Desktop Paginated Reports Visual
  • SharePoint Lists Quick Create for Power BI
  • Power BI and Teams Integration

Accessibility Fundamentals - Microsoft Accessibility Features and Tools


Color 

So I've been having some great conversations on LinkedIn about accessibility and we've spent some time focusing on colorblind safe themes. Color is a big deal in data visualizations and should always be front of mind when you're building any report. Consider your audience carefully and what impact color will have, as well as how easy it is to distinguish between the colors.

Resources

My new favorite resources that I'm going to start using for color:

Audio

If you've ever tested out live captions or watched a movie in a foreign language, you've probably come across some nonsensical captions at some stage. The question here is: do they add more value than they detract? I'm really keen to learn more about AI and Azure as I progress through these MS Learn modules - I'm sure the artificial intelligence that generates live captions is learning every second of every day and constantly improving, but how? I've trained OneNote to recognize my sloppy handwriting, and I've softened my US accent and adapted to using NZ vocabulary in my lessons to make them accessible to my NZ audience. You may notice I swap between using 'color' and 'colour' depending on my audience; I do the same when speaking. 

I have historically turned off live captions as I found their inaccuracy to be distracting and frustrating at times, but am keen to see what is happening in the back end and how we can help improve this feature. 

Resources

One key resource that is really handy: 

  • PowerPoint Presenter Coach - This is an awesome tool that can give you valuable feedback about your speaking pace, word use, pitch, and more. A great way to rehearse for those important meetings without calling in a favor from the family! As an added bonus - it's judgement free. You won't be graded on your presentation, just given tips and advice for what to keep doing and how to improve.

Screen Readers

Screen readers are a fantastic accessibility tool, but they sometimes need a bit of help to ensure they get all the important info in the right order.

Windows 10 has a built-in screen reader, "Narrator", that you can turn on. I've never used a screen reader before, so I'm going to start testing it out on some of the content I write. 

Here's a few things I've learned to help the screen reader do a good job:

Alt Text - A picture is worth 150 characters

Okay, now this is a big one. I use a LOT of images, screenshots, GIFs and visuals in my blogs, courseware and training. I have heard of Alt Text, and seen it used in the Microsoft Docs (it comes through when you paste values), but haven't given it much more consideration than that. 

It turns out that Alt Text is surprisingly easy to add to any image across the Microsoft Office suite, and in many other web and computer applications. 

PAUSE NOW - and take 60 seconds to Google how to add Alt Text to the files you work with most, whether it's email, Word, blog or news posts, or even Teams messages. 

Screenshot of Microsoft Word Review tab, Check Accessibility button. Click Alt Text.

Tips for good Alt Text:

  • Identify image type - Images can be DECORATIVE or ILLUSTRATIVE. If your image serves no purpose other than decor, and adds no meaning to your post or file, then no alt text is needed. Not every application gives you the option, but the Accessibility Checker in Microsoft Office will let you tick 'Mark as Decorative' so that it stops reminding you to add Alt Text to that image.
  • Keep it short - Use fewer than 150 characters. That's heaps! To give you an idea, "Screenshot of Microsoft Word Review tab, Check Accessibility button. Click Alt Text." is 84 characters, so you'd still have plenty of room to spare.
  • Don't repeat yourself - Alt Text should add meaning to your content. If you've already stated it somewhere else (like a caption), then you don't need to repeat it in Alt Text. It's annoying having to hear/read/see the same thing twice!

Other reasons to use Alt Text

Improving accessibility should be reason enough - it's truly that important. But if you're not convinced, Alt Text helps with search engine optimization, slow internet connections, and when previewing emails without downloading the images. Below is an example of Alt Text in an email I received recently.

There are four images that haven't loaded, so the Alt Text is displayed instead:

  • A person with curly hair
  • A person wearing headphones and looking at a computer
  • A person working on a computer
  • Colorful graphic design element

 

email with alt text in place of images


What do you think of the Alt Text used in this email? How would you rate them? Can you imagine what the picture looks like?

Now let's look at the same email with the actual images:


Is that what you imagined? How would you improve the Alt Text?

Numbered Lists

As a trainer, I spend a lot of time updating course manuals and am constantly battling with poorly formatted lists. This is usually caused by a history of too many people with too little knowledge editing the document. Word is great at creating numbered lists and keeping them within a hierarchy. You can even use this within Word styles and dictate which level each style/heading applies to. 

I had never thought about how important getting this right can be to a screen reader, so now I'm even more motivated to help people understand how to get this right.

Resources

  • Define Your Own Lists and Turn off Auto Numbering - I recommend taking the time to learn how to generate your own multi-level list styles and take control of your document; don't let Word do it for you. Pay particular attention to the last two headings in this How-to article to see how.

Send to Back/Front determines Reading Order

Of course! This makes total sense, but I hadn't thought about how the order of items in the selection pane might matter even if they aren't overlapping. Generally we compose our PowerPoint slides and docs by creating the most important and first piece of info, then adding details, so your selection pane order will likely be close to what you want the screen reader to follow, but not always. It takes 2 seconds to check this, and the Selection Pane is super handy for other things too, so make sure you know where to find it:

From PowerPoint Home tab, click Select > Selection Pane to turn this on.

Accessibility Fundamentals - Introduction to disability and accessibility


According to the World Health Organization, more than one billion people worldwide live with a disability. To put that in terms I can understand, I decided to relate it to my company. We have approximately 16 full-time employees. If our company matched the worldwide percentage of people living with disability, we would have 2 full-time employees with disabilities. 

Graph of 2 in 16 people

This got me thinking - who are they? Have I been aware of the needs and abilities of all my coworkers? Am I doing everything I can to ensure all people feel part of the group and not excluded?

Check out the Introduction to disability and accessibility on Microsoft Learn to learn with me. 

My Key Takeaways

Be Respectful

No matter what you think you know about a person or their disability, we are all unique. Listen to the people you work with. They know themselves, and their needs and limitations, better than anyone else.

Own Your Mistakes

This one goes for life in general, and I've probably learned it a thousand times and will learn it a thousand times more. No one is perfect, myself included. Admit when you're wrong, examine why it happened, and do better next time. 

Be Approachable

Not all people with disabilities will be open about disclosing their disabilities. Listen to the people you work with, and take non-verbal cues too. 

Over 1 billion people in this world are living with some form of disability. Odds are you've worked with at least one of them, and every person matters.

#TheMVPChallenge Begins


Microsoft have put a challenge to the MVP community to dedicate the month of May to learning something new, and I've decided to accept. I wouldn't be a very good teacher if I wasn't constantly learning myself, so this is right up my alley.

MVP Global Cloud Skills Challenge: Get ready for The Challenge

We have three learning streams to choose from: 

  • Azure Data & AI Challenge
  • Dynamics 365 / Power Platform Challenge
  • Microsoft 365 Challenge

While I know a bit in each of these areas, there's always something new to learn and I'm excited to get stuck into it.

I'll be tracking my journey and posting my learnings, thoughts and questions here with the label #TheMVPChallenge. I'll also be posting regularly on LinkedIn and a bit of Twitter, so follow me to get updates. 

All three challenges have a module on Accessibility Fundamentals, so I'm going to start there. 
