Damage assessment over an arbitrary area with computer vision

This video tutorial shows how to use the Vexcel API and Azure’s Custom Vision service to find damage after a fire or wind catastrophe across an arbitrary area. You can download the app from the previous post, where we looked at how to analyze specific properties coming from a CSV file. The big difference here is that you don’t need a list of properties to analyze. Instead, you set the corners of a bounding box, and all of the properties within the box are analyzed. Microsoft’s building footprint files are used to find the candidate properties to test. In the image above, the white pins represent ALL of the building structures in the tested region. Each was run through the damage model, with the structures determined to be destroyed shown as red pins.
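
To make the bounding-box idea concrete, here is a minimal C# sketch of the candidate-selection step, assuming the building footprint centroids are already loaded into memory. The Building record and BuildingsInBox method are illustrative names, not code from the app itself.

using System.Collections.Generic;

// Hypothetical record holding a building footprint centroid.
public record Building(double Latitude, double Longitude);

public static class FootprintFilter
{
    // Return only the buildings whose centroid falls inside the bounding box
    // defined by its north/south/east/west edges.
    public static List<Building> BuildingsInBox(
        IEnumerable<Building> allBuildings,
        double north, double south, double east, double west)
    {
        var candidates = new List<Building>();
        foreach (var b in allBuildings)
        {
            if (b.Latitude <= north && b.Latitude >= south &&
                b.Longitude <= east && b.Longitude >= west)
            {
                candidates.Add(b);
            }
        }
        return candidates;
    }
}

Each candidate returned this way would then be run through the same image request and damage model sequence described in the posts below.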

Try it out and send your feedback along. You will need a Vexcel API account in order to run the app.

Using AI to assess damage after a disaster

Over the last weeks, I shared a three-part tutorial on training a model to identify homes damaged by wildfire, hurricane or tornado using Azure Cognitive Services. In the final part, we looked at how to use the Cognitive Services API to automate the analysis of properties. Building on that, I’ve used the API to create an app that reads a CSV file of coordinates, requests the imagery from the Vexcel API, and runs each property through the Cognitive Services model. The results are written out to a CSV file that you can use for further analysis, as well as a KML file to load into a GIS. You will need your Vexcel Platform account credentials to run the app.

You can download a zip file containing the app (it runs on any flavor of Windows) and a couple of sample CSV files to run through it. Unzip everything to a folder and launch the .exe to get started.

Choose Your Input File
Use the … button to browse and select your input CSV. The first column should contain latitude and the second column, longitude. Additional fields can follow. A couple of sample files are included in the zip that you can use as a template. (I’ll cover the building footprints option in the next article.)
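
For reference, here is a rough C# sketch of reading that input format: latitude in the first column, longitude in the second, and any extra fields carried along. The file name and parsing details are illustrative; the app may handle its input differently.

using System.Collections.Generic;
using System.Globalization;
using System.IO;

var rows = new List<(double Lat, double Lon, string[] Extra)>();

foreach (string line in File.ReadLines("input.csv"))
{
    string[] parts = line.Split(',');
    if (parts.Length < 2)
        continue; // skip empty or malformed lines

    // Skip a header row (or any row where the first field isn't numeric)
    if (!double.TryParse(parts[0], NumberStyles.Float, CultureInfo.InvariantCulture, out double lat))
        continue;

    double lon = double.Parse(parts[1], CultureInfo.InvariantCulture);
    rows.Add((lat, lon, parts[2..]));
}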

Set a name for your output file
A CSV and a KML file will be created in the same folder as your input file. Specify a name for the root of the file; the appropriate extensions will be appended.

Choose the appropriate model and processing threads
There are two choices for model: Wildfire and Wind Damage. Choose the one appropriate for the type of damage you are analyzing. You can choose from 1 to 8 processing threads.

Finally, enter your Vexcel Platform credentials and hit the Go! button to get things started.

While the app is running, have a look in the output folder. You’ll see some temporary files being created there of the form OutputFilename_thread#. To prevent I/O conflicts, each thread writes to its own files; when the run is finished, these files are appended together to form your final output KML and CSV files. You can then delete the temp files if you wish.
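
The merge step is simple enough to sketch in C#. The file naming below just mirrors the description above; the app’s internals may differ.

using System.IO;

// Concatenate the per-thread temp files into the final output file.
static void MergeThreadFiles(string folder, string outputRoot, int threadCount)
{
    using var writer = new StreamWriter(Path.Combine(folder, outputRoot + ".csv"));
    for (int t = 0; t < threadCount; t++)
    {
        string tempFile = Path.Combine(folder, outputRoot + "_thread" + t + ".csv");
        if (!File.Exists(tempFile))
            continue;

        foreach (string line in File.ReadLines(tempFile))
            writer.WriteLine(line);
    }
}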

After processing is complete, open the .CSV file in Excel or a text editor. You’ll see three fields have been appended to your input file, representing the prediction from the model, the date of the image used in the analysis, and a link to the image itself. In the screenshot below you can see these in columns D, E, and F, respectively. If you are running the wind damage model, the predictions range from Damage0 (no visible damage) to Damage4 (complete loss). The score shown is the probability that the prediction is correct. For Wildfire, there are just two possible tags: Fire0 (undamaged) and fire2 (burned).

Let’s look at row 4, which scored Damage4 with very high probability. The ImageURL in column F returns this image:

You can load the CSV or KML file into just about any mapping application, since most support CSV or KML files. Let’s use the Vexcel viewer to visualize a file of fire-damaged properties in Otis, Oregon. After the run was complete, I loaded the CSV file into Excel, sorted by the AI_Probability column, then cut all of the Damage2 records into one file and the Damage0 records into another. This allows me to load them as two separate layers in the Vexcel viewer.
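
If you prefer, the same split can be done in code rather than in Excel. Here is a small sketch that assumes the prediction tag lands in column D (index 3) of the output CSV, as in the screenshot above; adjust the file names and tag strings to match your own run.

using System.IO;
using System.Linq;

// Keep only rows that actually have a prediction column.
var rows = File.ReadAllLines("otis_results.csv")
               .Where(l => l.Split(',').Length > 3);

// Partition rows by the tag written to column D; change the tag names
// to whatever your model actually writes out.
var damaged   = rows.Where(l => l.Split(',')[3].Contains("Damage2"));
var undamaged = rows.Where(l => l.Split(',')[3].Contains("Damage0"));

File.WriteAllLines("otis_damaged.csv", damaged);
File.WriteAllLines("otis_undamaged.csv", undamaged);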

In this first image you can see all of the properties in the region that were run through the app.

And in these next two, you can see the properties tagged Damage2 in red.

And here, zoomed in to the damaged cluster in the northeast:

KNOWN APP ISSUES

After appending the individual KML files, there is a stray character on the last line of the output file that prevents it from loading in some client apps. If you encounter this, load the file in a text editor and delete the last line with the odd control character. I’ll fix this as soon as I figure out what’s causing it 🙂
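
If editing by hand gets tedious, a workaround in code could be to keep only the lines up to and including the closing </kml> tag. This addresses the symptom described above, not the underlying cause, so treat it as a stopgap.

using System;
using System.IO;
using System.Linq;

string[] lines = File.ReadAllLines("output.kml");

// Drop anything after the final </kml> line, including the stray control character.
int lastGood = Array.FindLastIndex(lines, l => l.Contains("</kml>"));
if (lastGood >= 0)
    File.WriteAllLines("output_clean.kml", lines.Take(lastGood + 1));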

Computer Vision and aerial imagery Part III

Finally! In the first two parts of this tutorial series we focused on training models on Azure’s Custom Vision platform to perform recognition on Vexcel imagery. Now for the best part: we’ll use the REST API exposed by Custom Vision to handle the repetitive task of running multiple properties through the model. This opens up use cases like analyzing a batch of property records after a tornado or wildfire using Vexcel graysky imagery, or checking to see which homes have a swimming pool using Vexcel blue sky imagery.

In this tutorial we’ll use C# to call the Vexcel API and the Custom Vision API, but you should be able to adapt this to any language or environment of your choosing. The application will make a call to the Vexcel platform to get an auth token, then make subsequent calls to generate an ortho image of a given property, and finally pass that image to our model on Custom Vision for recognition. Once you have this working, it’s easy to take it to the next step: open a CSV file containing a list of locations and perform these steps for each record.

Step 1: Publish your model

In the previous tutorials we trained a model to recognize objects or damage in aerial imagery. We can now make programmatic calls to the model using the Custom Vision API, but first we need to publish the trained iteration, making it accessible by the API.

This is easy to do. In the Custom Vision dashboard, go to the Performance tab, select your iteration, and hit the ‘Publish’ button as highlighted here.

Once the publish is complete, the ‘Prediction URL’ link will become active. Click it to view the parameters for your model that you will need when making calls with the API. The ‘Iteration ID’ is shown on the main dashboard page. The prediction key is visible in the dialog that pops up, as well as the REST URL which will contain the project ID. Take note of all of these values. We’ll use them in a moment.

Step 2: Generate an Authentication token with the Vexcel API

Each API call to the Vexcel platform requires an auth token to be passed. When your app starts up, you can call the login service to generate one and use it for all subsequent calls. An auth token is good for up to 12 hours.

string VexcelToken = "";

try
{
    string userid = "yourUserID";
    string pw = "yourPW";
    string authURL = "https://api.gic.org/auth/Login?username=" + userid + "&password=" + pw;

    string jsonResponse = FetchWebURL(authURL);
    dynamic dynObj = JsonConvert.DeserializeObject(jsonResponse);
    VexcelToken = dynObj.token;
}
catch (Exception e)
{
    //handle login failures (bad credentials, network errors, etc.) here
}

The FetchWebURL() method is used to make an HTTP request and return the response as a string. Here is a simple implementation in C#.

string FetchWebURL(string url)
{
    string html = "";
    try
    {
        WebRequest request = WebRequest.Create(url);
        WebResponse response = request.GetResponse();
        Stream data = response.GetResponseStream();

        using (StreamReader sr = new StreamReader(data))
        {
            html = sr.ReadToEnd();
        }
    }
    catch (Exception ex)
    {
        //handle the error here
    }
    return html;
}

Step 3: Generate a URL to request an image of a property

There are generally two steps to requesting an image from the Vexcel library: first query the catalog to see what is available, then request the appropriate image. Let’s do exactly that for this coordinate, from a property damaged in the recent Oregon wildfires: 45.014910, -123.93089

We’ll start with a call to FindImage(). This service will return a JSON response telling us about the best image that matches our query parameters. Those parameters include the coordinate, a list of layers to query against, and the orientation of the image we want returned. For the layer list we are passing in Vexcel’s two gray sky layers; we want the best (most recent) image across any catastrophe response layer. We’ll set orientation to Nadir, as we want a traditional vertical image, but you can also query for Vexcel’s oblique imagery with this parameter.

string layers = "graysky,graysky-g";
double latitude = 45.014910;
double longitude = -123.93089;
string metadataURL = "https://api.gic.org/metadata/FindImage?layer=" + layers +
    "&format=json&EPSG=4326&orientation=" + "NADIR" +
    "&xcoordinate=" + longitude + "&ycoordinate=" + latitude + "&AuthToken=" + vexcelAuthToken;

string jsonString = FetchWebURL(metadataURL);

In the JSON response, we’ll have all of the information we need to request a snippet of imagery with the ExtractImages() method. This workhorse gives you access to all of the pixels that make up the Vexcel library, one snippet at a time, carved up to your exact specification. As you can see in the code below, the first bit of metadata we grab is the date the image was taken. This is one of the most important pieces of metadata regardless of what kind of application you are building; you’ll always want to know the date of the image being used. Then, most importantly, we form a URL to the ExtractImages endpoint with all of the parameters needed to get the image we need, as provided by the FindImage() call above.

string imageDate = "";
string imageURL = "";

try
{
    dynamic dynObj = JsonConvert.DeserializeObject(jsonString);

    // Always capture the image date; you'll want it in your output
    imageDate = dynObj.capture_date;

    imageURL = "https://api.gic.org/images/ExtractImages/" + dynObj.layername +
            "?mode=one&orientation=NADIR&logo=yes&imagename=" + dynObj.image_id +
            "&EPSG=4326&xcoordinate=" + longitude + "&ycoordinate=" + latitude +
            "&nadirRotatedNorth=yes" +
            "&zoom=0&width=" + 800 + "&height=" + 800 + "&AuthToken=" + vexcelAuthToken;
}
catch (Exception ex)
{
    //handle a missing or malformed response here
}

The imageURL will be similar to this:

and will return an image that looks like this:

Step 4: Pass the image to Custom Vision for analysis

It’s finally time to pass the image snippet to Custom Vision for recognition. You’ll need the details from Step 1 above, where you published your model; you can return to the Custom Vision dashboard to get them. Here is the C# to make the API call and get back a JSON response indicating which tags were found in the image.

string grayPredictionURL = "https://coppercustomvision.cognitiveservices.azure.com/customvision/v1.1/Prediction/8209f23b-e9f5-4a6e-8664-910425d5aa55/url?iterationId=74a18109-4e79-4ea8-b6b8-caa772739335";
string predictionKeyHeader = "Your prediction key"; //from the custom vision dialog box

string jsonString = "";

try
{
    // The prediction endpoint expects a JSON body containing the image URL
    string requestBody = "{ \"Url\": \"" + imageURL + "\" }";

    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(grayPredictionURL);
    request.Headers.Add("Prediction-Key", predictionKeyHeader);
    request.Host = "coppercustomvision.cognitiveservices.azure.com";
    request.Method = "POST";
    request.ContentType = "application/json";

    var sendData = Encoding.ASCII.GetBytes(requestBody);
    request.ContentLength = sendData.Length;

    using (var newStream = request.GetRequestStream())
    {
        newStream.Write(sendData, 0, sendData.Length);
    }

    WebResponse response = request.GetResponse();
    Stream data = response.GetResponseStream();
    using (StreamReader sr = new StreamReader(data))
    {
        jsonString = sr.ReadToEnd();
    }
}
catch (Exception ex)
{
    //handle request errors here
}

The last bit of code parses the returned JSON to find the tags discovered in the image. Keep in mind that there can be multiple tags, each with its own probability score. We’ll keep it simple and loop through each tag looking for the highest probability, but in your implementation you could be more precise than this, perhaps by considering the position of each discovered tag relative to the center of the image.

string roofDamageLevel = "";

try
{
    dynamic dynObj = JsonConvert.DeserializeObject(jsonString);
    int predictionCount = dynObj.Predictions.Count;

    double highestDamageProbability = 0;
    string highestDamageTag = "";

    for (int indx = 0; indx < predictionCount; indx++)
    {
        string tagName = dynObj.Predictions[indx].Tag;

        string tmp = dynObj.Predictions[indx].Probability;
        double thisProb = double.Parse(tmp);
        thisProb = Math.Round(thisProb, 3);

        if (thisProb > highestDamageProbability)
        {
            highestDamageProbability = thisProb;
            highestDamageTag = tagName;
        }
    }
    roofDamageLevel = highestDamageTag + " (score: " + highestDamageProbability + ")";
}
catch (Exception ex)
{
    //handle parsing errors here
}

That’s it! Now that you can programmatically analyze a single image, it’s a small step to put a loop together to step through a large table of properties. In a future tutorial here on the Groundtruth, we’ll do something similar, building on the code above to create a highly useful application.
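
As a rough sketch of that loop: GetVexcelToken() and AnalyzeProperty() below are hypothetical placeholders standing in for the Step 2 login call and the Steps 3 and 4 image/prediction sequence, not methods from the code above.

using System.IO;

// GetVexcelToken() and AnalyzeProperty() are placeholders for the earlier steps.
string token = GetVexcelToken();   // Step 2: authenticate once and reuse the token

using (var output = new StreamWriter("results.csv"))
{
    foreach (string line in File.ReadLines("properties.csv"))
    {
        string[] parts = line.Split(',');
        if (parts.Length < 2 ||
            !double.TryParse(parts[0], out double lat) ||
            !double.TryParse(parts[1], out double lon))
        {
            continue; // skip headers and malformed rows
        }

        // Steps 3 and 4: fetch the image snippet and classify it with the model.
        string prediction = AnalyzeProperty(lat, lon, token);
        output.WriteLine(lat + "," + lon + "," + prediction);
    }
}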

Computer Vision and aerial imagery Part II

In Part One of this three-part tutorial, we trained a model using Azure’s Custom Vision platform to identify solar panels on rooftops using Vexcel’s blue sky imagery. Here in Part Two we are going to work with disaster response imagery (aka graysky imagery) to identify buildings damaged in wildfires.

The main difference here is that we will train the model on two tags, one representing buildings that have not been burned, and a second tag representing buildings that have been destroyed in the fire. Other than that, the steps are identical to what we did in Part One.

In this image you can see a good example of both tags that we will be training.

Step 1: Create a Custom Vision Account

If you completed Part One of this tutorial, you’ve already set up your Custom Vision account and can proceed to Step 2 below. If you have not set up your Custom Vision account yet, go back to Part One and complete Step 1 (account setup), then return here.

Step 2: Collect a bunch of images for tagging

You’ll need 15 or more images of buildings NOT damaged by fire and 15 showing damaged buildings. It’s important that both sets of images are pulled from the graysky data.

Create a folder on your local PC to save these images to. There are several ways you can create your images. One easy way is to use the GIC web application: browse the library in areas where there is wildfire imagery, then take a screen grab and save it to your new folder. Here are some coordinates to search for that will take you to areas with good wildfire coverage:

42.272824, -122.813898 Medford/Phoenix Oregon fires
47.227105, -117.471557 Malden, Washington fires

Here are two good representative images similar to what you are looking for. First, an example of a destroyed building:

and an example of a structure still standing after the fire:

When you have 15 or more good examples of each in your folder, move on to the next step. It’s time to tag our images!

Step 3: Create a new project in your Custom Vision account

Return to your Custom Vision account at https://www.customvision.ai/

Click the ‘New project’ button and fill in the form like this:

If this is your first project you’ll need to hit the ‘create new’ link for the Resource section. Otherwise you can select an existing Resource. Hit the ‘Create project’ button to complete the creation process. You now have a new empty project. In the next step we’ll import the images we created previously and tag them.

Step 4: Upload and Tag these images.

Your new empty project should look something like this:

Hit the ‘Add Images’ button and import all of the images you saved earlier. You should see all of your untagged images in the interface like this:

Click on the first image of an undamaged property to begin the tagging process. Drag a box around the building structure. It’s OK to leave a little buffer, but try to be as tight as possible to the building footprint. Enter a new tag name like ‘firenotdamaged’ and hit Enter. If your image contains more than one structure, you can tag more than one per image.

Next, choose an image with a building destroyed by fire and tag it in the same manner, giving it a descriptive tag name like ‘firedamaged’.

Continue to click through all of your images and tag them. Some images might have a mix of burned and not burned structures. That’s OK, just tag them all appropriately.

Step 5: Train the model

If you click the ‘Tagged’ button as highlighted below, you will see all the images you have tagged. You can click on any of them to edit the tags if needed. When you are happy with your tags, it’s time to train your model!

Hit the ‘Train’ button and select ‘Quick Training’ as your training type, then hit ‘Train’ again to kick off the training process. This will take around 5 minutes to complete, depending on how many images you have tagged.

Step 6: Test the model

When training completes, your screen will look something like this:

It’s time to test your model! The easiest way to do so is with the ‘Quick Test’ button as highlighted above. Using one of the techniques from Step 2, go grab a couple more images and save them to the same folder. Grab a mix of buildings, some destroyed and some not.

Hit the ‘Quick Test’ link and browse to select one of your new images. Here I selected an image that contained two adjacent structures, one destroyed and one not. You can see that both were correctly identified, although the probability on the burned building is a little low. This can be improved by tagging more images and retraining the model.

In Part Three of this tutorial, we’ll use the API exposed by the Custom Vision platform to build an app that can iterate through a list of properties and score each one.

Computer Vision and aerial imagery

At Vexcel, we collect and process our aerial imagery with an eye towards much more than just traditional visual inspection scenarios. Our UltraCam line of camera systems is engineered from the ground up (punny!) with precise photogrammetry and computer vision applications in mind. Until recently, it took a room full of data scientists and lots of custom application development to tap into the power of AI analysis over imagery, but today off-the-shelf tools on Amazon’s AWS platform and Microsoft’s Azure have democratized this technology, making it available and easy to use by anyone.

In this multipart tutorial we’ll look at how easy it is to use aerial imagery in your own computer vision systems built on Azure’s Custom Vision platform. Custom Vision provides a web application for image tagging and training your model, as well as a simple REST API to integrate your model into any kind of application. And it couldn’t be easier! You’ll have your first computer vision system working end to end in just a few hours with Part One of this tutorial. Stick around for all three parts and this is what we’ll cover:

  • Part 1. Train a model that works with Vexcel’s Blue Sky ultra-high resolution imagery to detect solar panels on rooftops.
  • Part 2. Train a model utilizing Vexcel Gray Sky (disaster response) imagery to detect fire damage after wildfires. Or you could choose to focus on wind damage after a tornado or hurricane.
  • Part 3. Classify gray sky images using the Custom Vision REST API. We’ll build an app to iterate through a list of properties from a CSV file, classify each one based on the wind damage level, and save the results to a KML file for display in any mapping application.

Part 1: Solar Panel detection in Blue Sky Imagery

This tutorial will show you how to utilize GIC aerial imagery to detect objects like solar panels or swimming pools using AI. We’ll build a system to detect the presence of solar panels on a roof, but you can easily customize it to add other objects you would like to detect.

We’ll be using Microsoft’s Custom Vision service, which runs on the Azure platform. If you already have an Azure account, you can use it in this tutorial. If not, we’ll look at how to create one along the way. Keep in mind that although there is no charge to get started with Azure and Custom Vision, during Azure signup a credit card is required.

At the end of this section of the tutorial, you’ll have a trained model in the cloud that you can programmatically pass an image to and get back a JSON response indicating if a Solar panel was found in the image along with the probability score.

Step 1: Create a Custom Vision Account

To get started visit this URL: https://www.customvision.ai/

Hit the ‘Sign in’ button. You can then either sign in with an existing Microsoft account, or create a new one. If you sign in with a Microsoft account already connected to Azure, you won’t need to create an Azure account.

If there isn’t an Azure subscription attached to the Microsoft account you are using, you’ll see a dialog like the one shown here. Click ‘Sign up for Azure’ and follow the steps to create your Azure account.

When you are done, return to https://www.customvision.ai/

You should see something like the image shown here. Great! You’ve got all of the account creation housekeeping out of the way; now on to the fun stuff!

Step 2: Collect a bunch of images showing solar panels

In this step, we’ll collect images pulled from the Vexcel image library that feature homes with solar panels on the roof. These images will be used in step 3 to train the AI model.

TIP: You need a minimum of 15 images to get started. More images will yield better results, but you can start with 15 and add more later if you like. As you collect them, try to pull a sample from different geographic regions. Rooftops in Phoenix are very different than those in Boston; try to provide diversity in your source images to ensure that the resulting model will work well in different regions.

Create a folder on your local PC to save these images to. There are several ways you can create your images of rooftops with solar panels. One easy way is to use the GIC web application: browse the library looking for solar panels, then take a screen grab and save it to your new folder.

Here is an address to try this on: 11380 Florindo Rd, San Diego, CA 92127

Use a screen clipping tool to grab an image and save it to your folder. It should look something like this:

When you have 15 or more good examples of rooftops with solar panels in your folder, move on to the next step. It’s time to tag our images!

Step 3: Create a new project in your Custom Vision account

Return to your Custom Vision account at https://www.customvision.ai/

Click the ‘New project’ button and fill in the form like this:

For Resource, hit the ‘Create new’ link.

Your ‘New project’ form will ultimately look something like this:

Hit the ‘Create project’ button to complete the creation process. You now have a new empty project. In the next step we’ll import the images we created previously and tag them.

Step 4: Upload and Tag these images.

Your new empty project should look something like this:

Hit the ‘Add Images’ button and import all of the images you saved earlier. You should see all of your untagged images in the interface like this:

Click on the first one to begin the tagging process. Drag a box around the area of the image with solar panels, enter a new tag name of ‘solarpanel’, and hit Enter.

You’ve tagged your first solar panel! Continue tagging each of the remaining images, one at a time until you have no untagged images remaining.

Step 5: Train the model

If you click the ‘Tagged’ button as highlighted below, you will see all the images you have tagged. You can click on any of them to edit the tags if needed. When you are happy with your tags, it’s time to train your model!

Hit the ‘Train’ button and select ‘Quick Training’ as your training type, then hit ‘Train’ again to kick off the training process. This will take around 5 minutes to complete, depending on how many images you have tagged.

Step 6: Test the model

When training completes, your screen will look something like this:

It’s time to test your model! The easiest way to do so is with the ‘Quick Test’ button as highlighted above. Using one of the techniques from Step 2, go grab a couple more images and save them to the same folder. Grab some images of rooftops with solar panels of course, but also save a few that don’t have panels on the roof.

Hit the ‘Quick test’ link, and browse to select one of your new images.

As you can see here, the new model identified the correct location of the solar panels with 71% confidence. Adding more images and running the training again will improve this; you can go back to Step 4 and do this at any time.

Very cool! You just taught a machine how to identify solar panels on a roof. You can not only tag more solar panels, but also add new tags for other entities you want to recognize in aerial imagery: pools, trampolines, tennis courts…

In Part 2 of this tutorial series, we’ll use the same technique to operate on our disaster response imagery to identify differing levels of damage after wind events. I’ll add a link here as soon as Part 2 is online.

In Part 3, we’ll start to access the models we trained using the REST API. But if you’d like to get a head start on that and try the API out, here is a good tutorial on the Custom Vision website. You’ll find all of the access details you need to integrate into your app on the ‘Prediction URL’ dialog on the Performance tab: