This past week we released four new services in the Vexcel API. I’ll provide a brief description of each here, but be sure to click through to our full documentation for all of the details.
Digital Terrain Model (DTM): This service provides access to our DTM via a convenient tiled interface, very much like our Digital Surface Model (DSM), which was released earlier this year. Requests are made by passing in the X/Y/Z tile location, and the response is a single 256×256 GeoTIFF.
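For example, if the DTM endpoint follows the same pattern as the GetDSMTile endpoint shown later in this post, fetching a single tile from C# might look something like the sketch below. The GetDTMTile path and the layer value are my assumptions for illustration; check the documentation for the exact endpoint and parameters.
using System;
using System.Net;
class DtmTileExample
{
    static void Main()
    {
        string token = "YOURTOKENHERE";
        // Hypothetical endpoint name, patterned after GetDSMTile; confirm the real path in the docs
        string url = "https://api.gic.org/images/GetDTMTile/21/359441/836948/?layer=bluesky-ultra&token=" + token;
        using (var client = new WebClient())
        {
            // The response is a single 256x256 GeoTIFF tile for that X/Y/Z location
            client.DownloadFile(url, "dtm_tile.tiff");
        }
        Console.WriteLine("Saved dtm_tile.tiff");
    }
}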
Coverage APIs: When working with our imagery, it is important to be able to query the catalog to find out what imagery and data products are available for a given location or even for an entire country. Our ImageLayerCoverage service has provided this capability for our discrete imagery (individual oblique and nadir images). This week we added three new services that work in the same manner, allowing you to query Orthomosaic, DSM, and DTM coverage.
The response from these APIs is JSON with metadata indicating attributes like the name, resolution, and date range of each Area of Interest (AOI). The geometry of each AOI in the coverage is also returned. In each case you can pass a point or polygon in the optional WKT parameter to limit the response to just the coverage over the specified location. If you omit this parameter, coverage is returned for the entire layer, allowing you to cache coverage for offline use.
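As a rough sketch of the workflow, a coverage query from C# might look like the following. The DSMCoverage endpoint name and path are placeholders of my own; the real endpoint names and paths are in the documentation linked below.
using System;
using System.Net;
class CoverageExample
{
    static void Main()
    {
        string token = "YOURTOKENHERE";
        // Hypothetical endpoint path for the DSM coverage service; see the linked docs for the real one.
        // Passing a POINT in the optional wkt parameter limits the response to coverage at that location;
        // omitting wkt returns coverage for the entire layer.
        string url = "https://api.gic.org/metadata/DSMCoverage?wkt=POINT(-86.73649 36.175216)&token=" + token;
        using (var client = new WebClient())
        {
            // The response is JSON describing each matching AOI: name, resolution, date range, and geometry
            string json = client.DownloadString(url);
            Console.WriteLine(json);
        }
    }
}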
Here are links to the documentation for each of these endpoints:
Last week we had a chance to take a close look at the Opera House in Sydney, a city most people assume is the capital of Australia. The assumption is easy to understand, because Sydney is well known for its gorgeous harbour front and sunlit skyscrapers. The Australian capital, however, is Canberra, a name combining the European “Cranberry” and the Indigenous “Nganbra”. In the early 20th century, Canberra was given the entertaining nickname “The Bush Capital”. And when you visit Canberra, you cannot miss one of Australia’s iconic buildings, Parliament House, which you can even skydive over.
Our capture shows the complex design (a total of 20 volumes, forming a pile about a meter high), which was built in only 7 years (1981-1988) and involved more than 10,000 people. It has since become an enduring symbol of Australia’s democracy, and there is a famous quote from the artist describing the meaning of Parliament House:
“it…stands for this place where all people come and meet together…These Dreamings are part of this country that we live in…We’ve been trying to explain what the land means to us for the sake of all Australians.”
Additionally, the mosaic in Parliament’s forecourt carries complex layers of meaning, based on Jagamara’s painting Possum and Wallaby Dreaming. It illustrates a large gathering of people descended from the kangaroo, wallaby, and goanna ancestors, who come together to talk and to enact ceremonial obligations. From my point of view, it stands for the voice of the people, passing on knowledge about their country, culture, and laws.
Our quarantine capture shows that visitor numbers have declined significantly, yet the art in Parliament House and its meaningful message offer us a warm welcome any time.
The GIC collected imagery over the tornadoes that struck Alabama and Georgia this week, and it is now published to our platform. Our web app also has a damage heat map for the regions we have flown that helps spot the worst-hit areas. [read more on the heatmap here] You can also import your own property locations with the data import feature to visualize your properties of interest over the imagery and heat map. [tutorial on data import]
Here are the regions we flew, with coverage maps below. If your organization needs access to our web app, drop an email to support@gic.org with your request.
Brent/Centreville/Calera – Bibb, Chilton, and Shelby Counties, Alabama. An Eagle sensor was used for this tornado, collecting nadir images.
Helena/Pelham/Meadowbrook – Shelby County, Alabama. An Osprey sensor was used for this tornado, collecting nadir and oblique images.
Newnan/Peachtree City – Coweta and Fayette Counties, Georgia. An Eagle sensor was used for this tornado, collecting nadir images.
I’m glad to have Brian Duy Nguyen, our marketing intern, rebooting the quarantine capture series, our look at aerial imagery of famous places minus the crowds.
The quarantine capture series introduces one of the top destinations in Australia, the Sydney Opera House. In 2007, after its long journey, it was recognized as a UNESCO World Heritage site. For the international design competition held in 1956, Jørn Utzon drew inspiration for his design from bird wings, clouds, shells, walnuts, and palm trees. Queen Elizabeth II officially opened the finished building on 20 October 1973.
The Sydney Opera House sits on Bennelong Point, blending with nature. Walking along the waterfront in the evening and enjoying a performance at the Concert Hall or the Joan Sutherland Theatre can make life feel a little more poetic.
Reading our blog, opening these captures in full-sized view, and playing your favorite music will give your soul a good treat.
With our tile services it’s easy to add layers of imagery to applications that support WMTS and X/Y/Z tiles. In these applications, a ‘template’ URL is used with placeholders for the tile coordinates that are filled in by the application at runtime. A typical template URL would look something like this one for Vexcel’s blue sky ultra layer:
In both cases, the host application fills in the X, Y, and Z values dynamically to request the tiles needed to fill the map area. This is the most common use of tile services, but they can also be accessed programmatically to create maps by stitching a series of tiles together. Many map providers, such as Google and Bing, offer static map APIs that do this for you. In this article we’ll build a static map service that does exactly that.
Let’s start by looking at the completed service and trying it out. In its most basic form it returns a standard map image in PNG format, but you can optionally pass in a polygon with the WKT parameter and have that polygon drawn on the map as an overlay, or have the map cropped to that polygon. Let’s look at examples of all three, then we’ll dive into the code to implement it.
Here is a simple request to the service along with the resulting stitched and cropped image. The required parameters are the latitude, longitude, and zoom level of the desired map. layer is also a required parameter and should be one of the Vexcel imagery layers. width and height are optional and represent the size in pixels of the resulting bitmap; if not passed, they default to 400 pixels. The authtoken parameter is also required and carries the standard authentication token generated by the login method of the Vexcel API.
Next we’ll pass in a polygon to be drawn over the stitched map image. The rest of the parameters remain the same. The polygon itself is passed in the wkt parameter and, no surprise, should be in WKT format. The wktaction parameter specifies whether the polygon should be drawn or used for cropping; in this first case we set its value to ‘draw’.
And finally, in this third variant of the call we set the wktaction parameter to ‘crop’, causing the resulting image to be cropped to include only the area within the polygon.
Alright, on to the code to make all of this happen. The main workflow is in the Get() method. In it we:
Gather all of the parameters that were passed in and set defaults for any that were omitted.
Calculate the pixel dimensions of the bounding box, in global pixel coordinates, that will fit the desired image.
Calculate the tiles needed to cover that bounding box (the northwest and southeast tile coordinates).
Request all of the tiles and stitch them into a single bitmap.
If a polygon was passed in the WKT parameter, draw it over the image or mask off the image, as specified in the wktaction parameter.
Crop the image to the precise pixel size requested in the width and height parameters.
Return the bitmap as a PNG.
All of the code below is written in C# and built to run as a web service. The portions related to threading and drawing rely on the .NET framework, but all of this should be easily adapted to any other language and execution environment.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using System.Threading;
using System.Web;
using System.Net;
using System.IO;
using System.Drawing;
namespace GICPropertyInfo.Controllers
{
[Route("api/[controller]")]
[ApiController]
public class staticMapController : ControllerBase
{
TileImage[] tileImages;
public int tileSizeInPixels = 256;
public MemoryStream Get()
{
//Get URL parameters
double cntrLatitude = double.Parse(HttpUtility.ParseQueryString(Request.QueryString.ToString()).Get("latitude"));
double cntrLongitude = double.Parse(HttpUtility.ParseQueryString(Request.QueryString.ToString()).Get("longitude"));
string vexcelToken = HttpUtility.ParseQueryString(Request.QueryString.ToString()).Get("authtoken");
if (vexcelToken==null || vexcelToken=="")
return new System.IO.MemoryStream(System.Text.Encoding.UTF8.GetBytes("authtoken is a required parameter."));
int zoom;
string tmpZoom = HttpUtility.ParseQueryString(Request.QueryString.ToString()).Get("zoom");
if (tmpZoom == null || tmpZoom == "")
zoom = 21;
else
zoom = int.Parse(tmpZoom);
int widthInPixels;
string tmpWidth = HttpUtility.ParseQueryString(Request.QueryString.ToString()).Get("width");
if (tmpWidth == null || tmpWidth == "")
widthInPixels = 400;
else
widthInPixels = int.Parse(tmpWidth);
int heightInPixels;
string tmpHeight = HttpUtility.ParseQueryString(Request.QueryString.ToString()).Get("height");
if (tmpHeight == null || tmpHeight == "")
heightInPixels = 400;
else
heightInPixels = int.Parse(tmpHeight);
string layername = HttpUtility.ParseQueryString(Request.QueryString.ToString()).Get("layer");
if (layername ==null || layername=="")
layername = "bluesky-ultra";
string aoiName = HttpUtility.ParseQueryString(Request.QueryString.ToString()).Get("aoi");
if (aoiName == null || aoiName == "")
aoiName = "";
string wktpolygon = HttpUtility.ParseQueryString(Request.QueryString.ToString()).Get("wkt");
if (wktpolygon == null || wktpolygon == "")
wktpolygon = "";
string wktAction = HttpUtility.ParseQueryString(Request.QueryString.ToString()).Get("wktaction");
if (wktAction == null || wktAction == "")
wktAction = "draw";
//Calculate the pixel dimension bounding box
TileSystem.LatLongToPixelXY(cntrLatitude, cntrLongitude, zoom, out int pixelX, out int pixelY);
int minPixelX = pixelX - (widthInPixels / 2);
int maxPixelX = pixelX + (widthInPixels / 2);
int minPixelY = pixelY - (heightInPixels / 2);
int maxPixelY = pixelY + (heightInPixels / 2);
//calculate the tiles needed to cover that bounding box
TileSystem.PixelXYToTileXY(minPixelX, minPixelY, out int minTileX, out int minTileY);
TileSystem.PixelXYToTileXY(maxPixelX, maxPixelY, out int maxTileX, out int maxTileY);
//Get the Pixel location of the Upper Left Tile
TileSystem.TileXYToPixelXY(minTileX, minTileY, out int PixelXorigin, out int PixelYorigin);
//build an image from all the tiles
Image imgUncropped = getPropertyTiles(minTileX, maxTileX- minTileX+1, minTileY, maxTileY - minTileY + 1,
zoom, vexcelToken, layername,aoiName);
//If a polygon was passed in, this is the place to apply it...
//Either Draw it...
if (wktpolygon!="" && wktAction=="draw")
{
List<Coordinate> polygonCoordsLatLong = UtilityFunctions.convertWKTPolygonToCoordinates(wktpolygon);
imgUncropped = renderPolygonToBitmap(polygonCoordsLatLong, PixelXorigin, PixelYorigin, zoom,
imgUncropped, Color.Red, Color.Yellow);
}
//or Crop with it
if (wktpolygon != "" && wktAction == "crop")
{
List<Coordinate> polygonCoordsLatLong = UtilityFunctions.convertWKTPolygonToCoordinates(wktpolygon);
//Convert the polygon's lat/long coordinates to pixel coordinates
PointF[] polygonPixels = new PointF[polygonCoordsLatLong.Count];
PointF thisPointPixel = new PointF();
for (int thiscoord = 0; thiscoord < (polygonCoordsLatLong.Count); thiscoord++)
{
TileSystem.LatLongToPixelXY(polygonCoordsLatLong[thiscoord].latitude, polygonCoordsLatLong[thiscoord].longitude, zoom, out int drawPix1X, out int drawPix1Y);
thisPointPixel.X = drawPix1X - PixelXorigin;
thisPointPixel.Y = drawPix1Y - PixelYorigin;
polygonPixels[thiscoord] = thisPointPixel;
}
//test each pixel in the bitmap and black out any pixel outside of the polygon
Bitmap bmpCropped = new Bitmap(imgUncropped.Width, imgUncropped.Height);
Bitmap gUncropped = (Bitmap)(imgUncropped);
for (int locrow = 0; locrow < imgUncropped.Height; locrow++)
{
for (int loccol = 0; loccol < imgUncropped.Width; loccol++)
{
thisPointPixel = new PointF(loccol, locrow);
bool inside = UtilityFunctions.IsPointInPolygon(polygonPixels, thisPointPixel);
if (inside)
{
bmpCropped.SetPixel(loccol, locrow, gUncropped.GetPixel(loccol, locrow));
} else
{
bmpCropped.SetPixel(loccol, locrow, Color.FromArgb(0, 0, 0));
}
}
}
imgUncropped = bmpCropped;
}
//crop the image down to the desired pixel width and height.
//IMPORTANT: offset it to crop the 'centered' area as expected
Rectangle cropRect = new Rectangle(minPixelX - (minTileX * tileSizeInPixels), minPixelY -(minTileY * tileSizeInPixels),
widthInPixels, heightInPixels);
Bitmap target = new Bitmap(cropRect.Width, cropRect.Height);
Bitmap src = (Bitmap)imgUncropped;
using (Graphics g = Graphics.FromImage(target))
{
g.DrawImage(src, new Rectangle(0, 0, target.Width, target.Height),
cropRect, GraphicsUnit.Pixel);
}
//Now return the resulting cropped final bitmap as a PNG
MemoryStream memstream = new MemoryStream();
target.Save(memstream, System.Drawing.Imaging.ImageFormat.Png);
memstream.Seek(0, SeekOrigin.Begin);
Response.ContentType = "image/png";
return memstream;
}
//This function takes a List of lat/longs, converts them to pixel coords, and draws them over the map
private Image renderPolygonToBitmap(List<Coordinate> polygonCoordsLatLong,int PixelXorigin, int PixelYorigin ,int zoom,
Image bmp, Color edgeColor, Color vertexColor)
{
var g = Graphics.FromImage(bmp);
for (int thiscoord = 0; thiscoord < (polygonCoordsLatLong.Count - 1); thiscoord++)
{
TileSystem.LatLongToPixelXY(polygonCoordsLatLong[thiscoord].latitude, polygonCoordsLatLong[thiscoord].longitude, zoom, out int drawPix1X, out int drawPix1Y);
TileSystem.LatLongToPixelXY(polygonCoordsLatLong[thiscoord + 1].latitude, polygonCoordsLatLong[thiscoord + 1].longitude, zoom, out int drawPix2X, out int drawPix2Y);
g.DrawLine(new Pen(edgeColor, 2), new Point(drawPix1X - PixelXorigin, drawPix1Y - PixelYorigin), new Point(drawPix2X - PixelXorigin, drawPix2Y - PixelYorigin));
int radius = 3;
g.DrawEllipse(new Pen(vertexColor, 2), (drawPix1X - PixelXorigin) - radius, (drawPix1Y - PixelYorigin) - radius,
radius + radius, radius + radius);
}
return bmp;
}
//this function requests the needed tiles from Vexcel's tile services. For efficiency, each tile
//is requested on a separate thread to gather all the tiles in parallel
private Image getPropertyTiles(int originXtile, int numXtiles, int originYtile,
int numYtiles, int originZtile, string VexcelToken, string layerName, string AOIname)
{
//Create a Bitmap for the RGB tiles
var bmp = new System.Drawing.Bitmap(tileSizeInPixels * numXtiles, tileSizeInPixels * numYtiles);
var g = Graphics.FromImage(bmp);
//we'll need a task per tile
int numberOfTiles = numYtiles * numXtiles;
Task[] tasks = new Task[numberOfTiles];
tileImages = new TileImage[numberOfTiles];
int tileCnt = -1;
int colIndx = 0;
int rowIndx = 0;
//First, loop through each desired tile and create a TileImage object to be filled asynchronously
for (int row = originYtile; row < originYtile + numYtiles; row++)
{
colIndx = 0;
for (int col = originXtile; col < originXtile + numXtiles; col++)
{
tileCnt++;
TileImage thisTileImage = new TileImage();
thisTileImage.xIndx = colIndx;
thisTileImage.yindx = rowIndx;
thisTileImage.URL = "https://api.gic.org/images/GetOrthoImageTile/" + layerName + "/" + originZtile + "/" +
col + "/" + row + "/true?format=image%2Fjpeg&token=" + VexcelToken;
if (AOIname != "")
thisTileImage.URL += "&AOI=" + AOIname;
thisTileImage.requestID = "t" +tileCnt;
tileImages[tileCnt] = thisTileImage;
colIndx++;
}
rowIndx++;
}
//Now go fetch each tile asynchronously
tileCnt = 0;
DateTime startTime = DateTime.Now;
foreach (TileImage thisTI in tileImages)
{
tasks[tileCnt] = Task.Run(() =>
{
fetchATile(thisTI);
});
tileCnt++;
}
//wait here for all threads to complete.
Task.WaitAll(tasks);
//loop again to stitch the tiles together into a single bitmap
colIndx = 0;
rowIndx = 0;
tileCnt = -1;
for (int row = originYtile; row < originYtile + numYtiles; row++)
{
colIndx = 0;
for (int col = originXtile; col < originXtile + numXtiles; col++)
{
tileCnt++;
//Skip any tile that failed to download; that cell of the stitched bitmap is simply left blank
if (tileImages[tileCnt].imgTile != null)
g.DrawImage(tileImages[tileCnt].imgTile, tileSizeInPixels * colIndx, tileSizeInPixels * rowIndx);
colIndx++;
}
rowIndx++;
}
Image imageRGB = bmp;
return imageRGB;
}
private TileImage fetchATile(TileImage thisTileImage)
{
thisTileImage.imgTile = fetchImageFromURL(thisTileImage.URL);
return thisTileImage;
}
private Image fetchImageFromURL(string imageURL)
{
Image imageRGB =null;
if (imageURL == null || imageURL == "")
return imageRGB;
//fetch it
using (WebClient webClient = new WebClient())
{
DateTime startTime = DateTime.Now;
try
{
byte[] data = webClient.DownloadData(imageURL);
MemoryStream memstream = new MemoryStream(data);
imageRGB = Bitmap.FromStream(memstream);
}
catch (Exception)
{
imageRGB = null;
}
}
return imageRGB;
}
}
}
And here are the two utility functions used by the code above.
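Here is a minimal sketch of those two helpers, UtilityFunctions.convertWKTPolygonToCoordinates and UtilityFunctions.IsPointInPolygon, along with the small Coordinate and TileImage classes the controller expects. Treat this as an illustrative stand-in rather than the exact code behind the service; a production version should use a proper WKT parsing library.
using System;
using System.Collections.Generic;
using System.Drawing;
using System.Globalization;
namespace GICPropertyInfo
{
    //Simple lat/long pair used throughout the controller
    public class Coordinate
    {
        public double latitude;
        public double longitude;
    }
    //Holds one tile request and the image that comes back for it
    public class TileImage
    {
        public int xIndx;
        public int yindx;
        public string URL;
        public string requestID;
        public Image imgTile;
    }
    static class UtilityFunctions
    {
        //Parses a simple "POLYGON((lon lat, lon lat, ...))" string into a list of coordinates.
        //Minimal parser for illustration only; it does not handle multipolygons or holes.
        public static List<Coordinate> convertWKTPolygonToCoordinates(string wkt)
        {
            var coords = new List<Coordinate>();
            int start = wkt.IndexOf("((") + 2;
            int end = wkt.IndexOf("))");
            string body = wkt.Substring(start, end - start);
            foreach (string pair in body.Split(','))
            {
                string[] parts = pair.Trim().Split(' ');
                coords.Add(new Coordinate
                {
                    longitude = double.Parse(parts[0], CultureInfo.InvariantCulture),
                    latitude = double.Parse(parts[1], CultureInfo.InvariantCulture)
                });
            }
            return coords;
        }
        //Standard ray-casting point-in-polygon test, performed in pixel space
        public static bool IsPointInPolygon(PointF[] polygon, PointF point)
        {
            bool inside = false;
            int j = polygon.Length - 1;
            for (int i = 0; i < polygon.Length; j = i++)
            {
                if (((polygon[i].Y > point.Y) != (polygon[j].Y > point.Y)) &&
                    (point.X < (polygon[j].X - polygon[i].X) * (point.Y - polygon[i].Y) /
                               (polygon[j].Y - polygon[i].Y) + polygon[i].X))
                {
                    inside = true;
                }
            }
            return inside;
        }
    }
}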
There are also a number of coordinate conversion methods that were taken from this excellent article on the Bing Maps tile system. For convenience, I’ve included these helper functions here as well:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using System.Text;
namespace GICPropertyInfo
{
static class TileSystem
{
private const double EarthRadius = 6378137;
private const double MinLatitude = -85.05112878;
private const double MaxLatitude = 85.05112878;
private const double MinLongitude = -180;
private const double MaxLongitude = 180;
/// <summary>
/// Clips a number to the specified minimum and maximum values.
/// </summary>
/// <param name="n">The number to clip.</param>
/// <param name="minValue">Minimum allowable value.</param>
/// <param name="maxValue">Maximum allowable value.</param>
/// <returns>The clipped value.</returns>
private static double Clip(double n, double minValue, double maxValue)
{
return Math.Min(Math.Max(n, minValue), maxValue);
}
/// <summary>
/// Determines the map width and height (in pixels) at a specified level
/// of detail.
/// </summary>
/// <param name="levelOfDetail">Level of detail, from 1 (lowest detail)
/// to 23 (highest detail).</param>
/// <returns>The map width and height in pixels.</returns>
public static uint MapSize(int levelOfDetail)
{
return (uint)256 << levelOfDetail;
}
/// <summary>
/// Determines the ground resolution (in meters per pixel) at a specified
/// latitude and level of detail.
/// </summary>
/// <param name="latitude">Latitude (in degrees) at which to measure the
/// ground resolution.</param>
/// <param name="levelOfDetail">Level of detail, from 1 (lowest detail)
/// to 23 (highest detail).</param>
/// <returns>The ground resolution, in meters per pixel.</returns>
public static double GroundResolution(double latitude, int levelOfDetail)
{
latitude = Clip(latitude, MinLatitude, MaxLatitude);
return Math.Cos(latitude * Math.PI / 180) * 2 * Math.PI * EarthRadius / MapSize(levelOfDetail);
}
/// <summary>
/// Determines the map scale at a specified latitude, level of detail,
/// and screen resolution.
/// </summary>
/// <param name="latitude">Latitude (in degrees) at which to measure the
/// map scale.</param>
/// <param name="levelOfDetail">Level of detail, from 1 (lowest detail)
/// to 23 (highest detail).</param>
/// <param name="screenDpi">Resolution of the screen, in dots per inch.</param>
/// <returns>The map scale, expressed as the denominator N of the ratio 1 : N.</returns>
public static double MapScale(double latitude, int levelOfDetail, int screenDpi)
{
return GroundResolution(latitude, levelOfDetail) * screenDpi / 0.0254;
}
/// <summary>
/// Converts a point from latitude/longitude WGS-84 coordinates (in degrees)
/// into pixel XY coordinates at a specified level of detail.
/// </summary>
/// <param name="latitude">Latitude of the point, in degrees.</param>
/// <param name="longitude">Longitude of the point, in degrees.</param>
/// <param name="levelOfDetail">Level of detail, from 1 (lowest detail)
/// to 23 (highest detail).</param>
/// <param name="pixelX">Output parameter receiving the X coordinate in pixels.</param>
/// <param name="pixelY">Output parameter receiving the Y coordinate in pixels.</param>
public static void LatLongToPixelXY(double latitude, double longitude, int levelOfDetail, out int pixelX, out int pixelY)
{
latitude = Clip(latitude, MinLatitude, MaxLatitude);
longitude = Clip(longitude, MinLongitude, MaxLongitude);
double x = (longitude + 180) / 360;
double sinLatitude = Math.Sin(latitude * Math.PI / 180);
double y = 0.5 - Math.Log((1 + sinLatitude) / (1 - sinLatitude)) / (4 * Math.PI);
uint mapSize = MapSize(levelOfDetail);
pixelX = (int)Clip(x * mapSize + 0.5, 0, mapSize - 1);
pixelY = (int)Clip(y * mapSize + 0.5, 0, mapSize - 1);
}
/// <summary>
/// Converts a pixel from pixel XY coordinates at a specified level of detail
/// into latitude/longitude WGS-84 coordinates (in degrees).
/// </summary>
/// <param name="pixelX">X coordinate of the point, in pixels.</param>
/// <param name="pixelY">Y coordinates of the point, in pixels.</param>
/// <param name="levelOfDetail">Level of detail, from 1 (lowest detail)
/// to 23 (highest detail).</param>
/// <param name="latitude">Output parameter receiving the latitude in degrees.</param>
/// <param name="longitude">Output parameter receiving the longitude in degrees.</param>
public static void PixelXYToLatLong(int pixelX, int pixelY, int levelOfDetail, out double latitude, out double longitude)
{
double mapSize = MapSize(levelOfDetail);
double x = (Clip(pixelX, 0, mapSize - 1) / mapSize) - 0.5;
double y = 0.5 - (Clip(pixelY, 0, mapSize - 1) / mapSize);
latitude = 90 - 360 * Math.Atan(Math.Exp(-y * 2 * Math.PI)) / Math.PI;
longitude = 360 * x;
}
/// <summary>
/// Converts pixel XY coordinates into tile XY coordinates of the tile containing
/// the specified pixel.
/// </summary>
/// <param name="pixelX">Pixel X coordinate.</param>
/// <param name="pixelY">Pixel Y coordinate.</param>
/// <param name="tileX">Output parameter receiving the tile X coordinate.</param>
/// <param name="tileY">Output parameter receiving the tile Y coordinate.</param>
public static void PixelXYToTileXY(int pixelX, int pixelY, out int tileX, out int tileY)
{
tileX = pixelX / 256;
tileY = pixelY / 256;
}
/// <summary>
/// Converts tile XY coordinates into pixel XY coordinates of the upper-left pixel
/// of the specified tile.
/// </summary>
/// <param name="tileX">Tile X coordinate.</param>
/// <param name="tileY">Tile Y coordinate.</param>
/// <param name="pixelX">Output parameter receiving the pixel X coordinate.</param>
/// <param name="pixelY">Output parameter receiving the pixel Y coordinate.</param>
public static void TileXYToPixelXY(int tileX, int tileY, out int pixelX, out int pixelY)
{
pixelX = tileX * 256;
pixelY = tileY * 256;
}
/// <summary>
/// Converts tile XY coordinates into a QuadKey at a specified level of detail.
/// </summary>
/// <param name="tileX">Tile X coordinate.</param>
/// <param name="tileY">Tile Y coordinate.</param>
/// <param name="levelOfDetail">Level of detail, from 1 (lowest detail)
/// to 23 (highest detail).</param>
/// <returns>A string containing the QuadKey.</returns>
public static string TileXYToQuadKey(int tileX, int tileY, int levelOfDetail)
{
StringBuilder quadKey = new StringBuilder();
for (int i = levelOfDetail; i > 0; i--)
{
char digit = '0';
int mask = 1 << (i - 1);
if ((tileX & mask) != 0)
{
digit++;
}
if ((tileY & mask) != 0)
{
digit++;
digit++;
}
quadKey.Append(digit);
}
return quadKey.ToString();
}
public static void MetersXYToLatLong(double x, double y, out double latitude, out double longitude)
{
latitude = 90 - 360 * Math.Atan(Math.Exp(-y / EarthRadius)) / Math.PI;
longitude = 360 * x / (EarthRadius * 2 * Math.PI);
}
/// <summary>
/// Converts a QuadKey into tile XY coordinates.
/// </summary>
/// <param name="quadKey">QuadKey of the tile.</param>
/// <param name="tileX">Output parameter receiving the tile X coordinate.</param>
/// <param name="tileY">Output parameter receiving the tile Y coordinate.</param>
/// <param name="levelOfDetail">Output parameter receiving the level of detail.</param>
public static void QuadKeyToTileXY(string quadKey, out int tileX, out int tileY, out int levelOfDetail)
{
tileX = tileY = 0;
levelOfDetail = quadKey.Length;
for (int i = levelOfDetail; i > 0; i--)
{
int mask = 1 << (i - 1);
switch (quadKey[levelOfDetail - i])
{
case '0':
break;
case '1':
tileX |= mask;
break;
case '2':
tileY |= mask;
break;
case '3':
tileX |= mask;
tileY |= mask;
break;
default:
throw new ArgumentException("Invalid QuadKey digit sequence.");
}
}
}
}
}
This week we launched a new API providing access to our high-resolution Digital Surface Model (DSM) at 7.5 cm! This data has been available to our customers for the past few years via download of large GeoTIFFs from our cloud storage, which is ideal for offline use. Now, with the DSM API, a whole world of applications opens up through on-demand access to the surface model on an individual property basis. With the surface model, you know the elevation in meters above the Earth’s surface for every pixel. Whether you simply need to know the height of an exhaust pipe on a building or you need to accurately measure the north-facing facet of a roof, our DSM provides the necessary data. Detecting the footprint of a building, calculating its volume, or identifying and measuring individual roof slopes and sizes are all straightforward operations with our DSM.
And it couldn’t be easier to use. The API returns standard 256×256 tiles in GeoTIFF format. Because these tiles conform to the WMTS spec, they align perfectly with other Vexcel data like our True Orthomosaic tiles and Near-Infrared tiles, as well as tiled data from other providers in the industry. Here is an excellent overview of the standard tiling system from Microsoft, which also includes some helpful sample code for converting between lat/long and tile coordinates. If you know how to request an X/Y/Z tile from another provider, you already know how to request data from our DSM API. From street map tiles from OpenStreetMap to land parcels from Landgrid, our DSM aligns perfectly without any painful up/down sampling or transformation of the data. And of course it aligns with our own high-quality aerial imagery.
Here are two tiles at location Tile X = 359441, Tile Y = 836948, Zoom = 21. The first tile is from the Vexcel Blue Sky Ultra orthomosaic. The second image is a simple rendering of the SAME geography’s DSM. Note the clean alignment between them. The black areas are flat to the Earth’s surface, with the brightest red areas representing the highest points. The delta here is 11.87 meters from the ground to the flat roof. The elevation of EVERY pixel is easily read from the GeoTIFF.
Here is a series of tiles stitched together showing the orthomosaic and the DSM together. Use the slider to compare.
Getting started with the DSM API is easy. Begin with the documentation, which you can access here. If all you need to do is display the DSM data in an application like QGIS that knows how to render a DSM, simply download some tiles and drop them into a supported application. But if you want to access the elevation data programmatically in your own applications, you’ll likely want to use one of the many code libraries out there that make working with TIFF images (or the GeoTIFF extension) quite easy. Here is a link to LibTiff, a very popular library for opening TIFF files: http://libtiff.org/ And for .NET developers, here is a port of LibTiff for .NET: https://bitmiracle.com/libtiff/ LibTiff is so popular that many of the available GeoTIFF libraries rely on it under the hood, so you can’t go wrong looking into LibTiff to speed your development process with our DSM.
Below is some sample code written in C# that shows how to open a DSM GeoTIFF tile and read the matrix of elevation values out of it. It relies on the aforementioned LibTiff library, but you should be able to adapt this code to any language and library of your choice. There are two important extended tags in the GeoTIFF that you will want to be aware of: extended tag 33550 contains the pixel size information, and tag 33922 contains the X/Y location on Earth of the tile, in meters. These tags are highlighted in the comments in the code below.
This should get you off to a good start working with the DSM API, but as always, reach out to our support team with any questions: support@vexcelgroup.com. In future articles here, we’ll build on the code below to do more interesting things like building footprint detection and 3D extrusion.
using System;
using BitMiracle.LibTiff.Classic;
using System.Net;
namespace DSMtileConsole
{
class Program
{
private const double EarthRadius = 6378137;
static void Main(string[] args)
{
string outputFolder = System.IO.Path.GetTempPath();
string VexcelToken = "YOURTOKENHERE";
//Request a DSM Geotiff tile from the service and save it as a local file
string filename = outputFolder + "TestDSMTile.tiff";
string imageURL_DSM = "https://api.gic.org/images/GetDSMTile/21/359441/836948/?layer=bluesky-ultra" +
"&token=" + VexcelToken;
fetchImageFromURL(imageURL_DSM, filename);
//Open and Parse the Geotiff file
using (Tiff tiff = Tiff.Open(filename, "r"))
{
if (tiff == null)
{
// Error - could not open
Console.WriteLine("ERROR: Couldn't open the file. ");
return;
}
Console.WriteLine("Input File: " + filename);
//Read the image Height and width from standard tags
int imageWidth = tiff.GetField(TiffTag.IMAGEWIDTH)[0].ToInt();
int imageHeight = tiff.GetField(TiffTag.IMAGELENGTH)[0].ToInt();
Console.WriteLine("Image width: " + imageWidth + " Image height: " + imageHeight);
//Parse the pixel width and height. This information is contained in extended tag 33550
//These values are each 8-byte doubles. The first double is PixelSizeX and the second is PixelSizeY
byte[] destin = new byte[8];
FieldValue[] values = tiff.GetField((TiffTag)33550);
byte[] bytesGeo = values[1].ToByteArray();
Array.Copy(bytesGeo, 0, destin, 0, 8);
double PixelSizeX = BitConverter.ToDouble(destin, 0);
Array.Copy(bytesGeo, 8, destin, 0, 8);
double PixelSizeY = BitConverter.ToDouble(destin, 0);
Console.WriteLine("Pixel Size X: " + PixelSizeX + " Pixel Size Y: " + PixelSizeY);
//Parse the X and Y origin. This represents the coordinate of the upper-left pixel of the GeoTIFF.
//These values are in meters and found in extended tag 33922.
//They are each 8-byte doubles. The 4th double is OffsetX and the 5th is OffsetY
values = tiff.GetField((TiffTag)33922);
bytesGeo = values[1].ToByteArray();
Array.Copy(bytesGeo, 24, destin, 0, 8);
double OriginMetersX = BitConverter.ToDouble(destin, 0);
Array.Copy(bytesGeo, 32, destin, 0, 8);
double OriginMetersY = BitConverter.ToDouble(destin, 0);
Console.WriteLine("X Origin: " + OriginMetersX + " Y Origin: " + OriginMetersY);
//Convert the X,Y from meters to decimal degrees
MetersXYToLatLong(OriginMetersX, OriginMetersY, out double originLatitude, out double originLongitude);
Console.WriteLine("Origin Longitude: " + originLongitude + " Origin Latitude: " + originLatitude);
//Now we can start reading the body of the geotiff containing the elevation data
//The body is organized as a series of 4-byte Singles
Console.WriteLine("Reading the body of the file...");
int stripSize = tiff.StripSize();
int stripMax = tiff.NumberOfStrips();
int bufferSize = stripMax * stripSize;
byte[] buf = new byte[bufferSize];
int result = tiff.ReadEncodedTile(0, buf, 0, stripSize);
//Scan the file. Report each pixel's elevation and find the min/max elevation along the way
Single minElevation = Single.MaxValue;
Single maxElevation = Single.MinValue;
Single curElevation = 0;
FieldValue[] value = tiff.GetField(TiffTag.IMAGELENGTH);
int imageLength = value[0].ToInt();
string outline = "";
for (int row = 0; row < imageLength; row++)
{
outline = "Row " + row.ToString().PadRight(4,' ');
for (int col = 0; col < (buf.Length / (imageWidth * 4)); col++)
{
int offset = (row * (imageWidth * 4)) + (col * 4);
byte[] bytes = { buf[offset], buf[offset + 1], buf[offset + 2], buf[offset + 3] };
curElevation = BitConverter.ToSingle(bytes, 0);
minElevation = Math.Min(minElevation, curElevation);
maxElevation = Math.Max(maxElevation, curElevation);
outline += Truncate(curElevation.ToString(), 6).PadLeft(8, ' ');
}
//Console.WriteLine(outline);
}
Single elevationDelta = maxElevation - minElevation;
Console.WriteLine("Elevation range: " + minElevation + " to " + maxElevation);
Console.WriteLine("Elevation delta: " + elevationDelta );
}
}
public static void fetchImageFromURL(string imageURL, string outputFilePath)
{
using (WebClient webClient = new WebClient()) {
try {
webClient.DownloadFile(imageURL, outputFilePath);
}
catch (Exception)
{
//Download failures are ignored here for brevity; Tiff.Open below will report an error if the file is missing
}
}
return;
}
public static void MetersXYToLatLong(double x, double y, out double latitude, out double longitude)
{
//Adapted from https://docs.microsoft.com/en-us/bingmaps/articles/bing-maps-tile-system Thanks Joe!!
latitude = 90 - 360 * Math.Atan(Math.Exp(-y / EarthRadius)) / Math.PI;
longitude = 360 * x / (EarthRadius * 2 * Math.PI);
}
public static string Truncate(string source, int length)
{
if (source.Length > length)
{
source = source.Substring(0, length);
}
return source;
}
}
}
Aerial imagery over the area hit by this week’s tornado in Alabama is live on our platform. Just over 250 square kilometers of imagery, including 45-degree oblique imagery, has been published covering the northern Birmingham suburb of Fultondale. A computer-vision-generated heat map of the damage is also available to all GIC members in our web app.
Below is a screengrab showing the area flown with the heat map turned on. If your organization needs access to our web app, drop an email to support@gic.org with your request.
The Image Gallery is a relatively new feature in our web application, providing a means of navigating our image library for a given location with folders and filters. After searching for a property, you can hit the ‘See all images at this location’ link in the infobox to bring it up. You can then browse all the imagery we have at the location, filtering by year, image type, orientation, and more.
The only downside is that if you click one of the thumbnails and go into full-screen interactive mode, you have to repeat the entire filtering process to get to the next image you want to see.
Well, things just got a lot more convenient in the Vexcel Web App! The new Image Gallery Sidebar stays docked in the left rail, enabling you to quickly filter, view, and repeat without losing context. It is the third tool down on the toolbar, as highlighted here:
All of the same filters from the full-screen interface are present. Choose your year, filter by image type or orientation, then click away on the preview thumbnails. As you click, each image takes over the interactive map window. You can navigate around in the interactive window, change orientation, and so on, and then simply click the next thumbnail you want to interact with. This is the smoothest way to explore our image library to date, although I still love using the history tool on occasion.
A question came in to support this week on how to select the best image for each day of collection. When we fly and collect aerial imagery, we typically end up with many shots of a property, each just a second or two apart. Add in the fact that we have multiple images in each oblique orientation of north, south, east, and west, and you can see how we end up with on the order of 50 views of each property every time we fly over. All of these images can be very helpful for detailed analysis of a property, but sometimes you want to present just the best image from each day of flight in your application.
Here is an easy solution that takes advantage of the date filters on the ExtractImages endpoint. There are three steps in the process:
Query the catalog to get all of the metadata for all collections at your coordinate of interest
Parse the results to create a list of dates that have imagery
Make a follow-up call to ExtractImages for EACH of the days found in step two and ask for the single best image on that day
Here is a sample app showing the three steps together. You need to plug in your Vexcel account credentials, then specify any lat/long coordinate and hit go! The app is all client-side JavaScript, so just view the source to see the API calls being made.
Using this coordinate (36.175216, -86.73649), let’s look at the actual API calls for steps 1 and 3.
Step 1. Find all image metadata at the point
We’re going to focus on just the nadir (vertical) imagery. Here is the API call to query the Vexcel catalog at this coordinate. In the response you’ll see we have 57 nadir images for this property!
Step 2. Parse the JSON from step one to find the dates with at least one image.
We want to build a list of days that have at least one image. The best way to do this is to sort the 57 images by capture date, then iterate through and build a list of unique dates. In this case we end up with a list of 5 unique dates, from September 9, 2012 to March 4, 2020.
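The sample app does this in client-side JavaScript, but the same idea in C# is just a sort-and-distinct over the capture timestamps. In the sketch below, the input list is a placeholder standing in for the timestamps parsed from the step 1 response; the exact JSON field names depend on the catalog response, so adapt the parsing to what you see in your own results.
using System;
using System.Collections.Generic;
using System.Linq;
class UniqueCaptureDates
{
    //Given the capture timestamps parsed from the step 1 metadata, return the distinct days, oldest first
    static List<DateTime> GetUniqueDates(IEnumerable<DateTime> captureTimes)
    {
        return captureTimes
            .Select(t => t.Date)   //drop the time-of-day portion
            .Distinct()
            .OrderBy(d => d)
            .ToList();
    }
    static void Main()
    {
        //Placeholder timestamps standing in for the 57 capture dates returned by the catalog
        var captures = new List<DateTime>
        {
            new DateTime(2012, 9, 9, 14, 2, 11),
            new DateTime(2012, 9, 9, 14, 2, 13),
            new DateTime(2020, 3, 4, 16, 41, 5)
        };
        foreach (DateTime day in GetUniqueDates(captures))
            Console.WriteLine(day.ToString("yyyy-MM-dd"));
    }
}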
Step 3. Request the best image on each date found in step 2
For each of these calls, note the initDate and EndDate parameters. Also note mode=one, which returns the best image in the date range. We can make a call to ExtractImages like this for each of the 5 dates we found in step 2. Here are a couple of them with their resulting images.
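As a rough illustration, the step 3 loop could be built like this in C#. The base URL, layer value, and latitude/longitude parameter names below are my assumptions patterned on other calls in this post; initDate, EndDate, and mode=one are the parameters described above. Confirm the exact query string against the sample app's source and the ExtractImages documentation.
using System;
using System.Collections.Generic;
using System.Globalization;
using System.Net;
class BestImagePerDay
{
    static void Main()
    {
        string token = "YOURTOKENHERE";
        string lat = (36.175216).ToString(CultureInfo.InvariantCulture);
        string lon = (-86.73649).ToString(CultureInfo.InvariantCulture);
        //The unique dates found in step 2 (placeholder values here)
        var days = new List<DateTime> { new DateTime(2012, 9, 9), new DateTime(2020, 3, 4) };
        using (var client = new WebClient())
        {
            foreach (DateTime day in days)
            {
                //mode=one asks for the single best image in the date range; initDate and EndDate
                //bound the range to one day. The endpoint path and parameter names here are illustrative.
                string url = "https://api.gic.org/images/ExtractImages/bluesky-ultra" +
                    "?lat=" + lat + "&lon=" + lon +
                    "&initDate=" + day.ToString("yyyy-MM-dd") +
                    "&EndDate=" + day.ToString("yyyy-MM-dd") +
                    "&mode=one&token=" + token;
                client.DownloadFile(url, "best_" + day.ToString("yyyyMMdd") + ".jpg");
            }
        }
    }
}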
A nice new feature was added to the Vexcel API this week: the ability to crop an image to an arbitrary shape, masking off the rest of the image in black. This is very useful in computer vision applications where you don’t want neighboring pixels to interfere with the analysis. A couple of example use cases:
When looking for rooftop solar panels, you can crop the image to the building footprint or parcel boundary of the target property so that an adjacent home with solar panels will not interfere.
When assessing damage after a hurricane, cropping to the building footprint will prevent the storm’s debris field from coloring the analysis of the target structure.
Let’s look at an example. This is a typical image with three homes in it.
If you want to limit the analysis to the building at the center, you can pass a polygon in well-known text (WKT) format as shown in this URL. You’ll also want to set the ‘crop’ parameter to ‘exact’.
Here is another example utilizing recent imagery from the Oregon wildfires. In this case, cropping to the building footprint prevents neighboring properties from adversely affecting the analysis that determines whether the building has been destroyed. Here is the API call:
Here are a couple of additional helpful resources. Microsoft has made large sets of building footprints available for the United States and Canada. You can download them in convenient zip files here.
When passing in a polygon representing a land parcel or building footprint, you might want to buffer or ‘inflate’ the polygon before passing it in. This can be helpful because your geometry might not match up with our imagery perfectly; allowing a bit of fudge factor can ensure the entire intended area is included in the cropped image. I’ve used the Clipper library for this and have found it convenient and easy to use. It can also be installed via the NuGet package manager. In the second example above, Clipper was used to inflate the footprint by 20%.
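To make that concrete, here is a rough sketch of the inflation step using Clipper’s polygon offsetting. Clipper works in integer coordinates, so the lat/long vertices are scaled up before offsetting and scaled back down afterwards. The 20% is interpreted here as a fraction of the footprint’s width, which is one reasonable way to turn a percentage into the absolute offset distance Clipper expects; your own buffering logic may differ.
using System;
using System.Collections.Generic;
using System.Linq;
using ClipperLib;
class InflateFootprint
{
    const double Scale = 1e7; //Clipper uses integer coordinates, so scale degrees up to preserve precision

    //Inflate a closed lat/long polygon outward by a percentage of its width
    static List<(double lon, double lat)> Inflate(List<(double lon, double lat)> footprint, double percent)
    {
        var path = footprint
            .Select(p => new IntPoint((long)(p.lon * Scale), (long)(p.lat * Scale)))
            .ToList();
        //Use a fraction of the bounding-box width as the absolute offset distance
        long width = path.Max(p => p.X) - path.Min(p => p.X);
        double delta = width * percent;
        var offsetter = new ClipperOffset();
        offsetter.AddPath(path, JoinType.jtMiter, EndType.etClosedPolygon);
        var solution = new List<List<IntPoint>>();
        offsetter.Execute(ref solution, delta);
        //Take the first resulting polygon and convert back to degrees
        return solution[0]
            .Select(p => (p.X / Scale, p.Y / Scale))
            .ToList();
    }

    static void Main()
    {
        //A small, purely illustrative footprint
        var footprint = new List<(double lon, double lat)>
        {
            (-86.73660, 36.17515), (-86.73640, 36.17515),
            (-86.73640, 36.17528), (-86.73660, 36.17528)
        };
        foreach (var v in Inflate(footprint, 0.20))
            Console.WriteLine(v.lon + " " + v.lat);
    }
}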