
Durham University


Mr. Matthew Wiecek, M.A.

Research Postgraduate (PhD) in the Department of Geography


1: PhD Research

Introduction and Motivation

Deforestation is a major component of climate change

Forests are a critical part of the global carbon cycle, and they have a powerful effect on atmospheric carbon dioxide concentrations. Tropical land use changes, such as increases in agriculture, logging and mining, are causing extensive deforestation and forest degradation (Pan et al., 2011). As trees are cut down, solid carbon is released into the atmosphere as carbon dioxide. Deforestation is currently the second largest anthropogenic source of carbon after fossil fuels (Van der Werf et al., 2009; Pan et al., 2011), and tropical deforestation and forest degradation are key parts of this (Houghton 2012). In addition to the impact on global climate, deforestation will increase flooding, reduce ecosystem services, promote water-related diseases, and negatively impact agriculture (Bovolo et al., 2018).

The opposite is also possible: changes in forest extent can remove carbon dioxide from the atmosphere. When trees grow, they sequester atmospheric carbon back into solid form, and reduce the concentration of carbon dioxide in the atmosphere. Forests are therefore an important carbon sink (Pan et al., 2011). Changes in forest cover are therefore a vital part of climate change forecasting and climate change solutions. However, deforestation is currently a major source of uncertainty in global climate change forecasts (Asuka Suzuki-Parker, pers. comm.), making it essential that certainty and precision be improved when monitoring changes in forest cover.

Deforestation is defined as the destruction of an area of forest, such that the area is no longer forest. It can be caused by clear-cutting, agriculture, or severe fires. In deforestation mapping, it is currently impossible to accurately identify the edge of a shifting forest boundary, because the shortest trees are indistinguishable from grassland in a satellite image (Morley et al., 2019). Forest degradation is an anthropogenic process defined in terms of loss of carbon stock: a loss of trees that does not push the forest over the threshold into non-forest. Degradation is harder to detect, because the canopy quickly regrows and the loss of trees may not be visible from above after a few years.

The only effective way of monitoring large areas of forest quickly is by using satellite images (Lynch et al., 2020), so this research will be based on satellite images. Different remote sensing sensors record very different information. Multispectral images record information in the visible and infrared spectra. Data is recorded for several bands that can include blue, green, red, and various infrared bands, and this data describes the spectral reflectance properties of objects. Radar images, meanwhile, record one band of information on the surface texture and dielectric properties of objects. These sensors therefore provide complementary information, and the inclusion of radar data may incorporate information that multispectral sensors cannot record at any resolution.

This research will examine the possibility that combining free multispectral images with free radar images will reduce uncertainty and improve applicability in a cost-effective way. It may be possible to solve these problems with a higher-information dataset. Commercial satellite image providers offer very high resolution images, but these are prohibitively expensive. There would be great benefits to creating a dataset that equals high-resolution commercial images in information content, but comes from free image products.

The Guyana Forestry Commission has a mature REDD+ program

Monitoring changes in forest cover using satellite images is a problem of worldwide significance, but this research will focus on one specific country: Guyana. This is because the Guyana Forestry Commission has been doing extensive work on deforestation monitoring for many years. Over the last ten years, Guyana has accumulated field and satellite data, and an understanding of the issues, that together provide the foundation for research of this nature. This data has been made available for this research free of charge because of the long-standing partnership between Durham University and the Guyana Forestry Commission and its industry contractors. This means that this research can begin at an advanced level that would not be possible in any other country.

The United Nations Framework Convention on Climate Change has responded to the problem of climate change by, among other things, creating the REDD+ program (Reducing Emissions from Deforestation and Forest Degradation) (REDD+, 2020). This is a global program where developed countries provide results-based financial support to developing countries for the purpose of limiting deforestation. By 2019, 39 countries were participating, and the first REDD+ program to implement results-based payments was the Guyana-Norway REDD+ Agreement (Guyana Forestry Commission, 2020). In the original agreement, Norway agreed to provide up to USD 250 million in results-based payments to help Guyana transition to a low-carbon green development based economy.

A key part of REDD+ programs is the Monitoring, Reporting and Verification System (MRVS). The goal of the MRVS project is to contribute to Guyana’s green development pathway by implementing the MRVS, reporting on the REDD+ Interim Indicators, and streamlining the REDD+ indicators. The Guyana Forestry Commission has approached this as a continuous learning project, and Guyana’s MRVS has been expanded and refined considerably over the last ten years. Payments are tied to the amount of forest that is lost, and no money is provided if deforestation crosses a specified threshold.

Monitoring of forest change is done using manual interpretation of satellite and aerial images. In addition to deforestation, the Guyana Forestry Commission also maps forest degradation, and is one of very few countries to do so. To this end, Guyana has acquired free and commercial image products. The commercial images are very high resolution and also very expensive, but they have been made available for this research at no cost. In addition, because Guyana has the most mature REDD+ program in the world, a wealth of data has already been collected, and the issues of mapping forest change in this area are already well understood. This will serve as the foundation of the present research.


Literature Review

In this research, the data consists of images: grids of pixels that each contain spectral values corresponding to a specific location. Since this research will focus on remote sensing and the analysis of images, the relevant literature to review will come from image classification, sensor fusion, and time series analysis.

Image classification

Mapping forest change is the task of assigning each pixel to a category (forest or non-forest). This is done using statistical learning, a set of tools for understanding data (James et al., 2017). Statistical learning techniques can be supervised or unsupervised. Supervised learning builds a statistical model for predicting or estimating an output based on inputs. The inputs (X1, X2, …, Xn) are called predictors, independent variables, or features. The output (Y) is the response or dependent variable. The output can be quantitative (a numerical value) or qualitative (a class or category).

Supervised learning forms the statistical foundation of supervised image classification of forests. Mapping deforestation is firmly a classification problem: the output consists of two categories, forest and non-forest, with potentially a grey area in between. Forest degradation can be framed as a classification problem, while carbon stock change mapping is a regression problem. In contrast, unsupervised learning has only inputs, with no output. Instead of predicting or estimating an output value, unsupervised learning identifies clusters or other patterns in the inputs.

The first deforestation maps drew sharp lines between forest and non-forest, but it quickly became clear that there is a smooth transition between the two. In the 1980s and 1990s, fuzzy classification and other methods were used to map these ecotones (Clements, 1905; Cannon et al., 1986; Johnston and Bonde, 1989; Wood and Foody, 1989; Quarmby et al., 1992; Foody, 1992; Schaffer, 1993; Foody, 1994; Fortin and Drapeau, 1995; Fortin et al., 2000; Ranson et al., 2004). This approach fell out of favour in the mid-2000s, when statistical learning came to be the dominant approach to image classification (Wang et al., 2018).

The most common statistical learning algorithm used in forest mapping today is Random Forest, which belongs to a group of algorithms called decision trees. Decision tree methods work by stratifying or segmenting the predictor space into multiple simpler regions (James et al., 2017). Each region becomes a category, and the data points inside each region are assigned to that category. Each step of the process is a node, and at each node the dataset is split in two; at the next node, the resulting regions are themselves split in two. Random Forest is one version of the decision tree approach. It differs in that, at each node, only a random subset of the predictors is considered. This prevents one strong predictor from being the only predictor that is ever used. Classical regression models tend to work best when the relationship is well approximated by a linear model, while complex nonlinear relationships might be better represented by a decision tree.
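The random-subset mechanism described above can be illustrated with a minimal sketch in Python using scikit-learn; the "bands" and the forest/non-forest rule here are synthetic stand-ins, not real image data.

```python
# Sketch of the Random Forest idea: at each split only a random subset of the
# predictors (max_features) is considered, so one strong predictor cannot be
# the only predictor that is ever used.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
# Hypothetical "bands": band 0 is a strong predictor, bands 1-3 are weaker.
X = rng.normal(size=(n, 4))
y = (X[:, 0] + 0.3 * X[:, 1] > 0).astype(int)  # mock forest (1) vs non-forest (0)

# max_features="sqrt": each node sees a random subset of the predictors.
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
rf.fit(X, y)

print(rf.score(X, y))                    # training accuracy
print(rf.feature_importances_.round(2))  # band 0 dominates, but not exclusively
```

The `feature_importances_` attribute also gives a first look at the predictive power of each band, a theme that recurs later in the research design.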

The Random Forest algorithm has been used in forest mapping with much success. Souza Jr. et al. (2003) used a combination of IKONOS and SPOT images to map forest degradation in the Amazon rainforest. First, the relationship between ground and satellite scales was investigated using statistical and visual analyses of the field classes in terms of fraction values. Four classes were created: intact forest, logged forest, degraded forest and regeneration. A decision tree classifier was used to define a set of rules for separating the forest classes using the fraction images. The forest degradation map had 86% accuracy, with R2 = 0.97 between the total aboveground biomass and the NPV fraction image. In Hermosilla et al. (2015b), the data set consisted of a set of change metrics associated with each pixel trajectory, and these change objects were classified using Random Forest. In Wang et al. (2018), the classification was done using Random Forest, and a majority filter was used to remove isolated pixels. Two TerraClass masks were then used: the Forest Mask to make a map of disturbed versus intact forest, and the Post-Deforestation Regrowth Forest Mask to separate forests into degraded and regrowth areas.

Neural networks have also been used with great success in remote sensing of forest structure and extent. A neural network consists of neurons, which each hold a numerical value, and weighted connections, which mathematically determine how the value of one neuron influences the activation of others (Sanderson, 2017). Quite recently, Mhatre et al. (2020) found that supervised classification with Convolutional Neural Networks can improve classification accuracy.

In addition to multispectral data, radar data has been used to map forest degradation. Trisasongko (2010) found that P- and L-band SAR were effective, while C-band SAR provided poor results. The data was analyzed using Support Vector Machines with a Radial Basis Function kernel.
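The SVM-with-RBF-kernel approach can be sketched as follows; the two "backscatter" features and the nonlinear class boundary are synthetic, chosen only to show why a nonlinear kernel is needed.

```python
# Sketch of Support Vector Machine classification with a Radial Basis Function
# kernel, on mock backscatter-like features (not real SAR data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))                       # two mock radar features
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(int) # nonlinear class boundary

# The RBF kernel lets the SVM separate classes a linear boundary cannot.
clf = SVC(kernel="rbf", gamma="scale", C=1.0)
clf.fit(X, y)
print(clf.score(X, y))
```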

There have also been comparisons between methods. Zhao et al. (2019) compared four learning algorithms, measuring their relative ability to predict the health of a forest plantation in China using remote sensing data: Classification and Regression Tree (CART), Support Vector Machine (SVM), Artificial Neural Network (ANN), and Random Forest (RF). The study found that Random Forest produced the lowest RMSE and the best R2.

Sun et al. (2019) compared learning algorithms for mapping aboveground biomass (AGB). The data came from the Geoscience Laser Altimeter System (GLAS), optical imagery and field inventory data. The predictor variables were the Normalized Difference Vegetation Index (NDVI), Wide Dynamic Range Vegetation Index (WDRVI0.2), tree cover percentage, and three GLAS-derived parameters. The six algorithms used were partial least squares regression, regression kriging, k-nearest neighbour, support vector machines, random forest and high accuracy surface modelling (HASM). HASM had the best modelling accuracy based on MAE, RMSE, NRMSE, RMSV, NMSE and R2.
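A comparison of this kind can be sketched with scikit-learn: several regressors are cross-validated on the same data and ranked by RMSE and R2. The predictors here are random stand-ins for indices like NDVI, and the "AGB" response is a made-up function of them.

```python
# Sketch of comparing regression algorithms by cross-validated RMSE and R2.
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR
from sklearn.metrics import mean_squared_error, r2_score

rng = np.random.default_rng(2)
X = rng.uniform(size=(300, 3))                              # mock NDVI-like predictors
y = 50 * X[:, 0] + 20 * X[:, 1] ** 2 + rng.normal(0, 2, 300)  # mock AGB response

for name, model in [("kNN", KNeighborsRegressor()),
                    ("SVR", SVR()),
                    ("RF", RandomForestRegressor(random_state=0))]:
    pred = cross_val_predict(model, X, y, cv=5)
    rmse = mean_squared_error(y, pred) ** 0.5
    print(f"{name}: RMSE={rmse:.2f}, R2={r2_score(y, pred):.2f}")
```

The same loop structure extends naturally to the larger sets of algorithms compared in the studies above.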

Some recent research has also improved the capabilities of learning-based approaches by incorporating geostatistical techniques. Chen et al. (2019) combined Random Forests with Ordinary Kriging, and found accuracy improvements.
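The hybrid idea of Chen et al. (2019) can be sketched as: fit a Random Forest, then spatially interpolate its residuals and add them back. True Ordinary Kriging fits a variogram model; simple inverse-distance weighting stands in for it here purely for illustration, and all data below is synthetic.

```python
# Sketch of regression-plus-residual-interpolation: Random Forest captures the
# spectral relationship, and spatial interpolation of its residuals adds back
# spatially structured signal the model missed.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(3)
coords = rng.uniform(0, 10, size=(200, 2))             # mock sample locations
X = rng.normal(size=(200, 3))                          # mock spectral predictors
spatial = np.sin(coords[:, 0]) + np.cos(coords[:, 1])  # spatially structured signal
y = 2 * X[:, 0] + spatial + rng.normal(0, 0.1, 200)

rf = RandomForestRegressor(random_state=0).fit(X, y)
resid = y - rf.predict(X)

def idw(target, pts, vals, power=2):
    """Inverse-distance-weighted residual at one target location
    (a simplified stand-in for Ordinary Kriging)."""
    d = np.linalg.norm(pts - target, axis=1) + 1e-9
    w = d ** -power
    return (w * vals).sum() / w.sum()

# Corrected prediction combines the model term and the interpolated residual.
i = 0
corrected = rf.predict(X[i:i+1])[0] + idw(coords[i], coords, resid)
print(round(corrected - y[i], 3))
```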

This review of image classification methods gives a sense of which algorithms generally work. However, the results of studies done in other regions cannot be applied to Guyana without testing. The performance of a statistical learning algorithm depends on the nature of the dataset and on how the algorithm’s mathematical process responds to the data. Different algorithms handle data in different ways, and thus the best algorithm in one situation may be useless in another. Some are linear, and capture a linear trend in the signal but cannot handle nonlinear trends. Some are nonlinear, and their flexibility captures complex signals but overfits simpler ones. Support vectors and decision trees can behave very differently on different datasets. This research will test a variety of methods and determine which one works best with Guyana forest images. Careful research design will allow the comparison to provide insights into the nature of the ongoing problems and the range of solutions concerning deforestation and especially forest degradation monitoring. This area of application remains essential for effectively managing climate change and environmental degradation, and it remains a lively area of research in the literature.

Sensor fusion

The performance of image classification depends heavily on the amount of information in the data. A higher information dataset can be created by combining the information of multispectral and radar images.

Remote sensing is done with a variety of sensors; the two that appear in this research are multispectral sensors and radar sensors. Multispectral sensors record spectral information: visible and infrared light that comes from the Sun and is reflected by vegetation and other features on the Earth’s surface. One drawback of multispectral images is that they cannot see any feature beneath a closed canopy. Another is that multispectral sensors cannot penetrate clouds, so large gaps can appear in the data when the weather is cloudy. As with many tropical regions, Guyana has frequent cloud cover. This hampers work with optical satellites, and means that radar satellites will bring considerable benefits.

Radar sends microwaves outward, and then records the microwaves that reflect off a surface and return to the sensor. The strength of the return is determined by the surface’s texture and dielectric properties. One advantage of radar images is that they are not affected by cloud cover; another is that some radar configurations can provide structural information in three dimensions. In addition, radar images are useful in forest mapping because they are sensitive to forest structure changes even after the canopy has closed (Milodowski et al., 2017).

Different sensors can provide complementary information. Features that are indistinguishable or invisible to one sensor may be visible to another sensor. Two objects that have the same spectral properties cannot be distinguished in multispectral images. Likewise, two objects that have the same surface texture and dielectric properties cannot be distinguished in radar images. The goal of sensor fusion is to combine images from different sources in a way that increases the quantity and quality of information available. A fused multispectral-radar image has spatial, spectral, textural, and dielectric information in one image. Features not visible in either source image alone can be detected here (Pohl and van Genderen, 1998; Wang et al., 2005; Pandit and Bhiwani, 2015; Ghassemian, 2016; Kulkarni and Rege, 2020).

Sensor fusion uses a mathematical process to combine or replace information, which in this context means spatial and spectral information. A multispectral image’s bands (red, green, blue and infrared) contain both spatial and spectral information, while radar images contain spatial information. One way to combine the spatial information of the two images is Component Substitution, where the spatial and spectral information in the image are separated, the spatial information is combined or replaced, and a new image is created from the result. This can be done by transforming the Red Green Blue (RGB) colour space to the Intensity Hue Saturation (IHS) colour space. The Intensity component contains the spatial information, and the Hue and Saturation components contain the spectral information. The Intensity component of the multispectral image can be replaced with the radar image. Alternatively, a filter can be used to extract high-frequency information from the radar image and low-frequency spatial information from the multispectral image, and the two are then added. Finally, the modified IHS image is transformed back to an RGB image that contains information from both source images.
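Component Substitution can be sketched in numpy using the simple "fast IHS" variant, where intensity is taken as I = (R+G+B)/3 and replacing it amounts to injecting the difference into each band. The arrays here are random stand-ins for co-registered multispectral and radar images; operational pipelines add histogram matching and more careful transforms.

```python
# Sketch of fast-IHS component substitution: swap the multispectral image's
# intensity component for the radar band while preserving band differences
# (hue/saturation information).
import numpy as np

rng = np.random.default_rng(4)
rgb = rng.uniform(0, 1, size=(64, 64, 3))  # mock multispectral RGB image
radar = rng.uniform(0, 1, size=(64, 64))   # mock co-registered SAR band

intensity = rgb.mean(axis=2)               # I = (R + G + B) / 3
# Inject (radar - intensity) into each band: equivalent to replacing I.
fused = rgb + (radar - intensity)[:, :, None]

# The fused image's intensity now equals the radar band.
print(np.allclose(fused.mean(axis=2), radar))  # True
```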

Another way is to use a decomposition to separate the high resolution and low resolution information in an image, replace just one of those, and then do a reverse decomposition to create a fused image. This can be done using mathematical techniques such as pyramids, wavelets, contourlets and curvelets. What they have in common is this: all images have fine information that is visible only at high resolutions (high frequency information) and coarse information that is visible at lower resolutions (low frequency information). An image can be decomposed into high frequency and low frequency images, one of these can be replaced or combined with the radar image, and then an inverse decomposition can be done to create a fused image.
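The decomposition idea can be sketched with a simple blur standing in for a wavelet or pyramid: the blur separates coarse (low-frequency) and fine (high-frequency) information, the fused image keeps the multispectral low frequencies and takes its detail from the radar image, and the addition plays the role of the inverse decomposition. All data below is synthetic.

```python
# Sketch of multiscale-decomposition fusion: keep multispectral low frequencies,
# inject radar high frequencies, then recombine.
import numpy as np

def box_blur(img, k=5):
    """Separable box blur: a crude stand-in for a proper low-pass filter."""
    kernel = np.ones(k) / k
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 0, img)
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, out)

rng = np.random.default_rng(5)
ms = rng.uniform(0, 1, size=(64, 64))     # one mock multispectral band
radar = rng.uniform(0, 1, size=(64, 64))  # mock co-registered radar band

low_ms = box_blur(ms)                     # coarse multispectral information
high_radar = radar - box_blur(radar)      # fine radar detail
fused = low_ms + high_radar               # "inverse decomposition"

print(fused.shape)
```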


An emerging approach to sensor fusion is deep learning. The Convolutional Neural Network architecture used in (Scarpa et al., 2018) works like this:

  1. A generic layer can be described as z = wx + b, where z is the output image stack, x is the input image stack, and w and b are sets of parameters that are learned during training.

  2. In a CNN, w is a tensor with a set of convolutional kernels

  3. At each layer, there is an activation function whose arguments are the input and the parameter set

  4. The training phase is an optimization problem where the minimum of a loss function is calculated

  5. A simple classification problem may require dozens of passes of the training dataset, and a complex problem might require thousands of passes

  6. A large number of training samples of suitable size is required to train a neural network
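The layer described in steps 1-3 above can be sketched directly in numpy for a single 3x3 kernel; the kernel here is random rather than learned, since the sketch shows only the forward pass z = activation(wx + b).

```python
# One convolutional "layer" forward pass: slide a 3x3 kernel over the input,
# add a bias, and apply a ReLU activation.
import numpy as np

def conv_layer(x, w, b):
    """Valid-mode 2D convolution with one kernel, followed by ReLU."""
    h, wd = x.shape
    out = np.zeros((h - 2, wd - 2))
    for i in range(h - 2):
        for j in range(wd - 2):
            out[i, j] = (w * x[i:i+3, j:j+3]).sum() + b
    return np.maximum(out, 0.0)  # ReLU activation

rng = np.random.default_rng(6)
x = rng.normal(size=(8, 8))   # mock input image patch
w = rng.normal(size=(3, 3))   # kernel (random here; learned during training)
z = conv_layer(x, w, b=0.1)
print(z.shape)                # (6, 6)
```

In a real CNN, w is a tensor of many such kernels, and training (step 4) adjusts w and b to minimize the loss function.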

The Conditional Generative Adversarial Network used in Grohnfeldt et al. (2018) works by finding a generative function capable of producing an artificial image that fools a discriminator. A Generative Adversarial Network has two neural networks: one generates images, and the other evaluates them. The generative neural network uses training data to produce an image that the discriminative neural network cannot distinguish from the true data set.

These different approaches emphasize different kinds of information, in a complementary way. Component Substitution creates an image that is rich in spatial information and relatively low in computational complexity, at the cost of spectral distortion. Multiscale Decomposition preserves spectral information, at the cost of spatial distortion and increased computational complexity. Deep Learning offers extreme flexibility, which enables it to handle the most complex data, at the cost of extreme computational complexity. This enables a comparison of methods with clear parameters. Insight into the relative importance of spatial and spectral information in forest monitoring can be gained by comparing Component Substitution methods with Multiscale Decomposition methods. There can also be a comparison of the benefits of more flexible methods relative to their computational complexity. In this research, Deep Learning will be compared with Component Substitution and Multiscale Decomposition to determine whether the extreme flexibility will yield benefits, or whether the benefits are not sufficient to justify the computational resources required.

Time series analysis

Change is not static; it occurs through time. This means that a single image is not sufficient: multiple images must be used to detect change. The work of the Guyana Forestry Commission is to monitor changes in forest cover from one year to the next, which requires the comparison of images from two years. Time series analysis is the use of a series of images of the same location at different times to identify changes through time and study dynamic processes. Current research on time series analysis in remote sensing uses long series of images that can span decades. This analysis will use only two images from two consecutive years, but the method can be generalized to longer-term analyses in future research.

There has been a steady progression of time series methods over the last ten years. An early example is Kennedy et al. (2010), who quantified forest change through time by this process:

  1. Plot the forest cover through time.

  2. Perform linear regression on the plot.

  3. Identify the first vertex by finding the maximum deviation from the regression line.

  4. Segment the data at the vertex, and perform linear regression on the two segments.

  5. Continue the process until the maximum number of segments is reached
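The core of steps 2-4 above can be sketched with numpy: fit a line, find the point of maximum deviation, and split the series there. One split is shown; the full method repeats this until the maximum number of segments is reached. The trajectory is a mock forest-cover series with an abrupt loss, not real data.

```python
# Sketch of one vertex-finding step in trend segmentation: least-squares fit,
# then segment at the point of maximum deviation from the fitted line.
import numpy as np

def max_deviation_vertex(t, y):
    """Fit y ~ t by least squares; return the index of the largest residual."""
    slope, intercept = np.polyfit(t, y, 1)
    resid = np.abs(y - (slope * t + intercept))
    return int(resid.argmax())

t = np.arange(20, dtype=float)
# Mock forest-cover trajectory: stable, then an abrupt loss around year 12.
y = np.where(t < 12, 90.0, 40.0) + np.random.default_rng(7).normal(0, 1, 20)

v = max_deviation_vertex(t, y)
left, right = (t[:v], y[:v]), (t[v:], y[v:])  # segment the data at the vertex
print(v, len(left[0]), len(right[0]))         # vertex lands near the break
```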

Hermosilla et al. (2015a) added change metrics to the approach.

  1. First, breakpoints, segments and trends are defined.

  2. There are four types of trends: no-breakpoint trends, all-positive trends, one-breakpoint trends with a negative slope, and multiple-breakpoint trends with at least one negative slope.

  3. From these, a set of pre-change, change and post-change metrics are calculated.

In the literature, a trajectory is a time series of images. Wang et al. (2018) use not one trajectory but six: two slices of the short wave infrared spectrum, the Normalized Difference Vegetation Index, the Soil Adjusted Vegetation Index, and the Normalized Difference Water Index calculated from two short wave infrared bands. Eleven metrics are associated with each of the six trajectories: Minimum, Maximum, Range, Mean, Standard Deviation, Coefficient of Variation, Skewness, Kurtosis, Slope, Max-slope, and Year-2010 Value. Eleven metrics calculated for six time series creates 66 variables, and using these, changes in forest cover were identified.
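The eleven per-trajectory metrics listed above can be computed directly with numpy and scipy; the trajectory here is a mock annual index series (stable, a drop, then regrowth), not real Guyana data.

```python
# The eleven change metrics of a single trajectory, computed from a mock
# annual vegetation-index series.
import numpy as np
from scipy.stats import skew, kurtosis

years = np.arange(2005, 2016)
ndvi = np.array([0.82, 0.81, 0.80, 0.79, 0.55, 0.50,
                 0.52, 0.56, 0.60, 0.63, 0.65])

slopes = np.diff(ndvi)                      # year-to-year changes
metrics = {
    "min": ndvi.min(), "max": ndvi.max(), "range": np.ptp(ndvi),
    "mean": ndvi.mean(), "std": ndvi.std(),
    "cv": ndvi.std() / ndvi.mean(),         # coefficient of variation
    "skewness": skew(ndvi), "kurtosis": kurtosis(ndvi),
    "slope": np.polyfit(years, ndvi, 1)[0], # overall linear trend
    "max_slope": np.abs(slopes).max(),      # steepest annual change
    "value_2010": ndvi[years == 2010][0],
}
print(len(metrics))  # 11
```

Repeating this for six trajectories yields the 66 variables described above.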

The literature on time series analysis therefore shows a progression of methods that build on those that came previously, rather than a diverse array of different methods. First, the analysis consisted of trends and breakpoints. Then, change metrics were added. Finally, the method was expanded to include multiple time series based on different data. Therefore, this research will be based on the last method, described in Wang et al. (2018).

PhD Plan


This research will make use of the existing data of the Guyana Forestry Commission. To carry out its Monitoring, Reporting and Verification responsibilities, the GFC has acquired high resolution commercial satellite images, along with free images. This data, which is worth a few hundred thousand pounds, was compiled specifically for the work of monitoring forest change, and has been made available for free for this research. A Guyana case study therefore allows this research to make use of a dataset that is already designed for the task at hand, and is not otherwise available. The Landsat and Sentinel data have wall-to-wall coverage, and the data covers a ten-year period. The Guyana Forestry Commission uses four image products (see Table 1 for detailed specifications):

  • Landsat 8 is a free to use satellite image product provided by NASA and the United States Geological Survey. Landsat images are available as far back as 1972. Wall-to-wall coverage is available.

  • Sentinel-2 is a free to use satellite image product provided by the European Space Agency. Sentinel-2 images are available as far back as 2016. Wall-to-wall coverage is available.

  • RapidEye is a commercial satellite image product provided by PlanetLabs. The Guyana Forestry Commission has RapidEye images going back nine years, and they cover selected areas. Note that, as per O’Shea (2020), PlanetLabs has made its data freely available, and the research plan may change to take advantage of that. One example is a potential test of the fusion of radar information with very high resolution multispectral information.

  • GeoVantage is a commercial aerial photography product. Because GeoVantage imagery is collected from aircraft rather than satellites, it is taken below the clouds, so clouds do not obscure parts of the image. The Guyana Forestry Commission has eight years of these images, and they cover selected areas.

Table 1: Spectral information recorded by each sensor.

Landsat 8

  Band, Central Wavelength           Spectral Resolution    Spatial Resolution
  Coastal, 443 nm                    20 nm                  30 m
  Blue, 483 nm                       65 nm                  30 m
  Green, 563 nm                      75 nm                  30 m
  Red, 655 nm                        50 nm                  30 m
  Near Infrared, 865 nm              40 nm                  30 m
  Short Wave Infrared 1, 1610 nm     100 nm                 30 m
  Short Wave Infrared 2, 2200 nm     200 nm                 30 m
  Panchromatic, 590 nm               180 nm                 15 m
  Cirrus, 1375 nm                    30 nm                  30 m
  Long Wavelength IR 1, 10 900 nm    1000 nm                100 m
  Long Wavelength IR 2, 12 000 nm    1000 nm                100 m

Sentinel-2

  Band, Central Wavelength           Spectral Resolution    Spatial Resolution
  Coastal, 443 nm                    21 nm                  60 m
  Blue, 490 nm                       66 nm                  10 m
  Green, 560 nm                      36 nm                  10 m
  Red, 665 nm                        31 nm                  10 m
  Red Edge 1, 705 nm                 16 nm                  20 m
  Red Edge 2, 740 nm                 15 nm                  20 m
  Red Edge 3, 783 nm                 10 nm                  20 m
  Near Infrared, 842 nm              106 nm                 10 m
  Water Vapour, 940 nm               21 nm                  60 m
  Cirrus, 1375 nm                    30 nm                  60 m
  Short Wave IR 1, 1610 nm           94 nm                  20 m
  Short Wave IR 2, 2190 nm           185 nm                 20 m

RapidEye

  Band, Central Wavelength           Spectral Resolution    Spatial Resolution
  Blue, 475 nm                       70 nm                  5 m
  Green, 555 nm                      70 nm                  5 m
  Red, 658 nm                        55 nm                  5 m
  Red Edge, 710 nm                   40 nm                  5 m
  Near Infrared, 805 nm              90 nm                  5 m

GeoVantage

  Band, Central Wavelength           Spectral Resolution    Spatial Resolution
  Blue, 450 nm                       80 nm                  25 cm to 1 m
  Green, 550 nm                      80 nm                  25 cm to 1 m
  Red, 650 nm                        80 nm                  25 cm to 1 m
  Near Infrared, 850 nm              100 nm                 25 cm to 1 m

The research will investigate samples that represent each of the major deforestation drivers: legal logging, illegal logging, agriculture, shifting cultivation, fire, and natural processes. Each of these samples will use four source images: A Sentinel-1 and Sentinel-2 image from Year 1, and a Sentinel-1 and Sentinel-2 image from Year 2. 

Aim of the Research

The goal of this research is to investigate whether, when using image classification to map deforestation and forest degradation, a time series of fused, freely available multispectral and radar images yields results that are equivalent or superior to high resolution commercial image products. Methodological developments in this field are likely to yield results that will contribute to improvements in assessing deforestation and forest degradation with a good level of accuracy using freely available data. This will assist developing countries with limited financial resources to participate more fully in climate change mitigation schemes such as UN REDD+.

Methodology of the Research

The research will primarily focus on comparing a diverse selection of sensor fusion and image classification techniques, following the schedule outlined in Table 2. Sensor fusion techniques to be considered will be drawn from Component Substitution, Multiscale Decomposition, and Deep Learning, and the research will investigate these questions:

  1. Which is more important in this context: spatial information or spectral information? Answering this question, by comparing the results of Component Substitution and Multiscale Decomposition methods, will provide insight into the nature of the data and the best sensor fusion techniques.

  2. Does the flexibility of Deep Learning provide benefits that justify its computational complexity?

Image classification techniques will be compared in six steps, each answering a research question:

  1. The first step is to compare the results of Random Forest and Boosting to determine whether it is better to use all of the predictors or a random subset of the predictors.

  2. The second step is to compare the results of Maximum Margin Classifier and Support Vector Classifier with Support Vector Machine to determine whether the problem is linear or non-linear.

  3. The third step is to compare the results of every method used to identify the best method for mapping deforestation and forest degradation in Guyana. The best method will be used in the next step. Options for this step include: the best decision tree algorithm (Random Forest or Boosted Decision Trees), the best support vector algorithm (Maximum Margin Classifier, Support Vector Classifier, or Support Vector Machine), Logistic Regression, and Discriminant Analysis.

  4. The fourth step is to compare the best supervised classification algorithm with Hierarchical Clustering, an unsupervised algorithm, to test the hypothesis that the same results will emerge when the response is supervising the analysis and when there is no response.

  5. The fifth step will compare the results of Deep Learning with the results of the previous steps, to determine whether or not its flexibility will improve classification results to an extent sufficient to justify the computational resources required. 

  6. The sixth step is to use the best algorithm to compare the classification output of the fused time series with RapidEye and GeoVantage images, to test the hypothesis that fused Sentinel-1 and Sentinel-2 images will be equivalent to commercial image products.
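The comparison in step 4 above (supervised labels versus unsupervised clusters) can be sketched by measuring agreement between a Random Forest classification and Hierarchical Clustering on the same data, here two well-separated synthetic spectral clusters.

```python
# Sketch of comparing a supervised classifier with hierarchical clustering:
# the Adjusted Rand Index measures agreement between the two label sets.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(8)
# Two well-separated mock "forest" / "non-forest" spectral clusters.
X = np.vstack([rng.normal(0, 0.5, (100, 2)), rng.normal(3, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

rf_labels = RandomForestClassifier(random_state=0).fit(X, y).predict(X)
hc_labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)

# ARI near 1 means the unsupervised clusters recover the supervised classes.
print(adjusted_rand_score(rf_labels, hc_labels))
```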

In remote sensing of forests, the inputs are often the bands of the images used as the dataset. Some bands and band ratios are more useful for some problems than others. For example, Vieira et al. (2003) found that a plot of Band 5 vs NDVI provided the best separation between young, intermediate, advanced and mature forests in Pará, Brazil. Therefore, it is important to determine the predictive power of each predictor, and many statistical learning methods are well suited to this.

In all steps of the experiment, quality assessment will be a key step. Different methods of quality assessment provide different information, and they can be complementary. Careful selection of complementary methods can provide detailed information about why each algorithm performed the way it did, what it changed, and what insights that provides into the data and the problem. Careful design here will also determine which metrics work best with which deforestation drivers. Examples of frequently used quality assessment metrics include Mean Absolute Error, Root Mean Square Error, and Entropy. Other potentially useful metrics include Relative Bias, Relative Variance, Correlation Coefficient, Standard Deviation, and Mutual Information.
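The first three metrics named above can be implemented in a few lines of numpy; the reference and test images here are random stand-ins for a reference product and a fused result.

```python
# Minimal implementations of MAE, RMSE, and Shannon entropy of an image
# histogram, for quality assessment of fused or classified images.
import numpy as np

def mae(ref, est):
    """Mean Absolute Error between a reference and an estimate."""
    return np.abs(ref - est).mean()

def rmse(ref, est):
    """Root Mean Square Error between a reference and an estimate."""
    return np.sqrt(((ref - est) ** 2).mean())

def entropy(img, bins=256):
    """Shannon entropy (bits) of an image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    p = hist / hist.sum()
    p = p[p > 0]
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(9)
ref = rng.uniform(0, 1, (32, 32))                          # mock reference image
est = np.clip(ref + rng.normal(0, 0.05, (32, 32)), 0, 1)   # mock fused result
print(round(mae(ref, est), 3), round(rmse(ref, est), 3), round(entropy(ref), 2))
```

Note that MAE is never larger than RMSE, so the gap between the two indicates how much the error is dominated by a few large outliers.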

2: Background

My background begins with my undergraduate education in anthropology and linguistics at the University of Manitoba. My coursework included training in paleoethnobotany, mineralogy, petrology and geochemistry. During this time, I volunteered in the archaeology lab at the Manitoba Museum, where I was trained in artifact analysis. I received field training in excavation and forensic anthropology at an early modern cemetery in Poland, and further training in excavation at a Canadian fur trade fort. This was followed by a master's degree in archaeology at Durham University, where I conducted a preliminary analysis of a collection of ancient South Asian artifacts in the collections of the British Museum.

After this, I gained research experience in a variety of settings. First, I worked as a volunteer in a laboratory, assisting with statistical analysis of data from nutritional research, while at the same time working as an intern in an archive. I then transferred over to a hospital's cardiac surgery team, where I had a hand in building databases of patient data for the nurses and researchers, and assisting the statistician with exploratory data analysis. After this, I worked as a GIS technician at Manitoba Infrastructure, where I carried out flood risk assessment in response to requests for information from other government departments and the general public. This also included flood risk mapping for the National Disaster Mitigation Program, the International Joint Commission, and public open houses. In between that and my PhD, I worked as a research assistant to a University of Manitoba history professor, helping her build web maps using ArcGIS Online.

During my PhD, I have had a hand in several projects. First, I prepared satellite images for use by a documentary team that had been contracted by National Geographic for the documentary Ancient China from Space. Second, I was part of a joint Department of Archaeology-Department of Geography team that used satellite images and neural networks to create high resolution soil maps of the Middle East, for use in environmental archaeology and modern land use management. Most recently, I have assisted Durham County Council with COVID-19 mapping, and helped the Guyana Forestry Commission monitor deforestation.

Research Interests

  • Spatial data analysis using geographic information systems
  • Remote sensing, especially sensor fusion, image classification, and time series analysis
  • Environmental archaeology, especially human-climate interaction before the Industrial Revolution
  • Reforestation and agroforestry science and policy, including land management practices of past cultures and present non-Western cultures
