
Mr. Matthew Wiecek, M.A.

Research Postgraduate (PhD) in the Department of Geography

Email: matthew.g.wiecek@durham.ac.uk

1: Background

For my PhD research, I am developing an improved method of mapping deforestation and forest degradation in support of Guyana’s deforestation mitigation work. This is part of Guyana’s REDD+ program, which began in 2009. REDD+, or “Reducing Emissions from Deforestation and Forest Degradation”, provides the framework for agreements between two countries, or between the United Nations and a country, where a country receives financial support to keep deforestation below a specified threshold. In this case, Guyana and Norway have an agreement where Norway sends money to Guyana that is tied to the amount of deforestation that is prevented. An essential part of a REDD+ program is Monitoring, Reporting and Verification, or MRV. A country must develop a robust MRV program to continue to receive payments.


Deforestation is when an area of forest is converted to non-forest. Deforestation mapping therefore seeks to detect the boundary between two classes. Accurate deforestation mapping hinges on the algorithm correctly identifying forest, correctly identifying everything that is not forest, and precisely locating the boundary between the two. Images of any resolution can identify large areas of forest and non-forest. High-resolution images can identify small areas of forest loss not visible in lower-resolution images, and can delineate the forest edge more precisely. High-resolution images may also be able to detect the short trees that occur at the leading edge of forest advance.


Forest degradation is when an area of forest remains forest, but biomass is lost and carbon stock is reduced. One challenge in identifying forest degradation is that the changes are so small in area that they do not appear in low- to moderate-resolution images. Another major challenge is that the canopy closes very quickly, while the biomass below remains diminished.


The data for this work comes from multispectral and radar sensors mounted on satellites and airplanes. Multispectral sensors have a set of bands that record different slices of visible and infrared light: typically a band each for blue, green and red light, plus bands for near infrared and shortwave infrared. Multispectral sensors are passive sensors: they record sunlight that reflects off the Earth's surface and the vegetation on it. Different physical properties of the surface determine reflectance in different bands: green, red and near-infrared light respond to chlorophyll activity in vegetation, while shortwave infrared reflects the moisture content of vegetation. Radar is an active sensor: it sends out microwaves that reflect off vegetation and return to the sensor to be recorded. The resulting image is determined by surface texture and the dielectric properties of the material.


There are many examples of multispectral sensors. The Landsat program, operated by NASA and the United States Geological Survey, is a series of satellites that has been operating since 1972. Five of these satellites have been decommissioned, and two are operational. One of the two, Landsat 7, carries a sensor called the Enhanced Thematic Mapper Plus (ETM+). It records blue, green, red, near-infrared, shortwave-infrared, and thermal-infrared light. Landsat 8 has a different sensor: the Operational Land Imager (OLI) records everything that the ETM+ does, and adds a coastal/aerosol band that records deep-blue light scattered by aerosols and suspended sediment. Both the ETM+ and OLI sensors record at 30 meter resolution. Sentinel-2 is a satellite mission operated by the European Union's Copernicus Program and the European Space Agency. It records everything that the Landsat sensors record, though the exact frequency range of each band is slightly different, and its sensor records at up to 10 meter resolution. RapidEye is a satellite constellation, operated by a private company called Planet, whose sensor records blue, green, red, red-edge, and near-infrared light at 5 meter resolution. In addition to satellites, multispectral sensors can be mounted on aircraft for aerial photography. The Guyana Forestry Commission holds GeoVantage aerial photographs, acquired by a company called Aeroptic, recorded at 25 cm to 1 m resolution depending on altitude. Recently, the Guyana Forestry Commission has started using radar images from the Sentinel-1 satellite, which, like Sentinel-2, is part of the Copernicus Program.


Image classification algorithms can identify forests, non-forested areas, and deforestation drivers in these images, based on spatial information (surface texture) and spectral information (colour). The accuracy of an image classification algorithm depends on the information in the image, and the simplest way of increasing information content is increasing the resolution: higher-resolution images allow ever smaller details to be resolved. However, this has limits. There is information that a sensor will not record at any resolution, because of the way electromagnetic radiation of that frequency interacts with matter. Microwaves interact with the surface texture and dielectric properties of materials; shortwave infrared interacts with moisture content; near-infrared, red and green light interact with chlorophyll content. This information is either spatial information, related to texture, or spectral information, related to colour. Multispectral sensors record spectral information, plus spatial information of the kind that interacts with visible and infrared light. Radar sensors record only spatial information of the kind that interacts with microwaves. Therefore, another way of creating higher-information images is to fuse images from two sensors. In forest mapping, multispectral and radar images have been fused with great success, creating one image that has the textural information of a radar image and the textural and colour information of a multispectral image. This research will determine whether a combined multispectral-radar image increases the accuracy of deforestation and forest degradation monitoring, compared with unmodified multispectral and radar images.

2: Current Research

The objective of this research is to develop a method of mapping and quantifying deforestation and forest degradation in the context of Guyana’s REDD+ program. This will be done by:


1. Creating fused multispectral radar images,
2. Using Google Earth Engine’s built-in classification algorithms to map deforestation and forest degradation,
3. Using confusion matrices to identify the most accurate method, and
4. Comparing the accuracy of the best-performing dataset and method with the accuracy of existing work.


Google Earth Engine is a platform for analyzing satellite images. It has two sides: the client side and the server side. The server side consists of Google's computing infrastructure, designed to store satellite image archives and analyze them in seconds. The client side is a web interface that allows the user to run analyses on Google's servers from any computer in the world. Google Earth Engine has the following datasets ready for use (a loading sketch follows the list):


1. Sentinel-1 radar images. These provide C-band radar information at 5x20 meter resolution.
2. Sentinel-2 multispectral images. Sentinel-2 provides visible light and near-infrared information at 10 meter resolution, and red-edge and shortwave-infrared information at 20 meter resolution. The Surface Reflectance product also includes bands with cloud cover information, which can be used to mask out clouds and make a cloud-free composite.
3. Landsat 7 Enhanced Thematic Mapper Plus (ETM+) multispectral images. The ETM+ is a sensor on Landsat 7 that provides visible light and infrared information at 30 meter resolution. Green, red and near-infrared bands can be pan-sharpened to 15 meter resolution. The Surface Reflectance product also includes bands with cloud cover information, which can be used to mask out clouds and make a cloud-free composite.
4. Landsat 8 Operational Land Imager (OLI) multispectral images. The OLI is a sensor on Landsat 8 that provides visible light and infrared information at 30 meter resolution. Blue, green and red bands can be pan-sharpened to 15 meters. The Surface Reflectance product also includes bands with cloud cover information, which can be used to mask out clouds and make a cloud-free composite.
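
These collections can be loaded in the Code Editor with a few lines of JavaScript. The sketch below is illustrative only: the asset IDs reflect the collection versions available at the time of writing and may change, and the point of interest is a hypothetical location in Guyana.

    // Load the four ready-to-use collections (GEE JavaScript).
    // Asset IDs and the area of interest are assumptions.
    var aoi = ee.Geometry.Point([-58.9, 4.8]);  // hypothetical point in Guyana

    var s1 = ee.ImageCollection('COPERNICUS/S1_GRD')
        .filterBounds(aoi)
        .filterDate('2019-01-01', '2020-01-01')
        .filter(ee.Filter.eq('instrumentMode', 'IW'));

    var s2 = ee.ImageCollection('COPERNICUS/S2_SR')
        .filterBounds(aoi)
        .filterDate('2019-01-01', '2020-01-01');

    var l7 = ee.ImageCollection('LANDSAT/LE07/C01/T1_SR').filterBounds(aoi);
    var l8 = ee.ImageCollection('LANDSAT/LC08/C01/T1_SR').filterBounds(aoi);

    // Everything above is a server-side object: nothing is computed until
    // a result is requested from the client, e.g.:
    print('Sentinel-2 scenes:', s2.size());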


This allows for a comparison between two multispectral products at different resolutions (10 and 30 meters), with one radar product common to both. The Landsat data has low resolution compared with other multispectral products, which would limit its usefulness in deforestation mapping and especially in forest degradation mapping.

Acquiring and using higher-resolution radar image products would allow the study to measure the effect of radar resolution on classification accuracy. There are two radar image products that would enable this:


1. RADARSAT. This provides C-band radar information at 1x3 meter resolution. RADARSAT data is provided by the Canadian Space Agency.
2. TerraSAR-X. This provides X-band radar information at up to 1 meter resolution. TerraSAR-X is also used to generate WorldDEM, a worldwide digital elevation model with 12 meter resolution. It is available for purchase from Airbus.


Radar sensors do not record spectral information at any resolution, but high-resolution multispectral sensors will provide spectral information relating to chlorophyll activity and moisture content at higher resolution.

Acquiring high-resolution multispectral image products would allow the analysis to determine the benefits of fine spectral details in identifying deforestation and forest degradation. There are three multispectral image products that would enable this:


1. RapidEye. This provides visible light, red edge and near infrared information at 5 meter resolution.
2. PlanetScope. This provides visible light and near infrared information at 3.7 meter resolution.
3. SPOT. This provides visible light information at 6 meter resolution (1.5 meter pan-sharpened) and near infrared information at 6 meter resolution.


In addition, half-meter multispectral sensors, such as SkySat and Pléiades, would provide reference and validation data in areas that are inaccessible to aerial photography.


If wall-to-wall coverage from SkySat and Pléiades is available, they will also be useful for deforestation and forest degradation mapping themselves. If incorporated into the analysis, they could test the hypothesis that half-meter resolution enables the detection of new trees at the leading edge of forest advance that are not visible in lower-resolution image products.


In forest degradation mapping, they would have the resolution needed to identify the smallest signs of disturbance not visible in lower-resolution images. This is critical once the canopy has closed over and most signs of disturbance are hidden.

Once the dataset has been finalized, the analysis will work like this:


1. Clouds will be masked out of the multispectral images, and a cloud-free composite will be created.
2. Multispectral and radar images will be fused using GEE's .rgbToHsv() method (Figure 1 below; a code sketch follows this list). The radar image is a single-band image, and the multispectral image is a three-band image in RGB format. The RGB-to-HSV transformation separates the spatial and spectral information, sending the spatial (brightness) information into the Value channel and the spectral information into the Hue and Saturation channels. The Value channel will be replaced with the radar image, and the Hue and Saturation channels will be left unchanged. The result will then be converted back to an RGB image that has the texture information of the radar image and the colour information of the multispectral image.
3. Training data will be created from the fused images.
4. Validation samples will be generated from the reference images. GeoVantage aerial photography is already available courtesy of the Guyana Forestry Commission. If available, SkySat or Pléiades images can be used in areas not accessible to aerial photography.
5. Deforestation mapping will be done using three classes (water, forest, non-forest land). Forest degradation mapping will be done using classes for water, intact forest, grassland, agriculture, shifting agriculture, timber harvesting, illegal logging, fire, mining, and natural changes in forest cover. This will be done using each classification and clustering algorithm in Google Earth Engine. Each algorithm will be run on the fused image and the source images, so that accuracy can be compared later.
6. For each analysis, the training and validation samples will be used to generate a confusion matrix, which measures the accuracy of the analysis (see the sketch after Figure 1).
7. The most accurate fused image and algorithm will be used as a new method of monitoring deforestation and forest degradation for Guyana's REDD+ program.
8. The standard deviation of the source images and the fused image will be measured and compared to determine the change in the amount of information due to fusion. Error measures will be used to measure the accuracy of the fusion process, relative to the GeoVantage aerial photography. This will provide insight into how this sensor fusion process works, what effect it has on information content, and why it produced the results that it did.
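
As a rough illustration of steps 1 and 2, the sketch below masks clouds with Sentinel-2's QA60 bitmask, builds a median composite, and swaps a rescaled Sentinel-1 VV band into the Value channel. The band choices, date range, backscatter scaling range and area of interest are all assumptions, not the final method.

    // Steps 1-2 sketched in GEE JavaScript, reusing the aoi, s1 and s2
    // collections from the earlier sketch.

    // 1. Mask clouds using the QA60 bitmask, then build a median composite.
    function maskS2clouds(image) {
      var qa = image.select('QA60');
      var cloudBit = 1 << 10;   // opaque clouds
      var cirrusBit = 1 << 11;  // cirrus
      var mask = qa.bitwiseAnd(cloudBit).eq(0)
          .and(qa.bitwiseAnd(cirrusBit).eq(0));
      return image.updateMask(mask).divide(10000);  // reflectance to [0, 1]
    }
    var composite = s2.map(maskS2clouds).median();

    // A Sentinel-1 VV backscatter composite, rescaled to [0, 1] so it can
    // stand in for the Value channel (the -25 to 0 dB range is an assumption).
    var radar = s1.select('VV').median().unitScale(-25, 0);

    // 2. RGB -> HSV, replace the Value channel with radar, convert back.
    var hsv = composite.select(['B4', 'B3', 'B2']).rgbToHsv();
    var fused = ee.Image.cat([
      hsv.select('hue'),
      hsv.select('saturation'),
      radar.rename('value')
    ]).hsvToRgb();
    Map.addLayer(fused, {min: 0, max: 1}, 'fused S1/S2');

Replacing only the Value channel is the standard HSV-substitution approach: it keeps the multispectral colour intact while the radar texture drives the brightness.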

Figure 1: The image fusion process in Google Earth Engine.
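
Steps 5, 6 and 8 might then look like the sketch below, which trains one of GEE's built-in classifiers (a random forest) on the fused image from the previous sketch. The trainingPolygons collection, the 'class' property, the 70/30 split and the 5 km analysis window are hypothetical placeholders.

    // Steps 5, 6 and 8 sketched in GEE JavaScript. 'fused' and 'aoi' come
    // from the earlier sketches; 'trainingPolygons' is a hypothetical
    // FeatureCollection whose 'class' property holds 0 = water, 1 = forest,
    // 2 = non-forest land.
    var samples = fused.sampleRegions({
      collection: trainingPolygons,
      properties: ['class'],
      scale: 10
    }).randomColumn();
    var training = samples.filter(ee.Filter.lt('random', 0.7));
    var validation = samples.filter(ee.Filter.gte('random', 0.7));

    // 5. Train a random forest and classify the fused image.
    var classifier = ee.Classifier.smileRandomForest(100).train({
      features: training,
      classProperty: 'class',
      inputProperties: fused.bandNames()
    });
    var classified = fused.classify(classifier);

    // 6. Confusion matrix from the held-out validation samples.
    var matrix = validation.classify(classifier)
        .errorMatrix('class', 'classification');
    print('Overall accuracy:', matrix.accuracy());
    print('Kappa:', matrix.kappa());

    // 8. Per-band standard deviation, as one measure of information content.
    print('Fused std. dev.:', fused.reduceRegion({
      reducer: ee.Reducer.stdDev(),
      geometry: aoi.buffer(5000),  // hypothetical 5 km analysis window
      scale: 10,
      maxPixels: 1e9
    }));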

3: Next Steps

Once that study is complete, the next step is to create a time series of fused images and use it to track deforestation and forest degradation over the long term. Google Earth Engine already has the following image products:

Table 1: Date ranges of image products in GEE

Program      Sensor                           Date Range
Landsat      Multispectral Scanner            1972 to 2013
Landsat      Thematic Mapper                  1982 to 2013
Landsat      Enhanced Thematic Mapper Plus    1999 to present
Landsat      Operational Land Imager          2013 to present
Copernicus   Sentinel-1                       2014 to present
Copernicus   Sentinel-2                       2015 to present

With the four products that are still operating (ETM+, OLI, Sentinel-1 and Sentinel-2), it is possible to create a time series of fused Sentinel-1/Sentinel-2 images from 2015 to the present, and a time series of fused Sentinel-1/Landsat images from 2014 to the present. These other image products, if acquired, would provide data over a longer time period:

Table 2: Date ranges and resolutions of relevant image products

Type           Sensor                                    Date Range       Resolution
Multispectral  Landsat Multispectral Scanner             1972 to 2013     57 m
Multispectral  Landsat Thematic Mapper                   1982 to 2013     30 m
Multispectral  SPOT 1, 2, 3                              1986 to 2009     10 m
Radar          European Remote Sensing Satellite (ERS)   1991 to 2011     25 m
Radar          RADARSAT                                  1995 to present  1x3 m
Multispectral  SPOT 4                                    1998 to 2013     10 m
Multispectral  Landsat Enhanced Thematic Mapper Plus     1999 to present  30 m
Multispectral  SPOT 5                                    2002 to 2015     2.5 m
Radar          TerraSAR-X                                2007 to present  1 m
Multispectral  RapidEye                                  2008 to 2020     5 m
Multispectral  Pléiades                                  2011 to present  0.5 m
Multispectral  SPOT 6, 7                                 2012 to present  1.5 m
Multispectral  Landsat Operational Land Imager           2013 to present  30 m
Multispectral  SkySat                                    2013 to present  0.5 m
Radar          Sentinel-1                                2014 to present  5x20 m
Multispectral  Sentinel-2                                2015 to present  10 m
Multispectral  PlanetScope                               2016 to present  3.7 m

This selection will allow the creation of a time series of fused radar-multispectral images that goes back thirty years, to 1991 (the year ERS-1 launched). The Landsat and SPOT programs will allow the time series to be extended almost another twenty years back, though with multispectral information only: SPOT provides 10 meter resolution data back to 1986, Landsat TM provides 30 meter resolution data back to 1982, and Landsat MSS provides 57 meter resolution data back to 1972. Multiple series of images will be created according to these principles:


1. Each time series will have images from two sensors, one multispectral and one radar. Sensors will not be mixed, so that resolution, uncertainty, and similar properties stay consistent across the series.
2. From 1991 to the present, each time series will consist of the following fused band combinations (a per-year sketch follows this list):
a. Radar + Near Infrared, Red, Green (for chlorophyll activity)
b. Radar + Shortwave Infrared 1, Shortwave Infrared 2, Near Infrared (for moisture content)
3. From 1972 to 1991, the unfused multispectral counterparts of the above will be used.
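
For the Sentinel-1/Sentinel-2 portion of the series, the per-year fusion might be assembled as in the sketch below, shown for the radar + near infrared/red/green combination (2a). The fuseYear helper is hypothetical, and it reuses the aoi geometry and maskS2clouds function from the earlier sketches.

    // Hypothetical helper: one fused radar + NIR/red/green image per year.
    // Sentinel-2 band names: B8 = near infrared, B4 = red, B3 = green.
    function fuseYear(year) {
      var start = ee.Date.fromYMD(year, 1, 1);
      var end = start.advance(1, 'year');
      var optical = ee.ImageCollection('COPERNICUS/S2_SR')
          .filterBounds(aoi).filterDate(start, end)
          .map(maskS2clouds).median();
      var radar = ee.ImageCollection('COPERNICUS/S1_GRD')
          .filterBounds(aoi).filterDate(start, end)
          .select('VV').median().unitScale(-25, 0);
      var hsv = optical.select(['B8', 'B4', 'B3']).rgbToHsv();
      return ee.Image.cat([
        hsv.select('hue'),
        hsv.select('saturation'),
        radar.rename('value')
      ]).hsvToRgb().set('year', year);
    }

    // One fused image per year from 2015 onward.
    var years = ee.List.sequence(2015, 2021);
    var series = ee.ImageCollection.fromImages(years.map(function(y) {
      return fuseYear(y);
    }));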


This is applicable to regional climate change forecasts in Amazonia. Forests are currently a major source of uncertainty in climate change forecasting (Asuka Suzuki-Parker, pers. comm.). If successful, this research will quantify deforestation and forest degradation through time with greater accuracy and precision than before. A natural follow-up study would be to run climate change forecasts using the improved dataset, and compare the results with forecasts that used previous data.


4: About Me

My background begins with my undergraduate education in anthropology and linguistics at the University of Manitoba. My coursework included training in paleoethnobotany, mineralogy, petrology and geochemistry. During this time, I volunteered in the archaeology lab at the Manitoba Museum, where I was trained in artifact analysis. I received field training in excavation and forensic anthropology at an early modern cemetery in Poland, and further training in excavation at a Canadian fur trade fort. This was followed by a master's degree in archaeology at Durham University, where I conducted a preliminary analysis of a collection of ancient South Asian artifacts in the collections of the British Museum.

After this, I gained research experience in a variety of settings. First, I worked as a volunteer in a laboratory, assisting with statistical analysis of data from nutritional research, while at the same time working as an intern in an archive. I then transferred over to a hospital's cardiac surgery team, where I had a hand in building databases of patient data for the nurses and researchers, and assisting the statistician with exploratory data analysis. After this, I worked as a GIS technician at Manitoba Infrastructure, where I carried out flood risk assessment in response to requests for information from other government departments and the general public. This also included flood risk mapping for the National Disaster Mitigation Program, the International Joint Commission, and public open houses. In between that and my PhD, I worked as a research assistant to a University of Manitoba history professor, helping her build web maps using ArcGIS Online.

During my PhD, I have had a hand in several projects. First, I prepared satellite images for use by a documentary team that had been contracted by National Geographic for the documentary Ancient China from Space. Second, I was part of a joint Department of Archaeology-Department of Geography team that used satellite images and neural networks to create high resolution soil maps of the Middle East, for use in environmental archaeology and modern land use management. Most recently, I have assisted Durham County Council with COVID-19 mapping, and helped the Guyana Forestry Commission monitor deforestation.

Research Interests

  • Spatial data analysis using geographic information systems
  • Remote sensing, especially sensor fusion, image classification, and time series analysis
  • Environmental archaeology, especially human-climate interaction before the Industrial Revolution
  • Reforestation and agroforestry science and policy, including land management practices of past cultures and present non-Western cultures



Teaching Areas

  • Handling Geographic Information Practicals (120 hours/year)
