Stand Mapping

This webpage documents an open-source data science and software engineering effort to produce new tools that segment and label forest conditions using computer vision approaches that can be applied to a variety of imagery and other data sources.

Figure: Forest stand boundaries overlaid on an aerial image.

Since the widespread adoption of Geographic Information Systems (GIS), forest stands have typically been hand-delineated by foresters using high-resolution aerial imagery along with supporting geospatial layers including property boundaries, roads, hydrographic features, and topography.

Motivation

This software is being developed as part of a broader project with the goal of increasing landowner engagement and adoption of stewardship and conservation activities in forests across Oregon and Washington. Most pathways to adopting new practices involve assessing current and historical forest conditions and preparing plans to implement and follow up on new activities. This typically involves mapping and characterizing forest stands and gathering publicly-available information in the form of maps and tables. This work and information can be costly, time-consuming, or difficult to access, limiting the adoption of new stewardship practices by landowners who might otherwise be inclined to do so. We are investigating whether and how a web-based forest mapping and planning toolkit could reduce these barriers.

Figure: Examples of data layers used for training forest models.

We are using a stack of data layers to train new algorithms to delineate land cover “instances” such as forest stands. Our models are based entirely upon free and publicly-available datasets, including high-resolution aerial imagery, leaf-on and leaf-off satellite imagery, roads, hydrologic features, parcel boundaries, disturbance history, and terrain and land cover datasets.
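
As a simple illustration of how co-registered layers like these can be combined into a single model input, the sketch below stacks several rasters into one multi-channel array. The file names are hypothetical, and all layers are assumed to share the same grid, extent, and resolution.

import numpy as np
import rasterio

# Hypothetical co-registered layers sharing one CRS, extent, and pixel grid
LAYER_PATHS = [
    "naip_rgbn.tif",           # 4-band high-resolution aerial imagery
    "sentinel2_leaf_on.tif",   # leaf-on satellite composite
    "sentinel2_leaf_off.tif",  # leaf-off satellite composite
    "dem.tif",                 # terrain
]

bands = []
for path in LAYER_PATHS:
    with rasterio.open(path) as src:
        bands.append(src.read())  # each read returns (bands, height, width)

# Concatenate along the band axis into one (channels, height, width) array
features = np.concatenate(bands, axis=0).astype("float32")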

Concepts & Methods

The technical tasks we focus on in forest mapping involve delineating forest stands and classifying the conditions within them, which we treat as two distinct steps.

The term “stand” is generally used to refer to a forested area ranging in size from a few acres to more than one hundred acres that can be distinguished from adjacent forested areas based on species composition, tree size, density, spatial arrangement, productivity, and/or management history and accessibility.

Generalizing the Stand

In the context of object detection and delineation, the concept of a “stand” can also be generalized to refer to any region of interest that can be distinguished from adjacent regions using available input data. Any objects on the landscape that can be represented with polygons/masks can be the “targets” of segmentation workflows like the ones we adopt. Examples include identifying areas with forest composition or structure that provides suitable habitat for a particular wildlife species, or areas that have been affected by natural or human disturbances of varying intensity, such as harvests, pathogens, or fire.
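
To make this concrete, polygon targets can be “burned” into a label mask that aligns with the input imagery grid. Below is a minimal sketch using rasterio and geopandas; the file names are hypothetical, and the polygons are assumed to be in the same coordinate reference system as the raster.

import geopandas as gpd
import rasterio
from rasterio import features

# Hypothetical inputs: a raster that defines the target grid, and a set of
# delineated polygons (e.g., stands or disturbance patches) in the same CRS
polys = gpd.read_file("stand_polygons.shp")
with rasterio.open("naip_image.tif") as src:
    # Burn each polygon's index + 1 into the mask so that 0 means background
    shapes = ((geom, i + 1) for i, geom in enumerate(polys.geometry))
    mask = features.rasterize(
        shapes,
        out_shape=(src.height, src.width),
        transform=src.transform,
        fill=0,
        dtype="int32",
    )

A label mask produced this way can serve directly as a training target, with each nonzero value identifying one instance.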

Pixels vs. Stands

There has been a significant amount of research and product development related to generating pixel-scale predictions of forest attributes. Our project departs from this approach by recognizing the importance of aggregating the landscape into practical management units, which remain the basis for most forest conservation and management planning.

The fundamental premise of the stand mapping aspect of this project is that human-drawn stand boundaries provide a good starting point for teaching machines to recognize, and attempt to replicate, how human managers delineate forest conditions for practical use.

Stand Delineation Approach

We are currently working on two main segmentation approaches:

  • Multi-stage segmentation pipelines linking together image preprocessing, over-segmentation, region-merging, and boundary post-processing, using tools primarily drawn from scikit-image (a toy pipeline is sketched below)
  • Instance segmentation using a region-based convolutional neural network, Mask R-CNN, adapted from the PyTorch implementation
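
As a rough, illustrative sketch of the first approach (not the project's finished pipeline), the steps can be chained together with scikit-image roughly as follows. Parameter values are placeholders that would need tuning, and in scikit-image versions before 0.20 the graph module lives under skimage.future.graph.

from skimage import filters, segmentation, graph

def segment_image(image):
    """Toy multi-stage pipeline: smooth -> over-segment -> merge regions."""
    # 1. Preprocess: light smoothing to suppress fine within-stand texture
    smoothed = filters.gaussian(image, sigma=1, channel_axis=-1)
    # 2. Over-segment the image into many small regions
    labels = segmentation.felzenszwalb(smoothed, scale=100, sigma=0.5, min_size=50)
    # 3. Build a region adjacency graph and merge neighboring regions whose
    #    mean-color difference falls below the threshold
    rag = graph.rag_mean_color(smoothed, labels)
    merged = graph.cut_threshold(labels, rag, 0.08)
    # 4. Post-process: relabel the merged regions sequentially
    return segmentation.relabel_sequential(merged)[0]

The over-segment-then-merge design deliberately starts with regions much smaller than a stand, so that the merging step, rather than the initial segmentation, determines where stand boundaries fall.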

A significant part of this effort involves constructing a benchmarking dataset that includes several layers of input features as well as the targets: bounding boxes and masks that distinguish major land cover types (water, field, forest, barren, impervious) and the distinct instances of each cover type.
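
For the second approach, this target format maps naturally onto what torchvision's Mask R-CNN implementation expects during training: one dictionary per image containing bounding boxes, class labels, and binary instance masks. A minimal sketch with made-up values, assuming five cover classes plus a background class:

import torch
import torchvision

# Five cover types (water, field, forest, barren, impervious) + background
model = torchvision.models.detection.maskrcnn_resnet50_fpn(num_classes=6)

# One target dict per image: boxes as (x1, y1, x2, y2) pixel coordinates,
# integer class labels, and one binary mask per instance
images = [torch.rand(3, 512, 512)]
masks = torch.zeros((1, 512, 512), dtype=torch.uint8)
masks[0, 40:180, 30:200] = 1  # a single made-up instance
targets = [{
    "boxes": torch.tensor([[30.0, 40.0, 200.0, 180.0]]),
    "labels": torch.tensor([3], dtype=torch.int64),
    "masks": masks,
}]

model.train()
loss_dict = model(images, targets)  # dict of training losses, one per head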

Benchmarking Dataset

To do…

Getting started

This project is under active development. Source code is not yet hosted on a package index such as PyPI or conda-forge. In the meantime, the best way to get up and running is to clone this repo to your local machine and navigate into it:

git clone https://github.com/d-diaz/stand_mapping.git
cd stand_mapping

For a simple install, you can just do:

python setup.py install

If you want to make changes to the source code and have them take effect as you work, use one of the following methods.

python setup.py develop

If you want to use the pip package manager:

pip install -e .

If you use the conda package manager and have the conda-build package installed, you can also use:

conda develop .

Once that is done, you should be able to import any functions or modules from stand_mapping:

>>> from stand_mapping.data.fetch import naip_from_tnm
>>> XMIN, YMIN = 555750, 5266200
>>> WIDTH, HEIGHT = 500, 500
>>> BBOX = (XMIN, YMIN, XMIN+WIDTH, YMIN+HEIGHT)
>>> img = naip_from_tnm(BBOX, res=1, inSR=6339)
>>> img.shape
(500, 500, 4)
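
The returned array here is a 4-band NAIP image (red, green, blue, and near-infrared) at 1 m resolution covering the 500 m × 500 m bounding box; inSR=6339 appears to specify the NAD83(2011) / UTM zone 10N coordinate system (EPSG:6339).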

Dependencies

This project is under active development. Dependencies may change over time.

Documentation Requirements

sphinx
sphinx_rtd_theme

Runtime Requirements

descartes
geopandas
imageio
matplotlib
numpy
pandas
pytorch
rasterio
scikit-image
scikit-learn
torchvision

Recommended for interactive visualization and development

ipywidgets
jupyterlab
seaborn

SARE Project

Under development.

Funding Sources

Work on Stand Mapping is led by David Diaz as part of his Ph.D. research at the University of Washington’s School of Environmental and Forest Sciences and as Director of Forestry Technology & Analytics for the nonprofit organization Ecotrust.

This work has been enabled by financial support from the USDA National Institute of Food and Agriculture through the Western Sustainable Agriculture Research and Education (SARE) program.

The Western SARE Research & Education Project is a collaborative effort co-led by Ecotrust, Wallowa Resources, Northwest Natural Resource Group, and the University of Washington.

"""
This material is based upon work that is supported by the National Institute
of Food and Agriculture, U.S. Department of Agriculture, under award number
G353-20-W7899 through the Western Sustainable Agriculture Research and
Education program under project number SW20-914. USDA is an equal opportunity
employer and service provider. Any opinions, findings, conclusions, or
recommendations expressed in this publication are those of the author(s) and
do not necessarily reflect the view of the U.S. Department of Agriculture.
"""
