Neural Network Models for Prediction of Deforestation: A Survey

Deforestation is one of the most pressing environmental problems of our time: it is among the most serious threats to ecosystems and one of the main drivers of green-cover change. This paper surveys methods used for the identification and prediction of deforestation. Over the years, numerous methods have been developed for this purpose; however, each is restricted to a specific area. In this survey, we describe several such techniques and compare their identification and prediction results.

Deforestation is one of the most concerning problems and must be addressed urgently with an efficient solution, as it drives biodiversity loss, habitat destruction, and climate change, causing massive damage to natural systems. Despite its negative impact, most nations do not keep detailed statistics on the extent of deforestation. We hope to tackle this problem by using satellite data to track deforestation and to help researchers better understand where, how, and why deforestation happens and how to respond to it.

There have been many developments in satellite imagery technology, which have made deforestation detection faster, more convenient, and more accurate than ever before. A real-time deforestation detection system implemented in Brazil has helped reduce the deforestation rate by almost 80% since 2004 by alerting environmental officials to large-scale forest clearing. Current tracking efforts within rainforests largely depend on coarse-resolution imagery from Landsat (30-meter pixels) or MODIS (250-meter pixels). These methods struggle to detect small-scale deforestation and to differentiate human causes of forest loss from natural ones.

Planet, a designer and builder of Earth-imaging satellites, provides a labeled land-surface dataset at 3-5-meter resolution. This enables modern deep learning techniques to identify the activities visible in the images. Using imagery from multiple sources, such as Google Earth Engine and Planet, neural network models can be built to detect and predict deforestation.

The authors Eric Xu and Orien Zeng from Stanford University describe a model built on data from Planet, a designer and builder of Earth-imaging satellites that maintains a labeled land-surface dataset at 3-5-meter resolution. They collected 256×256 JPEG images containing four data channels: red, green, blue, and near-infrared. Each training image was labeled with a subset of seventeen labels, organized into atmospheric conditions, common land cover, and rare land cover. The atmospheric condition labels were cloudy, partly cloudy, haze, and clear. The common land labels were primary, water, road, agriculture, cultivation, bare ground, and habitation. The rare land labels were artisanal mine, blooming, blowdown, conventional mine, selective logging, and slash burn.[1]
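
The sketch below, which is not the authors' code, shows one way such a seventeen-label scene-tagging dataset can be multi-hot encoded for training. The label strings follow the paper's wording in snake_case and are illustrative; a real labels table would be loaded from file rather than built inline.

```python
# Hedged sketch: multi-hot encoding of the seventeen scene labels.
# Label names are taken from the paper's wording and may differ from the
# exact dataset release; the toy rows stand in for a real labels file.
import numpy as np
import pandas as pd

LABELS = [
    # atmospheric conditions
    "cloudy", "partly_cloudy", "haze", "clear",
    # common land cover / land use
    "primary", "water", "road", "agriculture",
    "cultivation", "bare_ground", "habitation",
    # rare labels
    "artisanal_mine", "blooming", "blowdown",
    "conventional_mine", "selective_logging", "slash_burn",
]
LABEL_INDEX = {name: i for i, name in enumerate(LABELS)}

def encode_tags(tag_string: str) -> np.ndarray:
    """Turn 'clear primary road' into a 17-dimensional multi-hot vector."""
    y = np.zeros(len(LABELS), dtype=np.float32)
    for tag in tag_string.split():
        y[LABEL_INDEX[tag]] = 1.0
    return y

# Toy stand-in for the real training-label table (image name, space-separated tags).
df = pd.DataFrame({
    "image_name": ["train_0", "train_1", "train_2"],
    "tags": ["clear primary", "haze primary agriculture road", "cloudy"],
})
Y = np.stack(df["tags"].map(encode_tags).to_list())  # shape: (n_images, 17)
print(Y.shape, Y.sum(axis=0))                        # per-label counts
```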

Vahid Ahmadi of Tarbiat Modares University, in his research on predicting deforestation, modeled green-cover change using an artificial neural network, which gives strong results for nonlinear, complex problems. The procedure involves image processing, classifying the images with various algorithms, preparing maps of deforested regions, determining the layers used for model training, and designing a multi-layer neural network to predict deforestation. The satellite images for this study covered an area in Hong Kong and were captured from 2012 to 2016. The results show that the neural network approach can be used to predict deforestation, and its outputs highlight the areas of forest that were lost during the study period.[2]
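
As a minimal sketch of this kind of workflow (not the cited study's implementation), the example below trains a small multi-layer perceptron on per-pixel layers such as forest cover at two dates, proximity to urban areas, and elevation. The feature layers, the synthetic data, and all hyperparameters are assumptions for illustration only.

```python
# Hedged sketch of a per-pixel multi-layer perceptron for deforestation
# prediction. Features and hyperparameters are illustrative assumptions,
# not the settings used in the cited study.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels = 5000

# Stand-in feature matrix: [forest_t0, forest_t1, dist_to_urban_km, elevation_m]
X = np.column_stack([
    rng.integers(0, 2, n_pixels),      # forest cover, earlier date
    rng.integers(0, 2, n_pixels),      # forest cover, later date
    rng.uniform(0, 30, n_pixels),      # proximity to urban areas
    rng.uniform(0, 500, n_pixels),     # elevation
])
y = rng.integers(0, 2, n_pixels)       # 1 = deforested in the next period

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

scaler = StandardScaler().fit(X_train)
mlp = MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=500, random_state=0)
mlp.fit(scaler.transform(X_train), y_train)
print("validation accuracy:", mlp.score(scaler.transform(X_val), y_val))
```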

The author developed a multi-layer perceptron neural network to predict the areas most vulnerable to future deforestation, based on the anthropogenic transformation of forest in southern Belize, using the variables that most strongly affect deforestation. These variables were acquired through remote sensing techniques. In this research, vulnerable areas were defined as regions susceptible to forest loss due to human activities such as agriculture.[3]

In this study, the authors Pablo Pozzobon de Bem, Osmar Abílio de Carvalho Junior, Renato Fontes Guimarães, and Roberto Arnaldo Trancoso Gomes selected three areas of the Brazilian Amazon. Two-thirds of the data were used for training, and the rest were used for validation. Landsat 8 imagery for the years 2017, 2018, and 2019 was obtained, and a bi-temporal modeling approach was used. Multitemporal images from similar periods of the year reduce variation in phenology and in sun-terrain-sensor geometry. To reduce the noise content in the images, they were collected from the dry season only.[4]
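
A minimal sketch of a bi-temporal input stack and a two-thirds/one-third split is shown below. Real Landsat 8 bands would be read with a raster library such as rasterio; random arrays stand in here, and the band count and tile size are assumptions.

```python
# Hedged sketch of a bi-temporal input stack for change detection.
# Band count, tile size, and the random data are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n_tiles, bands, h, w = 300, 6, 128, 128

year_t  = rng.random((n_tiles, bands, h, w), dtype=np.float32)  # e.g. 2017
year_t1 = rng.random((n_tiles, bands, h, w), dtype=np.float32)  # e.g. 2018

# Concatenate the two dates along the channel axis: each sample carries the
# "before" and "after" spectra, so a model can learn the change directly.
X = np.concatenate([year_t, year_t1], axis=1)   # (n_tiles, 2*bands, h, w)

# Two-thirds of the tiles for training, the remaining third for validation.
split = int(2 * n_tiles / 3)
X_train, X_val = X[:split], X[split:]
print(X_train.shape, X_val.shape)
```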

Aleksandr Lukoshkin, in his study, used two different sites, Juuka and Karttula. The dataset comprises LiDAR measurements, aerial photographs, and forest parameters, contributing 38 variables, two feature variables, and four target variables, respectively. The variables derived from the LiDAR measurements include the cumulative percentiles of first- and last-pulse heights of non-ground hits, the percentile intensities of first- and last-pulse intensities of non-ground hits, the mean of first-pulse heights above 5 m, the standard deviation (SD) of first-pulse heights, and the number of first- and last-pulse measurements below 2 m divided by the total number of such measurements in each plot. The features derived from the aerial images are the percentages of all pixels in a plot's image classified as hardwood (Hwd) and coniferous (Cnf) trees; a human interpreter performed this classification. The target variables of the forest stand dataset consist of Vt, the total volume, and the species-specific volumes V1 (Scots pine), V2 (Norway spruce), and V3 (hardwoods, treated as a group but mostly comprising birch).[5]
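
The sketch below illustrates plot-level LiDAR features of the kind described above: height percentiles of non-ground first-pulse returns, the mean of returns above 5 m, their standard deviation, and the fraction of returns below 2 m. The 2 m and 5 m thresholds follow the text; the simulated heights and the chosen percentile levels are placeholders, not values from the cited study.

```python
# Hedged sketch of plot-level LiDAR height features (illustrative only).
import numpy as np

def plot_features(heights: np.ndarray) -> dict:
    """heights: above-ground heights (m) of non-ground first-pulse returns."""
    percentiles = np.percentile(heights, [10, 30, 50, 70, 90])
    tall = heights[heights > 5.0]
    return {
        "h_percentiles": percentiles,                      # cumulative height percentiles
        "mean_gt5m": tall.mean() if tall.size else 0.0,    # mean of returns > 5 m
        "sd_height": heights.std(),                        # SD of first-pulse heights
        "frac_lt2m": float(np.mean(heights < 2.0)),        # share of low returns
    }

rng = np.random.default_rng(0)
print(plot_features(rng.gamma(shape=2.0, scale=6.0, size=2000)))
```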

Their model predicts the labels of over 60,000 JPEG images in the test set used for the final evaluation. The score reported by the authors covers only half of the test set; the score for the other half was hidden. The model is evaluated with an F2 score, in which recall (the ratio of true positives to all actual positives) is weighted higher than precision (the ratio of true positives to all predicted positives). The final F2 score is formed by averaging the individual F2 scores of each label. They conclude that VGGNet performs best at analyzing the satellite images.[1]
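
The F-beta score with beta = 2 is F2 = 5·P·R / (4·P + R), so recall counts four times as much as precision. The sketch below (not the authors' evaluation script) computes per-label F2 scores for a multi-label prediction and averages them as described above; the toy arrays stand in for real predictions.

```python
# Hedged sketch of the per-label F2 evaluation (beta = 2 weights recall
# higher than precision); random 0/1 arrays stand in for real predictions.
import numpy as np
from sklearn.metrics import fbeta_score

rng = np.random.default_rng(0)
n_images, n_labels = 1000, 17
y_true = rng.integers(0, 2, (n_images, n_labels))
y_pred = rng.integers(0, 2, (n_images, n_labels))

per_label_f2 = fbeta_score(y_true, y_pred, beta=2, average=None, zero_division=0)
print("per-label F2:", np.round(per_label_f2, 3))
print("mean F2:", per_label_f2.mean())
```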

The area studied by the author is a forest in Hong Kong, China. The Georeferencing extension was used to retrieve the images in the ArcGIS environment. The images were classified into forest, sea, and urban classes, assigning each pixel to the most similar of these features, using ERDAS software, and then loaded back into ArcGIS. The initial inputs for the network were the two forest-cover-index layers. The next input layer was proximity to nearby cities, calculated from two classes, urban and non-urban, and the last input was height. The model was trained on these data, and the author reports a precision of 98% at network convergence.[2]
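
One plausible way to derive such a proximity-to-cities layer, sketched below under assumption (the cited study does not describe its exact procedure), is a Euclidean distance transform over a binary urban/non-urban raster. The toy raster and the 30 m pixel size are illustrative.

```python
# Hedged sketch of a "proximity to nearby cities" layer via a distance
# transform; the raster and pixel size are assumptions for illustration.
import numpy as np
from scipy.ndimage import distance_transform_edt

urban = np.zeros((200, 200), dtype=bool)
urban[50:60, 50:60] = True        # a stand-in urban patch
urban[150:170, 120:140] = True

# distance_transform_edt measures distance to the nearest zero, so the mask
# is inverted: each non-urban pixel gets its distance (in pixels) to the
# closest urban pixel.
dist_pixels = distance_transform_edt(~urban)
dist_metres = dist_pixels * 30.0  # assumed pixel size
print(dist_metres.min(), dist_metres.max())
```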

The author collected Landsat 8 Operational Land Imager (OLI) data via Google Earth Engine for the years 2014, 2016, and 2017 and mapped forest cover using supervised classification. To enhance the features of the imagery, false-color composites and normalized difference vegetation indices (NDVI) were computed. They performed supervised classification of the Landsat imagery and used a stratified random sampling design to split the data into training and validation sets. The classification achieved overall accuracies of 88%, 94%, and 95% for the years 2014, 2016, and 2017, respectively. Class-specific producer's and user's accuracies ranged from 86% to 96%.[3]
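
For reference, NDVI follows the standard formula NDVI = (NIR − Red) / (NIR + Red). The sketch below computes it with random reflectance arrays standing in for Landsat 8 OLI band 5 (NIR) and band 4 (Red); the exact preprocessing used in the cited study is not reproduced here.

```python
# Hedged sketch of the NDVI computation; random reflectances replace
# real Landsat 8 OLI bands 4 (red) and 5 (near-infrared).
import numpy as np

rng = np.random.default_rng(0)
red = rng.uniform(0.01, 0.4, (256, 256)).astype(np.float32)   # band 4 reflectance
nir = rng.uniform(0.05, 0.6, (256, 256)).astype(np.float32)   # band 5 reflectance

ndvi = (nir - red) / np.clip(nir + red, 1e-6, None)  # guard against division by zero
print("NDVI range:", ndvi.min(), ndvi.max())
```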

The authors added two classical machine learning algorithms, a random forest and a simple multilayer perceptron, in order to assess the DL models. Landsat 8 imagery for the years 2017, 2018, and 2019 was obtained, and a bi-temporal modeling approach was used. To reduce the noise content in the images, they were collected from the dry season only. The ResUnet model had the best results, except for the precision score for 2017-2018, where the SharpMask and U-Net models provided similar but slightly better results.[4]
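
The sketch below shows the kind of classical baseline comparison named above: a random forest and a simple multilayer perceptron trained on per-pixel bi-temporal features and scored side by side. The features, synthetic data, and hyperparameters are placeholders, not those of the cited study.

```python
# Hedged sketch of classical baselines (random forest, MLP) on per-pixel
# bi-temporal features; data and settings are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

rng = np.random.default_rng(0)
X = rng.random((4000, 12))               # e.g. 6 bands x 2 dates per pixel
y = rng.integers(0, 2, 4000)             # 1 = deforested between the dates

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=1/3, random_state=0)

for name, model in [
    ("random forest", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("MLP", MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)),
]:
    model.fit(X_tr, y_tr)
    print(name, "F1:", round(f1_score(y_va, model.predict(X_va)), 3))
```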

From the above discussions, we can conclude that many algorithms and models predict deforestation with high accuracy but are restricted to a limited region and offer no temporal predictions. Further, similar features, along with additional ones such as historical data, climatic data, and population growth rate, could be used to build a general model that takes any geographical location as input and predicts both its deforestation and the time within which the green cover should be restored to avoid crossing the threshold limit.
