Anomaly Detection Using Deep Learning Techniques


Summary

In recent years, anomaly detection has gained significant attention from both the academic community and business enterprises. In general, the term refers to a set of techniques implemented to identify unusual behavior in data (Schwartz & Jinka, 2015). Nevertheless, anomaly detection is a highly complicated and multifaceted subject, consisting of a large variety of strategies, methods, and frameworks. IT and network intrusion analytics, medical diagnostics, financial fraud protection, manufacturing quality control, marketing, and other areas might benefit from implementing anomaly detection (Cloudera, 2020). The application of anomaly detection is associated with a number of advantages, such as time-cost efficiency, adaptability, classification of unseen data, automated analysis, and improved security.

At present, various organizations are attempting to minimize the human factor to improve the performance of their systems. From these considerations, it is necessary to enhance the current technologies for appropriate feature extraction, defining normal behavior, and handling the imbalanced distribution of normal and abnormal data. For this purpose, a large number of experts recommend implementing deep learning approaches. Deep learning techniques allow data from multiple sources to be integrated automatically, thus increasing the overall efficiency of anomaly detection (Cloudera, 2020). As a result, deep learning-based algorithms demonstrate exceptional potential for improving anomaly detection techniques.

Background of the Study

The most significant phase of anomaly detection is modeling normal behavior and then exploiting that model to identify irregularities. The application of deep learning approaches implies several advantages. First, deep learning allows for the comprehensive analysis of high-dimensional data acquired from multiple sources (Cloudera, 2020). Consequently, this approach eliminates the need to analyze sources and variables individually and thus increases the overall speed and efficiency of the process. Second, due to the high flexibility of the approach, only minimal tuning of the framework, such as the number of layers and units per layer, is needed to model interactions among multiple variables (Cloudera, 2020). Lastly, deep learning is associated with an overall increase in performance.
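As an illustration of this flexibility, the sketch below builds a small autoencoder whose depth and layer widths are controlled by a single list, so the architecture can be reconfigured without rewriting the model. The choice of PyTorch, the helper name, and the widths are assumptions introduced for illustration only; the study does not prescribe a specific framework.

    import torch
    import torch.nn as nn

    def make_autoencoder(widths):
        """Build a symmetric autoencoder from a list of layer widths,
        e.g. [64, 32, 8] gives an encoder 64->32->8 and a decoder 8->32->64."""
        encoder, decoder = [], []
        for a, b in zip(widths[:-1], widths[1:]):
            encoder += [nn.Linear(a, b), nn.ReLU()]
        reversed_widths = widths[::-1]
        for a, b in zip(reversed_widths[:-1], reversed_widths[1:]):
            decoder += [nn.Linear(a, b), nn.ReLU()]
        # Drop the trailing activations so the latent code and output stay linear.
        return nn.Sequential(*encoder[:-1]), nn.Sequential(*decoder[:-1])

    # Changing the number of layers or units per layer is a one-line edit.
    encoder, decoder = make_autoencoder([64, 32, 8])
    x = torch.randn(4, 64)
    reconstruction = decoder(encoder(x))
    print(reconstruction.shape)  # torch.Size([4, 64])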

Nevertheless, certain complications may occur when implementing the deep learning approach to identify abnormalities. The primary factors include contaminated normal examples, computational complexity, the need for human supervision, the distinction between normal and irregular behavior, threshold selection, and interpretability (Cloudera, 2020). All these factors might significantly affect the results of anomaly detection and should be addressed. Therefore, it is essential to improve the framework and eliminate the potential flaws of the approach with regard to anomaly detection in unsupervised and semi-supervised settings.

Motivation and Objectives

The literature identifies a large number of potential flaws that limit the benefits of existing anomaly detection methods. These considerations necessitate evaluating the existing approaches and devising solutions to the identified problems. Therefore, the primary objective of the research is to propose a systematic performance analysis of deep learning-based approaches and their implementation in anomaly detection in unsupervised and semi-supervised settings. To achieve the primary objective, it is essential to complete the following tasks:

  • Examine the state-of-the-art studies focused on machine learning and deep learning techniques for anomaly detection in data streams;
  • Compare the existing literature with an emphasis on the nature of the data, anomaly types, detection learning modes, window models, datasets, and evaluation criteria;
  • Analyze the current techniques and methods of anomaly detection based on the proposed taxonomy;
  • Recognize the potential challenges that might obstruct the direction of the research;
  • Review deep learning methods for general semi-supervised anomaly detection;
  • Design an architectural model for time-series-based anomaly detection methods;
  • Evaluate the performance of the proposed approach using metrics such as precision, recall, and accuracy (a minimal evaluation sketch follows this list).
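To make the last objective concrete, the following is a minimal sketch of the evaluation step. The label convention (1 = anomaly), the example scores, and the decision threshold are hypothetical placeholders rather than values from the study.

    import numpy as np
    from sklearn.metrics import accuracy_score, precision_score, recall_score

    # Hypothetical ground-truth labels (1 = anomaly, 0 = normal) and anomaly
    # scores produced by some detection model.
    y_true = np.array([0, 0, 1, 0, 1, 0, 0, 1])
    scores = np.array([0.10, 0.30, 0.90, 0.20, 0.70, 0.40, 0.15, 0.85])

    # Assumed decision threshold separating normal from anomalous points.
    threshold = 0.5
    y_pred = (scores >= threshold).astype(int)

    print("precision:", precision_score(y_true, y_pred))
    print("recall:   ", recall_score(y_true, y_pred))
    print("accuracy: ", accuracy_score(y_true, y_pred))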

Literature Review

A detailed review of anomaly detection techniques is essential to identify the potential challenges and theoretical background present in the literature. The implementation of deep learning methods in anomaly detection has received significant attention in recent years, and a large number of experts are continually improving the existing frameworks. Bulusu et al. (2020) investigate the existing research concerning deep learning model architectures and anomaly detection techniques. The authors present an extensive list of prior studies and identify the classification type (supervised/semi-supervised/unsupervised) and major contributions of each (Bulusu et al., 2020). Kwon et al. (2017) follow a similar principle and discuss deep learning-based methods for identifying abnormalities within network applications. Ultimately, a large amount of theoretical research has been conducted in recent years, and it is essential to review the existing studies before collecting the data and developing the framework for the current work.

While some research provides general guidelines for deep learning models, other experts emphasize the practical value of their studies. In “Deep Semi-Supervised Anomaly Detection,” Ruff et al. (2020) discuss the advantages of incorporating labeled anomalies, noting that existing deep semi-supervised approaches remain domain-specific. The authors present an end-to-end methodology for general semi-supervised anomaly detection (Ruff et al., 2020). They also introduce a theoretical framework based on the idea that the entropy of the latent distribution for normal data should be lower than that of the anomalous distribution, which can serve as a theoretical interpretation for the method in the thesis work (Ruff et al., 2020). Ultimately, the authors evaluate their approach on the MNIST, Fashion-MNIST, and CIFAR-10 datasets along with other anomaly detection benchmarks, demonstrating that it is on par with or even outperforms competitors and displays substantial performance improvements even when provided with only a small amount of labeled data (Ruff et al., 2020).
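The entropy argument can be illustrated with a compact hypersphere-style latent objective. The PyTorch sketch below is only inspired by that idea and is not the authors' reference implementation: the encoder architecture, the fixed center c, and the weighting term eta are assumptions introduced for illustration.

    import torch
    import torch.nn as nn

    class Encoder(nn.Module):
        """Small fully connected encoder mapping inputs to latent vectors."""
        def __init__(self, in_dim=20, latent_dim=8):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(in_dim, 32), nn.ReLU(),
                nn.Linear(32, latent_dim),
            )

        def forward(self, x):
            return self.net(x)

    def semi_supervised_loss(z, labels, c, eta=1.0, eps=1e-6):
        """Pull unlabeled/normal points toward the center c and push labeled
        anomalies away by penalizing the inverse squared distance.
        labels: 0 = unlabeled or normal, 1 = labeled anomaly."""
        dist = torch.sum((z - c) ** 2, dim=1)
        loss_normal = dist[labels == 0].sum()
        loss_anomaly = eta * (1.0 / (dist[labels == 1] + eps)).sum()
        return (loss_normal + loss_anomaly) / z.shape[0]

    # Hypothetical usage with random data: two samples are labeled anomalies.
    encoder = Encoder()
    x = torch.randn(16, 20)
    labels = torch.zeros(16, dtype=torch.long)
    labels[:2] = 1
    c = torch.zeros(8)  # fixed hypersphere center (an assumption here)
    loss = semi_supervised_loss(encoder(x), labels, c)
    loss.backward()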

Furthermore, for the sake of the current work, it is essential to address unsupervised anomaly detection models as well. Zong et al. (2018) present a Deep Autoencoding Gaussian Mixture Model (DAGMM) for unsupervised anomaly detection. This model utilizes a deep autoencoder to generate a low-dimensional representation and reconstruction-error features for each data entry, which are then fed into a Gaussian Mixture Model (GMM) (Zong et al., 2018). Instead of using decoupled two-stage training and the standard Expectation-Maximization (EM) algorithm, DAGMM jointly optimizes the parameters of the deep autoencoder and the mixture model in an end-to-end fashion, leveraging a separate estimation network to facilitate the parameter learning of the mixture model (Zong et al., 2018). The joint objective balances autoencoding reconstruction, density estimation of the latent representation, and regularization; it helps the autoencoder escape less attractive local optima, further reduces reconstruction errors, and avoids the need for pre-training (Zong et al., 2018). Experimental results on several public benchmark datasets show that DAGMM significantly outperforms state-of-the-art anomaly detection techniques, achieving up to a 14% improvement in the standard F1 score (Zong et al., 2018).
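As a structural sketch only, the following PyTorch code wires together a compression network (autoencoder plus a reconstruction-error feature) and an estimation network that outputs soft mixture memberships. The layer sizes are assumptions, and the GMM sample-energy and covariance-regularization terms from the paper are omitted for brevity.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DAGMMSketch(nn.Module):
        """Simplified DAGMM-style wiring: autoencoder + estimation network."""
        def __init__(self, in_dim=20, latent_dim=2, n_components=4):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(in_dim, 16), nn.Tanh(),
                                         nn.Linear(16, latent_dim))
            self.decoder = nn.Sequential(nn.Linear(latent_dim, 16), nn.Tanh(),
                                         nn.Linear(16, in_dim))
            # Estimation network sees the latent code plus one error feature.
            self.estimator = nn.Sequential(nn.Linear(latent_dim + 1, 10), nn.Tanh(),
                                           nn.Linear(10, n_components))

        def forward(self, x):
            z_c = self.encoder(x)
            x_hat = self.decoder(z_c)
            # Relative reconstruction error as the extra feature (a simplification).
            rec_err = torch.norm(x - x_hat, dim=1, keepdim=True) / (
                torch.norm(x, dim=1, keepdim=True) + 1e-8)
            z = torch.cat([z_c, rec_err], dim=1)
            gamma = F.softmax(self.estimator(z), dim=1)  # soft mixture memberships
            return x_hat, z, gamma

    # Hypothetical training step on random data: only the reconstruction loss is
    # shown; the paper's GMM sample-energy and regularization terms would be
    # added here to form the full end-to-end objective.
    model = DAGMMSketch()
    x = torch.randn(32, 20)
    x_hat, z, gamma = model(x)
    loss = F.mse_loss(x_hat, x)
    loss.backward()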

Interpretation of the Literature Review

The literature review demonstrates that various approaches can achieve competitive performance on different data sets and under different classification settings. Nevertheless, there is a research gap concerning time-series-based anomaly detection, and it is crucial to address this topic via deep learning-based approaches. Therefore, it is necessary to analyze the present methods, evaluate the effectiveness of the existing frameworks with regard to time-series data sets, and extend the scope of the research by examining other types of data in different domains.

Proposed Methodology

The current research focuses on a new architectural model for detecting anomalies using deep learning techniques. The process is carried out in three major stages: analysis of the existing literature, development of the framework and data collection, and, ultimately, data analysis and discussion. The framework and data collection methods are based on the methodology described below.

First, it is essential to model normal behavior and establish thresholds for identifying abnormalities in the data. For this purpose, the expert analyzes the existing data, which comprises both normal and abnormal records. Each data point is then assigned a score that represents its deviation from the established normal behavior. The second step is the setup of a threshold that separates normal and abnormal behavior: all data points above the threshold are treated as anomalies, while those below it are considered normal. Furthermore, the expert can manually adjust the threshold value to tune the results of the anomaly detection framework and increase the accuracy of the final output. This approach is generally considered semi-supervised due to the direct involvement of the expert and the deliberate setup of the thresholds. According to the initial estimations, the thesis will utilize the described framework to collect data and achieve the stated objectives.
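A minimal sketch of this threshold step is shown below, assuming deviation scores have already been produced by a trained model of normal behavior. The score values, the percentile rule, and the cut-off are placeholders that the expert would adjust in practice.

    import numpy as np

    # Deviation scores for data assumed to reflect normal behavior
    # (hypothetical values standing in for a trained model's output).
    rng = np.random.default_rng(0)
    normal_scores = rng.gamma(shape=2.0, scale=0.1, size=1000)

    # Initial threshold taken from a high percentile of the normal scores;
    # the expert can later raise or lower it to trade precision against recall.
    threshold = np.percentile(normal_scores, 99)

    # New observations scored with the same deviation measure (placeholders).
    new_scores = np.array([0.15, 0.42, 1.30, 0.08, 0.95])
    is_anomaly = new_scores > threshold  # True = flagged as an anomaly
    print(threshold, is_anomaly)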

Contents

The structure of the current research is subject to change; nevertheless, the initial organization is the following:

  • Chapter 1 covers the research background, problem statement, motivation, aim, objectives, and significance of the research;
  • Chapter 2 is the literature review that examines the existing research related to anomaly detection using deep learning techniques;
  • Chapter 3 introduces the research methodology that covers the detailed description of the proposed research design, approach, data collection, analysis, and sample size of the data;
  • Chapter 4 overviews the experimental results and provides the analysis of the collected data;
  • Chapter 5 includes the discussion, major findings, limitations of the study, recommendations for future research, and conclusion.

Schedule

The writing of the thesis comprises seven stages: literature review, research proposal, development of the experimental framework, data collection, data analysis, thesis write-up, and thesis submission. The schedule for the research is as follows (initial estimations).


References

Bulusu, S., Kailkhura, B., Varshney, P., & Song, D. (2020). Anomalous example detection in deep learning: A survey. IEEE Access, 8, 132330–132347.

Cloudera Fast Forward. (2020). Deep learning for anomaly detection. Cloudera Fast Forward Labs.

Kwon, D., Kim, H., Kim, J., Suh, S. C., Kim, I., & Kim, K. J. (2017). A survey of deep learning-based network anomaly detection. Cluster Computing, 22, 949–961.

Ruff, L., Vandermeulen, R. A., Görnitz, N., Binder, A., Müller, E., Müller, K.-R., & Kloft, M. (2020). Deep semi-supervised anomaly detection. ICLR 2020 (pp. 1–23).

Schwartz, B., & Jinka, P. (2015). Anomaly detection for monitoring: A statistical approach to time series anomaly detection. O’Reilly Media.

Zong, B., Song, Q., Min, M. R., Cheng, W., Lumezanu, C., Cho, D., & Chen, H. (2018). Deep autoencoding Gaussian mixture model for unsupervised anomaly detection. ICLR 2018 (pp. 1–19).
