Adaptive Anomaly Detection in Cloud using Robust and Scalable Principal Component Analysis
by
Abstract
This paper proposes a novel and scalable model for automatic anomaly detection in a large system such as a cloud. Anomaly detection issues early warnings of unusual behavior in dynamic environments by learning system characteristics from normal operational data. Anomalies in large systems are difficult to detect due to heterogeneity, dynamicity, scalability, hidden complexity, and time limitations. To detect anomalous activity in the cloud, we need to monitor the datacenter and collect cloud performance data. In this paper, we propose an adaptive anomaly detection mechanism that investigates principal components of the performance metrics. It transforms the performance metrics into a low-rank matrix and then calculates the orthogonal distance using the Robust PCA algorithm. The proposed model updates itself recursively, learning and adjusting a new threshold value in order to minimize reconstruction errors. This paper also investigates robust principal component analysis in distributed environments using Apache Spark as the underlying framework, specifically addressing cases in which normal operation might exhibit multiple hidden modes. The accuracy and sensitivity of the model are tested on Google datacenter traces and Yahoo! datasets. The model achieves 87.24% accuracy.
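To make the "orthogonal distance" idea concrete, here is a minimal sketch of PCA-based anomaly scoring in plain NumPy. This is not the paper's Spark-based Robust PCA with recursive threshold updates; it is the simpler classical-PCA version of the same idea: fit a low-rank subspace on normal data, score each sample by its distance to that subspace, and flag samples whose score exceeds a threshold learned from the normal scores. Function names (`fit_pca`, `orthogonal_distance`) and the synthetic data are illustrative assumptions.

```python
import numpy as np

def fit_pca(X, k):
    """Fit a rank-k PCA model on normal operational data X (n_samples x n_features)."""
    mu = X.mean(axis=0)
    # SVD of the centered data; the top-k right singular vectors span the subspace.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def orthogonal_distance(X, mu, components):
    """Reconstruction error: distance from each sample to the rank-k subspace."""
    Xc = X - mu
    proj = Xc @ components.T @ components  # projection onto the low-rank subspace
    return np.linalg.norm(Xc - proj, axis=1)

# Synthetic "normal" performance metrics lying near a 2-D subspace of 5-D space.
rng = np.random.default_rng(0)
basis = rng.normal(size=(2, 5))
normal = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 5))

mu, comps = fit_pca(normal, k=2)

# Threshold from the normal scores (99th percentile here, as an illustration);
# the paper instead adapts this threshold recursively as new data arrives.
threshold = np.percentile(orthogonal_distance(normal, mu, comps), 99)

anomaly = np.full((1, 5), 5.0)  # a point far off the learned subspace
print(orthogonal_distance(anomaly, mu, comps)[0] > threshold)
```

Robust PCA differs from this sketch in that it decomposes the data matrix into a low-rank part plus a sparse part, so a few grossly corrupted samples do not skew the learned subspace; the scoring-by-orthogonal-distance step is analogous.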
MY COMMENT: By the way, the paper references the MASF technique, which I have enhanced and have been using for years (check my SETDS methodology) to capture anomalies (exceptions) and sudden short-term trends across huge server farms (20,000+ servers), including private and public clouds. Note that my approach is much, much simpler, and although MASF indeed has a high rate of false positives, SETDS handles that well.