Friday, June 19, 2015

Papers with citations to my work: 1. "Automated detection of performance regressions using statistical process control techniques"

Abstract
The goal of performance regression testing is to check for performance regressions in a new version of a software system. Performance regression testing is an important phase in the software development process. Performance regression testing is very time consuming yet there is usually little time assigned for it. A typical test run would output thousands of performance counters. Testers usually have to manually inspect these counters to identify performance regressions. In this paper, we propose an approach to analyze performance counters across test runs using a statistical process control technique called control charts. We evaluate our approach using historical data of a large software team as well as an open-source software project. The results show that our approach can accurately identify performance regressions in both software systems. Feedback from practitioners is very promising due to the simplicity and ease of explanation of the results.

6 authors, including: Thanh H. D. Nguyen (Queen's University), Bram Adams (Polytechnique Montréal), and Ahmed E. Hassan (Queen's University).

Trubin et al. [18] proposed the use of control charts for in-field monitoring of software systems, where performance counters fluctuate according to the input load. Control charts can automatically learn whether a deviation is outside a control limit, at which point the operator can be alerted. The use of control charts for monitoring inspires us to explore them for the study of performance counters in performance regression tests. A control chart built from the counters of previous test runs may be able to detect "out of control" behaviours, i.e., deviations, in the new test run.
...
[18] I. Trubin. Capturing workload pathology by statistical exception detection system. In Computer Measurement Group (CMG), 2005.
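The control-chart mechanism described in the excerpt above can be sketched in a few lines. This Python sketch is illustrative only, not the authors' implementation: control limits are learned from the counter values of previous, known-good test runs as the mean plus or minus k standard deviations, and samples from a new test run that fall outside those limits are flagged as "out of control".

```python
from statistics import mean, stdev

def control_limits(baseline, k=3.0):
    """Learn centre line and k-sigma control limits from baseline counter samples."""
    m, s = mean(baseline), stdev(baseline)
    return m - k * s, m, m + k * s

def out_of_control(new_run, baseline, k=3.0):
    """Return (index, value) pairs of new-run samples outside the control limits."""
    lcl, _, ucl = control_limits(baseline, k)
    return [(i, v) for i, v in enumerate(new_run) if v < lcl or v > ucl]

# Hypothetical CPU% counter from previous, known-good test runs.
baseline = [52, 49, 51, 50, 48, 53, 50, 49, 51, 50]
# New test run with a suspicious spike at the third sample.
new_run = [50, 51, 74, 49, 50]

violations = out_of_control(new_run, baseline)  # flags the spike of 74
```

The simplicity of this rule is exactly what the paper's practitioners liked: the result is easy to explain, because each flagged point is just a counter value outside limits learned from history.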
_______
The next paper that cites my work is covered in the next post:

My Statistics at the ResearchGate: 238 publication downloads, 618 views, 13 citations

I see that interest in my publications is growing:

[Chart: publication downloads]
So you may also want to look at my 15 publications at https://www.researchgate.net/profile/Igor_Trubin. You are welcome!
Check my next posts with papers that have citations to my work:

Saturday, June 13, 2015

Anomaly detection by using R

8/2017 UPDATE: My ML-based anomaly and pattern change detection tool, SETDS, was redeveloped on R. See more details:


Igor = I go R. I have redeveloped SETDS on R = SonR


_______________________________________ original post:
I have already suggested (and partially tested) using R to develop an exception (anomaly) detector by applying my SETDS methodology. You can find some simple examples in my CMG.org papers, here, or in the following post:


SEDS-Lite: Using Open Source Tools (R, BIRT, MySQL) to Report and Analyze Performance Data 


I did not use any specific statistical packages for that (e.g., qcc), but I now see that some very specialized ones have been appearing that could be used to detect different types of anomalies.

Here is one at  Twitter Blogs:
Introducing practical and robust anomaly detection in a time series

I am not sure whether that approach evaluates (scores) the significance of an anomaly the way the EV meta-metric does in my SETDS methodology. At least I see that it puts anomalies into categories such as "global anomalies" and "local anomalies".
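Without knowing the package's internals, the global-versus-local distinction itself can be illustrated with a rough Python sketch. This is not the algorithm of Twitter's AnomalyDetection package, and the function names are mine: here a "global" anomaly is extreme relative to the whole series, while a "local" anomaly stays inside the global limits but breaks the pattern of its own seasonal slot (the same position in each period).

```python
from statistics import mean, stdev

def limits(xs, k=3.0):
    """k-sigma limits around the mean of xs."""
    m, s = mean(xs), stdev(xs)
    return m - k * s, m + k * s

def classify_anomalies(series, season, k=3.0):
    """Label each point 'global', 'local', or None.

    'global': outside the k-sigma limits of the whole series;
    'local' : inside the global limits, but outside the limits of its own
              seasonal slot (other points at the same position per period).
    Illustrative only; not Twitter's actual decomposition-based method.
    """
    g_lo, g_hi = limits(series, k)
    labels = []
    for i, v in enumerate(series):
        if v < g_lo or v > g_hi:
            labels.append("global")
            continue
        slot = [x for j, x in enumerate(series)
                if j % season == i % season and j != i]
        s_lo, s_hi = limits(slot, k)
        labels.append("local" if (v < s_lo or v > s_hi) else None)
    return labels

# Four periods of a 4-sample pattern: index 12 breaks its slot's pattern
# (local anomaly), index 15 is extreme for the whole series (global).
series = [10, 20, 30, 40,
          11, 20, 30, 40,
           9, 20, 30, 40,
          40, 20, 30, 200]
labels = classify_anomalies(series, season=4)
```

With this toy series, the spike of 200 comes out "global" while the 40 at index 12 comes out "local": it is unremarkable for the series as a whole but far outside the 9-11 range of its own slot.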
 I may want to test the package. You?