Tuesday, November 20, 2012

SETDS Methodology

Preparing my upcoming CMG'12 presentation about SEDS-Lite, I am trying to formulate what SEDS, or its extended version SETDS, actually is.

SE(T)DS stands for Statistical Exception (and Trend) Detection System. It is not an application, but it can be implemented by developing one, and I have done that several times (using SAS, COGNOS, BIRT, R and other programming/reporting systems). However, developing SETDS-like reports/apps is just the beginning. The most important part of SETDS is how to use it for Systems Capacity Management and how to build it into Service Management processes. The set of CMG papers I have written since 2001 (the list is in the very first post of this blog) describes that in detail.

By the way, it is not absolutely necessary to develop a SETDS application, because starting from BMC PP and Visualizer (now Capacity Optimizer, Perceiver and ProactiveNet), a lot of performance tools have SETDS-like features, and this blog has several posts analyzing them (e.g. see Gartner's Magic Quadrant).

A Capacity Manager just needs to know how to use the home-made or vendor-based SETDS-like tool features efficiently, and SETDS is the method.

So bottom line is:

SETDS is the methodology (Trubin's methodology, if you wish) of using statistical filtering, pattern recognition, active baselining, dynamic vs. static thresholds, IT-Control Charts, Exception Value (EV) based reporting/smart alerting and EV-based trend detection to do Systems Capacity Management, including Capacity Planning and Performance Management.

What value could SETDS bring to a company? I will formulate that later, during and after my CMG'12 presentations, which you are welcome to attend (see the agenda: CMG'12)!

Wednesday, November 14, 2012

SAS code to build Control Chart

As I have already shared in "The Master of MASF" post, there was a CMG presentation (where, BTW, my work and this blog were mentioned) that illustrates how to use SAS to build MASF Control Charts.

The two parts of that presentation are available at www.MXG.com, along with a link to SAS code that can be used to build control charts:

Automatic Daily Monitoring of Continuous Processes Theory and Practice
Frank Bereznay and MP Welch (UKCMG2011) (SWCMG2011)
Filename - Size - Date Posted
ADMCP Part 1 (PDF) - 1,178 KB - May 17, 2011
ADMCP Part 2 (PDF) - 8,719 KB - May 17, 2011
Coding Sample (admcp_sample.sas) - 4 KB - May 17, 2011
HTMLBLUE SG 9.3 Style Preview (make 9.2 look like 9.3) - 31 KB - May 17, 2011
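For readers without SAS, the core MASF computation that such code implements can be sketched in a few lines of Python. This is a simplified illustration of the general technique (not a translation of admcp_sample.sas): group the reference data by hour, compute each group's mean and standard deviation, and flag current points outside mean ± 3 sigma.

```python
from statistics import mean, stdev

def masf_limits(history, k=3.0):
    """MASF-style control limits per hour-of-day group.

    history: list of (hour, value) pairs from the reference period.
    Returns {hour: (lcl, mean, ucl)}.
    """
    groups = {}
    for hour, value in history:
        groups.setdefault(hour, []).append(value)
    limits = {}
    for hour, values in groups.items():
        m, s = mean(values), stdev(values)
        limits[hour] = (m - k * s, m, m + k * s)
    return limits

def exceptions(current, limits):
    """Flag (hour, value) points falling outside the control band."""
    return [(h, v) for h, v in current
            if v < limits[h][0] or v > limits[h][2]]

# Reference data: CPU% for hours 9 and 10 over several weeks
history = [(9, 40), (9, 42), (9, 41), (9, 43),
           (10, 60), (10, 62), (10, 61), (10, 59)]
limits = masf_limits(history)
flagged = exceptions([(9, 41), (10, 90)], limits)  # 90% at 10 am is anomalous
```

Real implementations would group by hour-of-week and use many weeks of history, but the statistical idea is the same.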





Wednesday, October 24, 2012

Non-MASF-Based Statistical Techniques (Entropy-Based) for Anomaly Detection in Data Centers (and Clouds)

The following papers, published on Mendeley, criticize the MASF Gaussian assumption and offer other methods (Tukey and Relative Entropy) to detect anomalies statistically. (BTW, I tried to use entropy analysis to capture performance anomalies - check my other post.)

1. Statistical techniques for online anomaly detection in data centers
by Chengwei Wang, Krishnamurthy Viswanathan, Lakshminarayan Choudur, Vanish Talwar, Wade Satterfield, Karsten Schwan
  
Abstract
Online anomaly detection is an important step in data center management, requiring light-weight techniques that provide sufficient accuracy for subsequent diagnosis and management actions. This paper presents statistical techniques based on the Tukey and Relative Entropy statistics, and applies them to data collected from a production environment and to data captured from a testbed for multi-tier web applications running on server class machines. The proposed techniques are lightweight and improve over standard Gaussian assumptions in terms of performance.
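As a quick illustration of the Tukey approach (my own sketch, not the paper's code): points outside the fences [Q1 - k*IQR, Q3 + k*IQR] are flagged, with no Gaussian assumption about the data.

```python
def tukey_anomalies(data, k=1.5):
    """Flag points outside the Tukey fences [Q1 - k*IQR, Q3 + k*IQR].

    Quartiles are taken with a simple nearest-rank rule; no Gaussian
    assumption is made about the distribution of the data.
    """
    s = sorted(data)
    n = len(s)
    q1, q3 = s[n // 4], s[(3 * n) // 4]
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [x for x in data if x < lo or x > hi]

# Mostly steady response times with one spike
sample = [10, 11, 9, 10, 12, 10, 11, 95]
outliers = tukey_anomalies(sample)
```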

2. Online detection of utility cloud anomalies using metric distributions
by Chengwei Wang, V. Talwar, K. Schwan, P. Ranganathan

Abstract

The online detection of anomalies is a vital element of operations in data centers and in utility clouds like Amazon EC2. Given ever-increasing data center sizes coupled with the complexities of systems software, applications, and workload patterns, such anomaly detection must operate automatically, at runtime, and without the need for prior knowledge about normal or anomalous behaviors. Further, detection should function for different levels of abstraction like hardware and software, and for the multiple metrics used in cloud computing systems. This paper proposes EbAT - Entropy-based Anomaly Testing - offering novel methods that detect anomalies by analyzing for arbitrary metrics their distributions rather than individual metric thresholds. Entropy is used as a measurement that captures the degree of dispersal or concentration of such distributions, aggregating raw metric data across the cloud stack to form entropy time series. For scalability, such time series can then be combined hierarchically and across multiple cloud subsystems. Experimental results on utility cloud scenarios demonstrate the viability of the approach. EbAT outperforms threshold-based methods with on average 57.4% improvement in accuracy of anomaly detection and also does better by 59.3% on average in false alarm rate with a `near-optimum' threshold-based method.
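As a toy illustration of the entropy idea (my own sketch, not the EbAT implementation): bin the raw metric samples of each time window, compute the Shannon entropy of the bin distribution, and watch for shifts in the resulting entropy time series.

```python
from math import log2
from collections import Counter

def window_entropy(samples, bin_width=10):
    """Shannon entropy (bits) of the binned distribution of one window."""
    bins = Counter(int(x // bin_width) for x in samples)
    total = sum(bins.values())
    return -sum((c / total) * log2(c / total) for c in bins.values())

# A steady window concentrates in one bin (low entropy); a dispersed,
# anomalous window spreads across many bins (higher entropy).
steady = [51, 52, 53, 51, 52, 53, 51, 52]
dispersed = [5, 25, 45, 65, 85, 15, 35, 95]
e_steady = window_entropy(steady)        # all samples in one bin -> 0 bits
e_dispersed = window_entropy(dispersed)  # 8 samples in 8 bins -> 3 bits
```

EbAT then aggregates such entropy series hierarchically across the cloud stack; this sketch shows only the per-window measurement.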




 

Tuesday, October 23, 2012

MASF Control Charts Against DB2 Performance Data

I have done that before... I used for that my own variation of the MASF Control Chart, called the "IT-Control Chart". You can see an example in my older post: Power of Control Charts and IT-Chart Concept (Part 1).
But I am not the only one doing that! I have found the following paper in MeasureIT:

The paper has a good MASF reference, a list of DB2 performance metrics that are well suited to analysis with MASF Control Charts, and a few examples of Control Charts in the form of a 24-hour profile. One example is below (linked to a picture from the original paper published on the Internet):

BTW, the actual data curve is hourly aggregated data for the particular month (October), compared with the historical baseline. A similar approach was taken in the exercise I published in the following post: Adrian Heald: A simple control chart using Captell Version 6

Tuesday, October 16, 2012

Availability vs. Capacity

Continuing the previous posts about the "Battle between 'Gut-feeling' and Engineering"....

Engineer 2: Igor - if you would like to find further extensions of your equation, you might check out Volume 1 of "Breaking the Availability Barrier," which I co-authored and which is available on Amazon. Also, check out several papers I published in the Availability Digest in the Geek Corner (http://www.availabilitydigest.com/articles.htm). A subscription is free.


Monday, October 8, 2012

Systems Availability Arena: Battle between "Gut-feeling" and Engineering. Round 3. (Rounds 1 and 2 are in the previous posts.)

UPDATE: the start is here
UPDATE: other rounds are here

Wednesday, October 3, 2012

Systems Availability Arena: Battle between "Gut-feeling" and Engineering. Round 2.

This is a continuation of the previous post.

Gut-feeler 1: Gut feelings aren't always bad - for instance, when working on an IT project meant to support really critical business processes, and hence with a lot of money or even lives at stake, your gut feeling might be that standard clustering just isn't good enough and you need something significantly better here.
 
Then it comes to curiosity - is there something better around than the usual standard clustering? Is there some other IT infrastructure, one that is fault tolerant and self-healing, providing much more reliability right out of the box than you could ever achieve using the plain vanilla stuff and the most sophisticated clustering conceivable?

If you are that curious, chances are you will end up at NonStop.

Not being curious and just doing calculations, you might end up adding more nodes to your standard cluster, hoping to make it more reliable - which in the real world often turns out to be a false hope...

But don't get me wrong, I'm not at all against calculations. A very important one is on capacity: will (n - 1) nodes still support my workload when a node goes down for whatever reason? That's often overlooked...

Engineer 2: Two comments, one on the Trubin law, and one on the comments [above]. ... Adding a node of m 9s to a node of m 9s adds m 9s to the cluster. The overall availability of a cluster of n nodes, each with m 9s availability, is mn 9s... For instance, a three-node cluster of nodes with 2 9s availability will have six nines...

Let f = the failure probability of a node [A = (1-f)]. If f is an exact number of nines, then the failure probability of a node is f = 0.1^m, where m is the number of nines (for instance, for three nines, f = 0.1^3 = 0.001 and A = (1 - 0.1^3) = 0.999). For an n-node cluster, its availability is 1-(1-A)^n = 1-[1-(1-0.1^m)]^n = 1-(0.1^m)^n = 1-0.1^(mn).

In general, if a node has a failure probability of f, then an n-node cluster has an availability of 1-f^n. Two nodes with availabilities of 0.95 will have an availability of 0.9975.

Of course, this assumes that the cluster fails only if all nodes fail. Generally, a cluster can withstand the failure of some nodes but not all. In this case, the above relations can be modified to accommodate this situation.

"Gut-feeler 1"'s suggestion that adding nodes does not result in this additional availability is quite correct. The above relations apply only to hardware failures (or whatever failures might be included in the nodal availability), and are accurate for those. However, once the hardware availability becomes high (say four 9s), other factors that are not node related come into play, such as software bugs, operator errors, and environmental faults (power, air conditioning). These limit the practical availability that can be achieved. In effect, after a certain point, increases in hardware availability become irrelevant to system availability.

Thanks for starting a very interesting and meaningful thread, Igor. 
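The nines arithmetic in Engineer 2's comment is easy to check numerically. Below is a small Python sketch of my own (an illustration only), assuming the cluster fails only when all nodes fail:

```python
def cluster_availability(node_availability, n):
    """Availability of an n-node cluster that fails only if all nodes fail."""
    f = 1.0 - node_availability          # failure probability of one node
    return 1.0 - f ** n                  # at least one node must survive

# Two nodes at 0.95 -> 0.9975, as in the comment above
a_two = cluster_availability(0.95, 2)

# A three-node cluster of 2-nines nodes (A = 0.99) yields six nines
a_six = cluster_availability(0.99, 3)
```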

See the next post for the next round. 
 

Tuesday, October 2, 2012

Systems Availability Arena: Battle between "Gut-feeling" and Engineering!

I posted my Cluster Availability 9's Equation to the LinkedIn Continuous Availability forum and got 19 comments, divided into roughly two camps: "Gut-feelers" and "Engineers". Below are the first two comments. (See the next posts for the other comments.)


Gut-feeler 1 "Nice formula, looks good in theory – but won’t hold true in the real world. The reason for this is the underlying assumption of an ideal cluster, which does not exist in the real world. When looking at some real world implementation like the Oracle RAC cluster, you will find that a simple two-node cluster configuration will typically deliver somewhere between three and four nines of availability.

Now, will adding a third node to that cluster add another 9 to the availability figure? Will a ten-node cluster really provide 99.999999999999% availability? Will a cluster with a hundred nodes run continuously for thousands of years without any outage?

Certainly not, and talking to system administrators running typical cluster installations will quickly reveal that large clusters are quite complex and difficult to handle, hence more prone to failure than simple two-node clusters.

Even when looking at the HP NonStop architecture – which comes pretty close to the ideal cluster – the formula would not apply. A NonStop system (which internally is indeed a cluster, each NonStop CPU resembling a node) delivers roughly five nines of availability – but there is no significant availability difference between systems with, e.g., four and sixteen CPUs (cluster nodes).

So it is not so important how many nodes you have – but it is very important what kind of cluster you have!

Engineer 1: I know this particular formula is too simple for the real world, so I completely agree with your comment. But still, the complexity of big clusters can be modeled by more complex mathematical models, e.g. by adding more boxes with parallel and series types of connections. The formula will be much uglier, but useful, I believe... Plus, each individual node could be decomposed into some structure to model both the HW and SW parts. The approach is described in some books I mentioned in my other posts, and there is supposed to be a tool to do that, but I am not aware of any. Are you?

In my real life, this type of calculation/modeling is just a starting point to get a rough estimate, which is then adjusted using some monitoring data. What I do not like is when some Architects make decisions about the level of cluster redundancy without ANY calculation (!), just based on their gut feelings....

(NB: Real names can be found in the actual LinkedIn forum thread)
UPDATE: see the start point here
 

Tuesday, September 25, 2012

Wavelets and Anomaly Detection - another CMG'12 paper

In the CMG'12 conference agenda I see another interesting paper has appeared, written by Dima Seliverstov, whom I met before at another CMG conference and mentioned in this blog.

His paper is called "Introduction to Wavelets and their Application for Computer Performance Trend and Anomaly Detection".


The very short abstract is:
"In this session, I will present a technique to identify trends and anomalies in Performance data using wavelets. I will answer the following questions: Why use Wavelets? What are Wavelets? How do I use them?"

CMG'12 conference Monday: 2:45 pm ‐ 3:45 pm Subject Area: Capacity Planning

I plan to attend. You?
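Without preempting Dima's paper, the basic wavelet idea can be shown with a single-level Haar transform (a toy sketch of my own): pairwise averages capture the trend, while pairwise differences expose sudden jumps that may be anomalies.

```python
def haar_step(series):
    """One level of the (unnormalized) Haar transform.

    Returns (approximation, detail): pairwise means carry the trend,
    pairwise half-differences carry local changes / spikes.
    """
    approx = [(a + b) / 2 for a, b in zip(series[0::2], series[1::2])]
    detail = [(a - b) / 2 for a, b in zip(series[0::2], series[1::2])]
    return approx, detail

# A flat series with one sudden jump
series = [10, 10, 10, 10, 10, 50, 10, 10]
approx, detail = haar_step(series)

# The large detail coefficient locates the jump (window index 2)
spike_windows = [i for i, d in enumerate(detail) if abs(d) > 5]
```

Repeating the step on the approximation gives coarser and coarser trend views, which is the multi-resolution property wavelet methods exploit.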

Monday, September 17, 2012

LinkedIn Discussion around Trubin's Availability Formula

The previous post, "Cluster Availability 9's Equation", triggered a very good discussion on the LinkedIn Continuous Availability Forum. It currently has 19 comments (!)... I plan to re-post some of the comments from the discussion here in my blog. (UPDATE: it is re-posted here)

BTW, in one of the comments, Bill Highleyman (co-author of Breaking the Availability Barrier) pointed out a mistake in my formula, which I corrected by replacing "n+n" with "mn". He also provided an excellent resource about availability calculation: he writes articles at The Geek Corner of the Availability Digest. One of the articles there extends the subject of this (and a couple of previous) posts and is called "Calculating Availability – Redundant Systems".

As I suspected, my formula (the "Trubin law") is just a particular case of a more generic rule Bill Highleyman formulates in that article, which says:

"... Adding a spare node adds the number of nines associated with that node to the system availability but reduced by the increase in failure modes.

That is, adding an additional spare node adds the number of 9s of that node to the system availability – almost. This improvement in availability is reduced a bit by the increase in the number of failure modes in the system. More nodes mean more failure modes..."


 

Friday, September 14, 2012

Cluster Availability 9's Equation


Based on the "Trubin" Law (see my previous post), each additional node adds one more 9 to the overall cluster availability. That is exactly true only if a single node has exactly one 9 (A=0.9), as the above "Trubin" equation shows.

But how would that work for other single-node availability numbers? What if a node has two or three 9's? I have generalized my previous equation to cover that, and it shows that the number of 9's in the cluster availability will increase in arithmetic progression (as a sequence)!
Check more in the next post "LinkedIn Discussion around Trubin's Availability Formula"


Wednesday, September 12, 2012

The Right Number of Cluster Redundancy to Achieve the Availability Goal. Trubin's Law #4!

...Still playing with the availability algebra exercise that I started in my other post: How to Calculate Availability of Clustered Infrastructure for Multi-Tier Application.
I have built the following two charts to see how the same availability goal can be achieved with different levels of redundancy.


That of course possible if the less redundant configuration has more available individual components.

By the way, it is interesting that if the component availability has one 9 (90.00%), then increasing the redundancy by +1 gives one additional 9 to the cluster availability.

So, the equation
has the following solution: A = 0.9 for any integer n within the interval (0, ∞)

It is probably a known law in availability algebra. If not, let's call it Trubin's law #4! ;)
________
This Saga of 9's continues in the next post:

Cluster Availability 9's Equation

Friday, September 7, 2012

EV Based Trend Display - Trubin's Criterion!

A very typical task is to display the trend direction on a dashboard-type report. Here is an example from the SAS website:

But how to choose the direction correctly? Based on what? My suggestion is to use the Exception Value (EV) meta-metric (check how it should be calculated here: EV-Control Chart). Indeed, for the given most recent time period (e.g. day, week or month), EV will be greater than 0 if more UCL crossings happened than LCL crossings; EV will be 0 if everything occurred within the UCL-LCL band; and EV will be less than 0 if more LCL crossings happened than UCL crossings.

That criterion has already been used to detect the most recent trends, but it could be used the same way to choose the right direction for the trend arrow on dashboards!
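A minimal Python sketch of this criterion (my own illustration; EV is simplified here to the signed sum of excursions beyond the control limits - see the EV-Control Chart post for the full definition):

```python
def ev(actual, ucl, lcl):
    """Simplified Exception Value: signed sum of excursions
    outside the UCL-LCL band over the recent period."""
    total = 0.0
    for a, u, l in zip(actual, ucl, lcl):
        if a > u:
            total += a - u      # positive exception (upward excursion)
        elif a < l:
            total -= l - a      # negative exception (downward excursion)
    return total

def trend_arrow(ev_value):
    """Dashboard arrow direction per the EV-based criterion."""
    if ev_value > 0:
        return "up"
    if ev_value < 0:
        return "down"
    return "flat"

# Hypothetical hourly data with static limits for simplicity
ucl = [70, 70, 70, 70]
lcl = [30, 30, 30, 30]
arrow = trend_arrow(ev(actual=[50, 75, 80, 55], ucl=ucl, lcl=lcl))
```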

So Trubin's Criterion would be: trend arrow up if EV > 0, flat if EV = 0, and down if EV < 0.



Friday, August 31, 2012

Z Capacity Management without SAS and MXG

I have just commented on the following post on LinkedIn:

"Is there anyone using anything else besides SAS or SAS assisted tools for analyzing Z/OS SMF data? If so, what are you using?"


I am involved in some IBM activity to offer an alternative (to SAS) solution to process and analyze SMF data using TDS + SPSS + COGNOS (all IBM tools).

My part in this is to offer SETDS elements to include in the out-of-the-box COGNOS (and potentially SPSS) reporting, such as IT-Control Charts and EV-based anomaly and recent-trend detection.

Friday, August 17, 2012

How to Calculate Availability of Clustered Infrastructure for Multi-Tier Application

That is the task I am working on right now. I have made some progress, and the approach I found is to build an availability graph, considering the clustered infrastructure as a chain of parallel- and series-connected nodes, described here with formulas. Below is a simple example:


And the availability calculation formula will be:

A = A1*(1-(1-A2*A3)^n)*A4

You can play with different levels of redundancy "n" for the cluster here. Currently it is 2, but you could estimate it for n=3 or n=4. This approach opens the possibility to quantitatively justify your architectural decisions (not just with "best practices" or "gut feelings").

If you know the MTTR of each individual component (SW and HW), you can estimate the whole infrastructure's availability using this approach. But how to get those individual MTTRs? From vendors - good luck! Maybe from incident records? Or set up special monitoring for that (synthetic/robotic?).
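The formula above is easy to evaluate for different redundancy levels; here is a minimal Python sketch of it, with hypothetical per-component availabilities:

```python
def multi_tier_availability(a1, a2, a3, n, a4):
    """Availability of: A1 in series with an n-way cluster of
    (A2*A3) legs in parallel, in series with A4."""
    leg = a2 * a3                       # one clustered leg (series pair)
    cluster = 1.0 - (1.0 - leg) ** n    # n redundant legs in parallel
    return a1 * cluster * a4

# Hypothetical components: load balancer 0.999, app 0.99, DB 0.995,
# shared storage 0.999
a_n2 = multi_tier_availability(0.999, 0.99, 0.995, 2, 0.999)
a_n3 = multi_tier_availability(0.999, 0.99, 0.995, 3, 0.999)
# Adding a third leg to the cluster improves overall availability,
# but the series components A1 and A4 cap how far it can go.
```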

Other useful resources with formulas relevant to this:

Thursday, August 2, 2012

SEDS-Lite: Using Open Source Tools (R, BIRT, MySQL) to Report and Analyze Performance Data - my new CMG'12 paper

I wrote this paper with some help from Shadi G. (from Dublin, also an IBMer).
The paper is based on my blog postings:
SEDS-Lite Presentation at Southern CMG Meeting in the SAS Institute
SEDS-Lite Introduction
How To Build IT-Control Chart - Use the Excel Pivot Table!
BIRT based Control Chart

Below is the abstract:
Statistical Exception Detection (SEDS) is one of the variations of the learning-behavior-based performance analysis methodology developed, implemented and published by the author. This paper takes the main SEDS tools – the IT-Control Chart and the Exception (Anomaly) Detector - and shows how they can be built with Open Source BI tools, such as R, BIRT and MySQL, or just with a spreadsheet. The paper includes source code, tool screenshots and report input/output examples to allow the reader to build/develop a light version of SEDS.
-------------------------
The presentation of this paper is scheduled on December 5th, 2012 Wednesday, 2:45:00 PM - 3:45:00 PM in Las Vegas, Nevada (check updates here: http://www.cmg.org/conference/cmg2012/ )
-------------------------

THAT IS MY SECOND CMG'12 PAPER. THE FIRST ONE ANNOUNCED HERE:

AIX frame and LPAR level Capacity Planning. User Case for Online Banking Application

Tuesday, July 31, 2012

AIX frame and LPAR level Capacity Planning. User Case for Online Banking Application - my new CMG'12 paper

    I have just received acceptance notifications for the two new papers I wrote and submitted for this year's CMG'12 conference.
    Below is the abstract of the 1st one, which is based on a successful project I had this year.
    AIX frame and LPAR level Capacity Planning. User Case for Online Banking Application
    The paper shares some challenges the Online Banking Capacity Management team faced and overcame during the Solaris-to-AIX migration. A raw capacity estimation model was built to estimate AIX frame capacity needs. The capacity planning process was adjusted to the virtualized environment. The essential system, middleware and database metrics to monitor capacity were identified; business-driver-correlated forecast reports were built to proactively tune entitlements; IT-Control Charts were created to establish dynamic thresholds for physical processor and I/O usage. A Capacity Council was established.

    The presentation of this paper is scheduled on December 5th, 2012 Wednesday, 9:15:00 AM - 10:15:00 AM in Las Vegas, Nevada (check updates here: http://www.cmg.org/conference/cmg2012/ )
    _____________________________________
    The 2nd paper information is on the next post:
    SEDS-Lite: Using Open Source Tools (R, BIRT, MySQL) to Report and Analyze Performance Data



Thursday, July 12, 2012

Just submitted CMG'12 papers abstracts: Very preliminary analysis

Abstracts are published anonymously here: http://www.cmg.org/cgi-bin/abstract_view.pl 
Apparently, one of the papers was inspired by me:

Time-Series: Forecasting + Regression: “And” or “Or”?
At CMG’11, I had a fascinating discussion with Dr. I.Trubin. We talked about Uncertainty, Second Law of Thermodynamics, and other high matters in relation to IT. That discussion prompted this paper. We propose a method to get better predictions when we have a forecast of independent variable and a regression. It works for any scenarios where performance can be linked with business metrics. A real-world example is worked through that demonstrates how this technique works to improve the performance metric prediction and highlight trends that would have been overlooked otherwise.
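As a toy illustration of that idea (my own sketch, not the paper's actual method): fit a simple regression of a performance metric against a business driver, then apply it to a forecast value of the driver to predict the metric.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = b0 + b1*x (closed form)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b1 = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
          / sum((x - mx) ** 2 for x in xs))
    return my - b1 * mx, b1

# Hypothetical history: business driver (transactions/sec) vs. CPU%
driver = [100, 200, 300, 400]
cpu = [20, 30, 40, 50]
b0, b1 = fit_line(driver, cpu)

# Forecast the driver for the next period, then translate it into CPU%
driver_forecast = 500          # hypothetical business forecast
cpu_prediction = b0 + b1 * driver_forecast
```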
I guess that relates to my other posting about another paper that uses "entropy":
Quantifying Imbalance in Computer Systems: CMG'11 Trip Report, Part 2
The following are abstracts of some other papers from the list that could potentially relate to the main topics of this blog. I cannot wait to read them!
Methods for Identifying Anomalous Server Behavior
Identifying anomalous server behavior in large server farms is often overlooked for a variety of reasons. The anomalous behavior does not breach alerting thresholds, or perhaps the behavior is subtle and is simply missed. Whatever the case, it is important to identify such behavior before it becomes more severe. In this paper we discuss methods of identifying server behavior that is anomalous or otherwise uncharacteristic. Methods include statistical techniques such as multidimensional scaling, and machine learning methods such as isolation forests and self-organizing maps.

Software Performance Antipatterns for Identifying and Correcting Performance Problems
Performance antipatterns document common software performance problems as well as their solutions. These problems are often introduced during the architectural or design phases of software development, but not detected until later in testing or deployment. Solutions usually require software changes as opposed to system tuning changes. This tutorial covers five performance antipatterns and gives examples to illustrate them. These antipatterns will help developers and performance engineers avoid common performance problems.


Introduction to Wavelets and their Application for Computer Performance Trend and Anomaly Detection
In this paper I will present a technique to identify trends and anomalies in Performance data using wavelets. I will answer the following questions: Why use Wavelets? What are Wavelets? How do I use them?

Application Invariants: Finding constants amidst all the change
This paper presents a method for deriving and utilizing Application Invariants. An Application Invariant is a metric that quantifies the behavior or performance of an application in such a way that its value is immune to changes in workload volume. Several sample Application Invariants are developed and presented. One of the primary benefits of an Application Invariant is that it provides a simple (flat) shape that can readily be used to track changes in application performance or behavior in an automated manner.
A couple of other papers of obvious interest for this blog can be found there... I will post them here later.

All in all, based on a first glance, it looks like this year's CMG conference (http://www.cmg.org/) will be a great success.