Last Thursday we had a very good Southern Computer Measurement Group meeting with 16 attendees in Richmond, VA, where I presented material on how to use R, BIRT, MySQL and Excel to analyze and report systems performance data, using some real Unix server CPU utilization data for control charting as an example.
The agenda is still on the SCMG website, and my presentation slides are now published and linked there:
SEDS-Lite: Using Open Source Tools (R, BIRT and MySQL) to Report and Analyze Performance Data
(slides).
This blog relates to experiences in the Systems Capacity and Availability areas, focusing on statistical filtering, pattern recognition, and BI analysis and reporting techniques (SPC, APC, MASF, 6-SIGMA, SEDS/SETDS and others).
Wednesday, November 9, 2011
SEDS-Lite: Using Open Source Tools (R, BIRT and MySQL) to Report and Analyze Performance Data
He started in 1979 as an IBM/370 systems engineer. In 1986 he received his PhD in Robotics at St. Petersburg Technical University (Russia) and then worked as a professor teaching CAD/CAM and Robotics for 12 years. He has published 30+ papers and given several conference presentations in the Robotics and Artificial Intelligence fields. In 1999 he moved to the US and worked at Capital One bank as a Capacity Planner. His first CMG.org paper was written and presented in 2001. The next one, "Exception Detection System Based on MASF Technique," won a Best Paper award at CMG'02 and was presented at UKCMG'03 in Oxford, England. He has given other technical presentations at IBM z/Series Expo, SPEC.org, and Southern and Central Europe CMG, and has run several workshops covering his original method of Anomaly and Change Point Detection (Perfomalist.com). He is the author of the "Performance Anomaly Detection" class (at CMG.com). He worked for 2 years as the Capacity team lead for IBM, for SunTrust Bank for 3 years, and then at IBM for 3 years as a Sr. IT Architect. He now works for Capital One bank as an IT Manager in Cloud Engineering, and since 2015 he has been a member of the CMG.org Board of Directors. He runs the YouTube channel iTrubin.
Tuesday, October 11, 2011
My Southern CMG Presentation in Richmond Is About Open Source Tools for Capacity Management
I have been invited to give my new presentation at the 2011 Fall SCMG Meeting. See the agenda here.
My presentation will actually be a compilation of some of my recent posts on this blog:
- UCL=LCL : How many standard deviations do we use for Control Charting? Use ZERO!
- BIRT based Control Chart
- One Example of BIRT Data Cubes Usage for Performance Data Analysis
- How To Build IT-Control Chart - Use the Excel Pivot Table!
- Power of Control Charts and IT-Chart Concept (Part 1)
- Building IT-Control Chart by BIRT against Data from the MySQL Database
- EV-Control Chart
So please plan to attend! (Registration is here.)
Labels:
SCMG, CMG
Monday, October 10, 2011
Is Anomaly Detection Similar to Exception Detection? Apply SEDS for Information Security!
Sometimes I call my "Exception Detection" "Anomaly Detection." In some cases performance degradation could be caused by a parasite program (like a badly written data collection agent), an incompetent user (submitting a badly written ad-hoc database query), or even a cyber attack (a denial-of-service attack, DoS, definitely degrades performance to the point of not performing at all, doesn't it?).
So in my opinion they are similar, and the Exception Detection methodology I am offering, based on the MASF technique, can be applied to the broader field of Information Security. And vice versa! Some intrusion detection techniques could be useful for automatic detection of performance issues!
I have done a little Google research on that and found a few interesting approaches. Here is one of them:
See the abstract page for the dissertation written by Steven Gianvecchio:
Application of information theory and statistical learning to anomaly detection.
So the question is: could that information theory (entropy analysis) be applied to performance exception detection?
Friday, October 7, 2011
EV-Control Chart
I introduced the EV meta-metric in 2001 as a measure of anomaly severity. EV stands for Exception Value, and more explanation of the idea can be found here: The Exception Value Concept to Measure Magnitude of Systems Behavior Anomalies
Basically it is the difference (integral) between the actual data and the control limits. So far I have used EV data mostly to filter out real issues or for automatic recognition of hidden trends. For instance, in my CMG'08 paper "Exception Based Modeling and Forecasting" I plotted that metric using Excel to explain how it could be used to recognize the starting point of a new trend. Here is the picture from that paper, where EV is called "Extra Volume" and, for the particular parent metric (CPU utilization), is named ExtraCPUtime:
[Figure: The EV meta-metric first chart]
But just plotting that meta-metric and/or its two components (EV+ and EV-) over time gives a valuable picture of system behavior. If the system is stable, that chart should be boring, showing near-zero values all the time. So using that chart it would be very easy (I believe even easier than with MASF Control Charts) to recognize an unusual and statistically significant increase or decrease in the actual data at a very early stage (Early Warning!).
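To make the idea concrete, here is a minimal SQL sketch of how EV+ and EV- could be computed per day; the table and column names are hypothetical (any table holding the actual values together with the control limits would do), not the exact schema used in the posts below:

```sql
-- Hypothetical table: ControlChartData(dt DATETIME, actual DOUBLE, UCL DOUBLE, LCL DOUBLE)
-- EV+ accumulates how far the actual data runs above the upper control limit;
-- EV- accumulates how far it drops below the lower control limit; EV is their difference.
SELECT DATE(dt)                       AS day,
       SUM(GREATEST(actual - UCL, 0)) AS EV_plus,
       SUM(GREATEST(LCL - actual, 0)) AS EV_minus,
       SUM(GREATEST(actual - UCL, 0)) - SUM(GREATEST(LCL - actual, 0)) AS EV
FROM ControlChartData
GROUP BY DATE(dt)
ORDER BY day;
```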
Here is an example of that EV-chart against the same sample data used in a few previous posts:
1. Excel example:
[Figure: IT-Control chart vs. EV-Chart]
Here are the BIRT screenshots that illustrate how that is built:
A. An additional query to calculate EV, written directly in an additional BIRT Data Set object called "Data set for EV Chart":
[Figure: SQL query to calculate the EV metric from the data kept in the MySQL table]
B. Then an additional bar-chart object is added to the report and bound to that new "Data set for EV Chart":
The resulting report is already shown here.
Labels:
"control chart",
BIRT,
Capacity Management,
Capacity Planning,
CMG,
IT-Chart,
IT-control chart,
Performance management,
SEDS-lite,
SPC,
Threshold
Tuesday, October 4, 2011
Building IT-Control Chart by BIRT against Data from the MySQL Database
This is just another way to build an IT-Control chart, assuming the raw data are in a real database like MySQL. In this case some SQL scripting is used.
1. The raw data is hourly CPU utilization, actually the same as in the previous posts: BIRT based Control Chart and One Example of BIRT Data Cubes Usage for Performance Data Analysis (see the raw data picture here).
2. That raw data needs to be uploaded to a table (CPUutil) in the MySQL schema (ServerMetric) using the following script (sqlScriptToUploadCSVforSEDS.sql):
The uploaded data is seen at the bottom of the picture.
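Since the screenshot with the script is not reproduced here, the following is only a rough sketch of what such an upload script might look like; the schema and table names (ServerMetric, CPUutil) come from the post, while the column layout of the CSV is my assumption:

```sql
-- Schema/table names from the post: ServerMetric.CPUutil
-- The (date, hour, utilization) column layout is an assumption for illustration.
CREATE SCHEMA IF NOT EXISTS ServerMetric;

CREATE TABLE IF NOT EXISTS ServerMetric.CPUutil (
    metricDate  DATE,
    metricHour  TINYINT,    -- 0..23
    utilization DOUBLE      -- hourly CPU utilization, %
);

-- Load the date/hour stamped CSV, skipping the header line
LOAD DATA LOCAL INFILE 'cpu_util.csv'
INTO TABLE ServerMetric.CPUutil
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES
(metricDate, metricHour, utilization);
```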
3. Then the output (result) data (the ActualVsHistoric table) is built using the following script (sqlScriptToControlChartforSEDS.sql):
A fragment of the result data is also seen at the bottom of the picture. Everything is ready for building the IT-Control Chart, and the data is actually the same as used in BIRT based Control Chart, so the result should be the same as well. Below is a more detailed explanation of how that was done.
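Again only as a hedged sketch of the idea (the real sqlScriptToControlChartforSEDS.sql is in the screenshot, not here): the base-line statistics are grouped by hour of the week and joined with the last (actual) week, using the assumed column names from the upload sketch above. Three standard deviations are used for the limits purely as an illustration; the "UCL=LCL... Use ZERO!" post discusses other choices.

```sql
-- Build the ActualVsHistoric table: base-line mean/stdev and MASF-style limits
-- per weekhour, with the actual (last) week overlaid for charting in BIRT.
CREATE TABLE ServerMetric.ActualVsHistoric AS
SELECT b.weekhour,
       b.meanCPU,
       b.meanCPU + 3 * b.sdCPU              AS UCL,
       GREATEST(b.meanCPU - 3 * b.sdCPU, 0) AS LCL,
       a.utilization                        AS actual
FROM (
    SELECT (WEEKDAY(metricDate) * 24 + metricHour) AS weekhour,
           AVG(utilization)    AS meanCPU,
           STDDEV(utilization) AS sdCPU
    FROM ServerMetric.CPUutil
    WHERE metricDate <= '2011-04-02'              -- base-line (reference) period
    GROUP BY weekhour
) b
LEFT JOIN (
    SELECT (WEEKDAY(metricDate) * 24 + metricHour) AS weekhour,
           utilization
    FROM ServerMetric.CPUutil
    WHERE metricDate > '2011-04-02'               -- actual (last) week
) a ON a.weekhour = b.weekhour
ORDER BY b.weekhour;
```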
6. A nice thing is that in BIRT you can specify report parameters, which can then become part of any constants, including for filtering (to change a baseline or to provide server or metric names). Finally, the report should be run to get the following result, which is almost identical to the one built for the BIRT based Control Chart post:
Labels:
BIRT,
Control Chart,
IT-Chart,
IT-control chart,
MySQL,
SQL
Thursday, September 29, 2011
Power of Control Charts and IT-Chart Concept (Part 1)
This is a video presentation about Control Charts. It is based on the workshop I have already run a few times. It shows how to read and use Control Charts for reporting on and analyzing IT systems performance (e.g. servers, applications). My original IT-(Control) Chart concept within SEDS (Statistical Exception Detection System) is also presented.
Part 2 will be about how to build the control chart using R, SAS, BIRT and just Excel.
If anybody is interested, I would be happy to conduct this workshop again, remotely via the Internet or in person. Just put a request or a comment here.
UPDATE: See the version of this presentation with the Russian narration:
Labels:
"control chart" SPC R,
Control Chart,
IT-Chart,
IT-control chart,
MASF,
Near-Real-Time IT Control Charts,
SEDS,
workshop
Friday, September 23, 2011
How To Build IT-Control Chart - Use the Excel Pivot Table!
Continuing the topic of the previous post "One Example of BIRT Data Cubes Usage for Performance Data Analysis," I am showing here how to transform raw data into a "SEDS DB" format suitable for IT-Control Chart building or for exception detection. Based on the SEDS-Lite introduction published on this blog, it is the "...building data for charting/detecting" task seen in the picture:
But in this case it is a strictly manual process (unless someone wants to use VBA to automate it within MS Excel...) and requires basically the same approach as Data Cube/CrossTab usage in BIRT; in MS Excel it is called a "PivotTable and PivotChart report," listed under the "Data" menu item.
Below are a few screenshots that could help someone who is a bit familiar with EXCEL to understand how to build IT-Control Charts in order to analyze performance data in SEDS terms.
The input data is the same as in the previous post – just a date/hour stamped system utilization metric (link to it). Additionally, calculated variables were added, including Weekday (using the Excel WEEKDAY() function) and weekhour, as seen in the next picture:
Then the pivot table was built, as shown in the next screenshot, against the raw data plus the calculated weekhour field, which is specified in the "row" section of the PivotTable Layout Wizard (it is a bit similar to the CrossTab object in BIRT; indeed, the Excel Pivot Table is another way to work with Data Cubes too!):
Then three other columns were added right next to the pivot table to compare Actual vs. Base-line and to calculate the Control limits (UCL and LCL). To do that, the "CPU util. Actual" data were referenced from the raw /CPUdata/ sheet, where the last week of data is considered Actual. The control limit calculation was done by the usual spreadsheet formula, and the picture shows that formula for UCL.
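For reference, a sketch of the usual MASF-style limit calculation for each weekhour (the number of standard deviations k is a choice; the "UCL=LCL... Use ZERO!" post mentioned above discusses other options):

UCL(weekhour) = BaselineMean(weekhour) + k × BaselineStdev(weekhour)
LCL(weekhour) = BaselineMean(weekhour) − k × BaselineStdev(weekhour)

with k commonly set to 3; in the spreadsheet this is just a formula referencing the pivot table's aggregated base-line columns.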
The last step was to build a chart against the data range, which includes the pivot table and those three additional fields. See the resulting IT-Control Chart in the final picture:
Do you see where exceptions (anomalies) happened there?
Note that this is an IT-Control chart where the last day with actual data is at the very right, the last 24 hours on Saturday. So that report, made with Excel or BIRT, is good to run once a week (e.g. on Sundays before work hours) to get all of last week's exceptions. To be more dynamic, this report should be modified a bit (by adding a "refreshing" border) to run daily, so a minor exception that first happened on Thursday could be captured at least by Friday morning, and one could take some proactive measures to avoid the overutilization issue the chart shows for Friday and especially Saturday. The most dynamic way is to run it hourly (Excel is not good for that - use BIRT!) to be able to react to the first exception within the next few hours! See a live example of how that is supposed to work here: http://youtu.be/NTOODZAccvk or here: http://youtu.be/cQ4bk1HNuRk
[Figure: /CPUdata/ sheet]
[Figure: /PivotForITcontrolChart/ sheet]
By the way, I plan to prepare another workshop-type presentation to demonstrate the technique discussed in my recent posts and also to share actual reports, maybe during some CMG.org events in the near future...
Thursday, September 22, 2011
One Example of BIRT Data Cubes Usage for Performance Data Analysis
I got a comment on my previous post "BIRT based Control Chart" with questions about how the data are actually prepared for Control Charting in BIRT. Addressing this request, I'd like to share how I use a BIRT Cube to populate data into a CrossTab object, which was then used for building a control chart.
As I have already explained in my CMG paper (see IT-Control Chart), the data that describes the IT-Control Chart (or MASF control chart) actually has 3 dimensions (2 time dimensions and one measurement - the metric, as seen in the picture at the left). And the control chart is just a projection onto a 2D cut with the actual (current or last) data overlaid. So, naturally, the OLAP Cubes data model (Data Cubes) is suitable for grouping and summarizing time-stamped data into a crosstable for further analysis, including building a control chart. In past SEDS implementations I did not use the Cubes approach and had to transform time-stamped data for control charting using basic SAS steps and procs. Now I find that Data Cubes usage is somewhat simpler and in some cases does not require any programming at all if modern BI tools (such as BIRT) are used.
Below are some screenshots with comments that illustrate the process of building the IT-Control Chart using a BIRT Cube.
The data source (input data) is a table with a date/hour stamped single metric with at least 4 months of history (in this case, the CPU utilization of some Unix box). It could be in any database format; in this particular example it is the following CSV file:
The result (in the form of a BIRT report designer preview) is in the following picture:
(where UCL is the Upper Control Limit; LCL is not included for simplicity)
Before building the Cube, the three following data sets were built using the BIRT "Data Explorer":
(1) The Reference set or base-line (just "Data Set" in the picture), based on the input raw data with some filtering and computed columns (weekday and weekhour), and (2) the Actual data set, which is the same but with a different filter: (raw["date"] Greater "2011-04-02")
(3) To combine both data sets for comparing base-line vs. actual, "Data Set1" is built as a "Joint Data Set" in the following BIRT Query Builder:
Then the Data Cube was built in the BIRT Data Cube Builder with the structure shown on the following screen:
Note that only one dimension is used here – weekhour, as that is what is needed for the Cross Table report below.
The next step is building the report, starting with a Cross Table (which is picked as an object from the BIRT Report designer "Palette"):
The picture above also shows which fields are chosen from the Cube for the Cross Table.
The final step is dropping the "Chart" object from the "Palette" and adding the UCL calculation using the Expression Builder for an additional Value (Y) Series:
To see the result, one just needs to run the report or to use the "preview" tab in the report designer window:
FINAL COMMENTS
- The BIRT report package can be exported and submitted to run under any portal (e.g. IBM TCR).
- It makes sense to specify and use additional Cube dimensions, such as server name and/or metric name.
- The report can be designed in BIRT with some parameters. For example, a good idea is to use the server name as a report parameter.
- To follow the "SEDS" idea and have the reporting process based on exceptions, a preliminary exception detection step is needed; it can again be done within a BIRT report using an SQL script similar to the one published in one of the previous posts:
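Purely as an illustration of that preliminary step (not the actual script referenced above), an exception-detecting query could simply flag the weekhours where the actual data breaks the control limits, assuming the ActualVsHistoric layout sketched in the MySQL post above:

```sql
-- Hypothetical preliminary exception detection step: list the weekhours of the
-- actual week that fall outside the base-line control limits.
SELECT weekhour,
       actual,
       UCL,
       LCL,
       CASE
           WHEN actual > UCL THEN actual - UCL   -- positive exception (EV+)
           WHEN actual < LCL THEN actual - LCL   -- negative exception (EV-)
           ELSE 0
       END AS deviation
FROM ServerMetric.ActualVsHistoric
WHERE actual > UCL OR actual < LCL
ORDER BY weekhour;
```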
Saturday, September 17, 2011
BIRT based Control Chart
Recently, implementing a solution using IBM TCR, I noticed that one of the default reports in TCR/BIRT is a Control Chart in the classical (SPC) version. It looks like that was one of the requirements for the ability to build consistent reports using TCR/BIRT, as written here: Tivoli Common Reporting Enablement Guide
So I built a few TCR reports with a control chart against Tivoli performance data, and that was somewhat useful.
I believe the IT-Control Chart (see my post about that type of control chart here) would give much more value for analyzing time-stamped historical data. Is it possible to build one using BIRT?
BIRT is a free, open source BI tool (it can be downloaded from here). I downloaded and installed it on my laptop and built a few reports for one of my customers. One of them was to filter out the exceptionally "bad" objects (servers) using the EV criteria (see the linked post here).
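As a hedged sketch of what such an EV-based filter could look like in SQL (the actual report was built in BIRT; the table here is the hypothetical ActualVsHistoric layout from the MySQL post above, extended with a serverName column):

```sql
-- Rank servers by total Exception Value (EV) over the actual period and keep
-- only the worst offenders for detailed IT-Control Chart reporting.
SELECT serverName,
       SUM(GREATEST(actual - UCL, 0) + GREATEST(LCL - actual, 0)) AS totalEV
FROM ServerMetric.ActualVsHistoric
GROUP BY serverName
HAVING totalEV > 0
ORDER BY totalEV DESC
LIMIT 20;
```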
Then I built the IT-Control chart using BIRT. Below is the result:
Yes, it is possible, with some limitations I have noticed in the current version of the BIRT report designer. You can see them if you compare this with other IT-Control Charts I have built using R (see an example here), SAS (example here) or Excel (here).
Anyway, can you see how that chart reports proactively on an issue?
So it is another way (without programming as in R or SAS, and without manual work as in Excel) to build IT-Control charts. After it is built, it can be submitted to TCR (or other reporting portals) to be viewed/run on the web.
Tuesday, August 23, 2011
CMG'11 papers about non-statistical ways to capture outliers/anomalies and trends
From the CMG'11 Abstract report:
Monitoring Performance QoS using Outliers
Eugene Margulis, Telus
Commonly used Performance Metrics often measure technical parameters that the end user neither knows nor cares about. The statistical nature of these metrics assumes a known underlying distribution when in reality such distributions are also unknown. We propose a QoS metric that is based on counting the outliers - events when the user is clearly "dis"-satisfied based on his/her expectation at the moment. We use outliers to track long term trends and changes in performance of individual transactions as well as to track system-wide freeze events that indicate system-wide resource exhaustion.
BTW, I have already tried to "count" outliers; see my 2005 paper listed here: http://itrubin.blogspot.com/2007/06/system-management-by-exception.html
I used the SEDS database to count and analyze exceptions:
Introduction to Wavelets and their Application for Computer Performance Trend and Anomaly Detection:
Introduction to wavelets and their application for computer performance analysis. Wavelets are a set of waveforms that can be used to match a signal or noise. There are various families of wavelets, unlike Fourier Analysis. Wavelets are stretched (scaled) in time AND frequency and correlated with the signal. The correlation in time and frequency is displayed as a heat map. The color is the intensity, the X axis is the time and the Y axis is the frequency. The heat map shows the time the trend or anomaly starts and when it repeats (frequency).
CMG'11 Abstract Report shows my virtual presence
The CMG'11 agenda is online now. The Abstract report shows the following papers related to this blog's subject:
1. A Real-World Application of Dynamic Thresholds for Performance Management by Jonathan B Gladstone
He published some material on this blog that most likely is included in his CMG paper:
Feb 17, 2011
Jonathan Gladstone has worked with a team to implement pro-active Mainframe CPU usage monitoring, basing his design partly on presentations and conversations with Igor Trubin (currently of IBM) and Boris Ginis (of BMC Software).
Here is the abstract from the Abstract report:
The author describes a real application of dynamic thresholds as developed at BMO Financial Group. The case shown uses performance management data from IBM mainframes, but the method would work equally well for detecting deviations from normal patterns in any time-series data including resource utilization in distributed systems, storage, networks or even in non-IT applications such as traffic or health management. This owes much to previous work by well-regarded CMG participants Igor Trubin (currently at IBM), Boris Zibitsker (BEZ Systems) and Boris Ginis (BMC Software).
2. Automatic Daily Monitoring of Continuous Processes in Theory and Practice by Frank Bereznay
Monitoring large numbers of processes for potential issues before they become problematic can be time consuming and resource intensive. A number of statistical methods have been used to identify change due to a discernable cause and separate it from the fluctuations that are part of normal activity. This session provides a case study of creating a system to track and report these types of changes. Determining the best level of data summarization, control limits, and charting options will be examined as well as all of the SAS code needed to implement the process and extend its functionality.
I believe that paper is based on the presentation he gave at Southern CA CMG this year, which I have already mentioned in the following post: "The Master of MASF"
I have not written a paper this year (the first time in the last 10 years!), but I am glad that the technology I have been promoting for years is still presented at this year's CMG conference, with some references to my work!
Tuesday, August 16, 2011
"The Master of MASF"
The following paper was recently presented at the Southern California CMG (SCCMG):
Automatic Daily Monitoring of Continuous Processes: Theory and Practice
by MP Welch (Merrill Consultants) and Frank Bereznay (IBM)
That is another great paper promoting the MASF approach to system performance monitoring, which is actually the main subject of this blog. Most likely that paper will be presented again and published at the international CMG'11 conference.
I am very proud that I was called "The Master of MASF" at that presentation! Thank you, Frank!
Here is the link to the presentation file I found via Google, which has the following pages referencing my work and also this blog:
[PPT] Automatic Daily Monitoring of Continuous Processes Theory and Practice
The paper also has good references to Ron Kaminski's and Dima Seliverstov's work. Both authors, as well as Frank Bereznay, have already been mentioned on this blog:
See the following posts for Frank Bereznay's work:
Aug 13, 2007
2006 Best Paper Award paper: Did Something Change? Using Statistical Techniques to Interpret Service and Resource Metrics. Frank M. Bereznay, Kaiser Permanente LINK: http://cmg.org/conference/cmg2006/awards/6139.pdf ...
Nov 05, 2010
Brian Barnett, Perry Gibson, and Frank Bereznay. That paper has a deep discussion about normality of performance data, showing examples where MASF approach does not work. The Survival Analysis that does not require any knowledge of how...
For Ron Kaminski's work:
Jan 24, 2009
...and Ron Kaminski, who expressed some interest in my EV algorithm to capture recent bad trends, as that solves some problems of workload pathology recognition on which he has been working recently. So you want to manage your z-Series MIPS?
And for Dima Seliverstov's work:
Dec 10, 2010
At the CMG'10 conference I met BMC Software specialist Dima Seliverstov, and he mentioned referencing my 1st CMG'01 paper in his CMG presentation (scheduled to be presented TODAY!). I looked at his paper "Application of Stock Market...
Wednesday, May 25, 2011
My CMG publication statistics from Microsoft
Googling one of my CMG papers (for the purpose of an IBM certification, which I am doing now while sitting on the "bench" between projects), I ran across an interesting new site called "Microsoft Academic Search" that references my latest CMG papers with some statistics. I saw similar information in other specialized search engines (e.g. CiteSeerX or DBLP), but this one looks the most accurate (I was not aware of this site until now; they found me themselves somehow - CMG?); it provides actual citations and has nice "Silverlight" charting. I like charts, so I cannot resist copying the snapshot here:
Labels:
citations paper