Monday, December 19, 2011

Sample Application to Capture Counters

As testers we often need to capture a lot of counters during our testing. This becomes especially critical when we are performing any sort of performance-oriented testing. There are quite a few performance load tools, but nothing beats custom-built code. I will share that in my next post; for this post, let us assume we have a freeware tool to load our system.

Now we would like to monitor the performance of a few machines [I had to monitor 3 service machines, 2 web app servers and 2 DB servers]. On each machine we would like to monitor different performance counters. Tools like the VSTS performance monitoring features can do the same, though I have never been able to connect and gather this from all machines, and the VSTS performance edition is not free.
Fortunately, Microsoft gives us the code to get the required details: the ultimate performance counters.

The steps are definitely not rocket science.
Step 1
Add all machines that you would want to include in your perf test. I have added all of these in my app.config file. Then I launch a simple Windows Forms application and populate all these machines as check boxes.

Step 2 - Gather the required Category, Instance and Counter Names
a. Add all category names to a list and pass that list to the next step
PerformanceCounterCategory[] categories = PerformanceCounterCategory.GetCategories(PMmachineName);
foreach (PerformanceCounterCategory cat in categories)
{ categoryNames.Add(cat.CategoryName); }
b. For each category name, get the instance names and save them again as a list of strings
PerformanceCounterCategory category = new PerformanceCounterCategory()
{
MachineName = PMmachineName,
CategoryName = PMcategoryName
};
List<string> instanceNames = category.GetInstanceNames().ToList();
c. For the supplied category name and instance name, display the counters and select whichever are required for performance testing
PerformanceCounterCategory pcc = new PerformanceCounterCategory()
{
MachineName = PMmachineName,
CategoryName = PMcategoryName,
};
PerformanceCounter[] counters;
counters = pcc.GetCounters(PMinstanceName);

List<string> counterNames = new List<string>();
foreach (PerformanceCounter counter in counters)
{ counterNames.Add(counter.CounterName); }

Step 3
Schedule a refresh rate to control how often you want to capture the results.

Step 4
For each of the selected counters, spawn a new thread with the simple code mentioned below. [A new thread is required because if all machines are polled on the same thread, the .NextValue() call takes ages while switching between machines. With one dedicated thread per counter, the results come in at lightning speed.]
PerformanceCounter performanceCounter = new PerformanceCounter();
performanceCounter.CategoryName = categoryName;
performanceCounter.CounterName = counterName;
performanceCounter.InstanceName = instanceName;
performanceCounter.MachineName = machineName;
performanceCounter.NextValue(); // first call primes rate-based counters and returns 0
float perfCounterVal = performanceCounter.NextValue();
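The per-counter thread from Step 4 can be sketched generically. My tool is C#/.NET, but the pattern is language-agnostic, so here it is in Python with the counter read injected as a plain function - in the real tool that function would wrap performanceCounter.NextValue(). The CounterSampler name and structure are my own illustration:

```python
import threading
import time

class CounterSampler:
    """Polls one counter on its own thread at a fixed refresh rate."""

    def __init__(self, name, read_value, refresh_secs=1.0):
        self.name = name
        self._read = read_value          # e.g. a wrapper around NextValue()
        self._refresh = refresh_secs
        self._stop = threading.Event()
        self.samples = []                # (timestamp, value) pairs
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # Sample until asked to stop; Event.wait doubles as the refresh sleep.
        while not self._stop.is_set():
            self.samples.append((time.time(), self._read()))
            self._stop.wait(self._refresh)

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()
```

Because each counter owns its thread, a slow or unreachable machine only delays its own samples instead of stalling the whole sweep.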

Step 5
Capture this value in the parent thread, wait for the refresh interval, and once it is reached append perfCounterVal to another list and store the results in a .csv file.
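Flushing the collected values to a .csv, as in Step 5, might look like this minimal Python sketch (the file layout and column names are my own choices):

```python
import csv

def write_samples(path, samples):
    """Write collected samples to a .csv file.

    samples: list of (timestamp, machine, counter, value) tuples.
    """
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["timestamp", "machine", "counter", "value"])
        writer.writerows(samples)
```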

Step 6
Add any logic you want depending on the perfCounterVal. We could spawn another small thread to send out an email with the values of the last x minutes from all machines if a threshold is exceeded.
We could also add logic to control our perf testing depending on threshold values.

Once we have the counter values, only our imagination limits what we can do with them.

It would have been easier to share my code, but if anybody really wants it you can drop me an email and I will be more than glad to help. Perhaps with this approach you will even find a better method.

Tuesday, May 24, 2011

Can elevators teach us how to write good code?

Most of you might have taken the lift from Westlake Station to the Onvia office, and I wanted to correlate the same to performance.

We usually find ourselves waiting for the lift to come down when going up, and to come up when going down!! HOW STRANGE!!
Would it not be simple to always make lift 1 come down after delivering a passenger? Is that all it needs? At least to begin with, the lift can always be made to go down for passengers entering from the Seattle side.

This applies to the lift that takes you up from the platform to the mezzanine [from bus level to the intermediate level] and to another lift from the mezzanine to the platform. But there is one and only one lift that takes passengers from the surface to the mezzanine, and it is shared by both sides. How do we optimize it? Well, in the mornings you make sure the lift always returns to the mezzanine, because that is where most people would use it, and during the evenings you make sure it always defaults to the surface. Aren't we helping the MOST passengers save time?

But that's me thinking like a developer: everything I develop is cool. From a test perspective, I would like to validate it, and that's where counters come in. If I could track the number of times users press the buttons against the actual times the lifts were used, it would be easy for me to validate the claims. No wonder as testers we need to be so sure of the requirement and ask for the appropriate counters.

Finally, is this a universal law that applies everywhere [make lifts go to the ground floor during the morning and to the top floors in the evening]? Not quite. Take the medical building in which Onvia resides on the 5th floor. Making a lift always go to a desired level there is a waste of energy, because there are enough lifts. We hardly save any time, so let's try optimizing something else...
[When there is enough memory or threads or resources, don't spend them optimizing time; rather, try saving ENERGY - electricity]

I hope this draws some correlation between lifts and the performance of our applications.

Golden Rule - there is no golden rule to solve all problems. Be innovative, think things through, and capture as many counters as possible INITIALLY to keep building and improving your systems.

Thursday, October 02, 2008

Google's Python Test Framework and Aditi's ATLAS
To start with, I am completely excited about the ATLAS framework. This was a dream for me when I was using Rational Robot, yet I could not accomplish it due to technical difficulties. My dream was to have complete automation capabilities using the normal manual-script methodology, i.e. "1-1": one line of code for every one line of manual writing. I faced a few practical impossibilities though:
1) "1-1" is not completely possible due to the various writing styles followed across teams. With some standardization it would be possible, but I needed to come up with a standard or a tool, which was too complicated at that juncture.
2) Manual scripts are easily written and understood in either Excel or Word. Excel is preferred over Word due to its column ability, yet reading from Excel was very unstable - and that is still common in any standard automation tool.
Due to only these two practical problems I could never move forward. Both problems could be solved if we used Excel, brought standardization into Excel, and made reading from Excel stable. That's when (an intelligent friend of mine) Dhiren's idea of XML was so striking: if the Excel was standardized, I could create XML out of it, and XML is always stable and far faster to read than Excel. And why am I excited? Because all of this is possible with ATLAS. I can envision ATLAS being more powerful than Rational Robot (I am no expert in QTP, so I can't comment), because Rational Robot still needs an automator, whereas with the ATLAS framework we are targeting writing the manual and automation scripts as one effort. To achieve this we need two levels of expertise: a Python expert, as the tool requires it, and a market-tool expert to help the tool grow.
To start with
Excel would be converted into a tool where manual scripts are entered. Each manual script will have the following:
a. The first row would contain the columns TestCaseID, Test Case Description and DataFarm (separated by an identifier). The TestCaseID can behave like a function returning values. Returning one value is pretty simple, by assigning it to the TestCaseID; if more values need to be returned, the fourth column can be used to instantiate multiple output variables. As Python needs no declaration for variables, this becomes very simple.
b. From row 2 onwards the actual test-case writing starts. Column 1 would be used for conditional statements or looping. If you are an expert in both automation and manual testing, you can clearly see where I am heading: in most cases we will not use too many if-else statements or loops, as looping is largely handled in the DataFarm. The DataFarm can be used both for accepting values and for hard-coding values like traditional data inputs. Column 2 will contain the execution step, column 3 the parameters (multiple parameters separated by some identifier), and then the verification. The last column can be used for a description.
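As an illustration only (the post defines no schema, so every element and attribute name here is hypothetical), one Excel row might serialize to XML along these lines:

```xml
<TestCase id="TC001" description="Login with a valid user">
  <Step keyword="EnterText" params="txtUser|admin" verify="" />
  <Step keyword="Click" params="btnLogin" verify="HomePageVisible" />
  <DataFarm source="LoginData" />
</TestCase>
```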
Well, that's it - the end of the story. Looks pretty straightforward, right? As a user who has tried it, I strongly believe this would be just like manual scripting, but in fact even simpler and more specific.
Now how do we make this work
1) Excel needs to be extended with macros to perform verification at each cell level. This could be a challenge, yet could be the most interesting part.
2) Defining of Keywords - Column 2. This is common in any Tool
http://msdn.microsoft.com/en-us/library/ms750574.aspx - For Windows.
I will look for such a site for web components; otherwise pick from any standard tool :)
3) Convert the same to xml format.
4) Use the Google Framework to execute.
5) Convert the Logs into user readable format. I am still working on this. If anybody is interested or has ideas please share across.
Lastly, this is the brain of my work, and I am completely excited about it. Here is a glimpse: no wonder ATLAS is going to be far superior to any automation tool. The tool can be used to create test cases automatically. I have heard that Rational Functional Tester does this, but I am not the expert. Secondly, it could not only generate test cases but also validate scenario coverage. I don't think there is any tool out there that does this - there are code-coverage tools, but why can't we use intelligence to generate scenario coverage? If there is anybody out there who can read my mind, I am sure they can understand my excitement. What looked like a dream now seems like reality. Thanks to GOOGLE.
I can help with tool expertise, and anybody who wants to join me is most welcome. If there are any Python-savvy gurus out there, your knowledge would be so helpful. I will keep everyone posted with more updates. This is an exciting phase :)

Friday, September 12, 2008

Is Testing in India considered like Software in the 70's and 80's?
During my last debate with a group of enthusiastic testers, I kept hearing the same complaint: "a tester's profile is considered a scum job". In a group of software engineers, there is always a vast gulf felt between developers and testers. Also, when speaking to potential newcomers to the field, it is very difficult to get buy-in to the testing domain. The reasons given are that there are inadequate challenges, growth is very limited, opportunities don't exist, or even worse, "a testing profile does not match my brilliance".
All I can do is laugh at the ignorance in those words. I still remember my uncle, who always used to tell me how much he regrets not entering the software field during his college days, having instead opted for civil engineering, a field only for the elite in those days. I strongly believe this is the same scenario currently in the Indian market. The vision of testers is still not fully known, and the effect of a quality tester goes unrecognized. Yet this feeling is confined to the lower layers, as top management has already realized the worth of smart testers.
When I look at testing, I am always amazed at the vastness of the field. From unit testing, module testing, integration testing, functional testing, system testing, regression testing, performance testing and security testing to usability testing, the field is vast. What is more interesting is that as testers we have had a taste of each cup, unlike in development, where your knowledge is confined to certain domains.
If testing is so vast, challenging and interesting, why has the market not realized it yet? The answer is simple: the flood gates have just opened, and people have only seen the first trickle of water. This was the same scenario as software in the 70's and 80's. The moment testers start specializing in domains, the flood gates will open wide. With the importance already being realized in countries like the US, UK and Germany, the scene of India getting into the race is not far in the future. With abundant potential we are not far away from seeing the next boom: "The Testing Boom".

The Security Depth is inversely proportional to the Weakest link in an n – Tier Architecture
The diagram below shows an n-tier architecture. To explain the data traversing between the layers, the diagram shows data travelling from the presentation layer to the business-logic layer to the data-access layer. Our minds usually see what they are trained to see.
http://en.wikipedia.org/wiki/Image:Overview_of_a_three-tier_application.png

From the above diagram it seems obvious that if security is heavy and 100% effective in the presentation layer, then no data can traverse to the next layer and hence the system is absolutely secure. Alternatively, if the database has heavy security protocols, then even though a request gets passed from the business layer, it gets restricted in the database layer. Hence the data traversal is serial.
Our minds are trained to think that the same serial model applies to security: data needs to pass through three doors, and hence "security at one door is good enough."
Alas, when it comes to security, the system behaves like 3 parallel doors, not 3 serial doors.
1) Weak presentation layer but strong business and database layers - let's assume the presentation layer allows a user to mask himself as Administrator; the user then passes through the remaining 2 doors as admin. E.g. a URL sending the userID with base64 "encryption": encode the admin user name with base64 and send it across to gain access. Pretty simple, but happening in most projects.
2) Weak business layer - most applications use IIS if Microsoft technology is involved. In IIS we usually use network authentication for sites, or we run the application pool as a standard user. In most cases this standard user is a super user, to let communication happen via web services. If a user is restricted in the presentation layer but masks himself at the application layer, this layer gives him complete access to the DB.
3) Weak DB layer - if no logic is employed in the DB or in sprocs to check authentication, and data is returned for any valid request, you can still sneak information out.
Thus when it comes to security it's not 3 serial doors but 3 parallel doors :)
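The base64 example in point 1 deserves a quick demonstration, because base64 is an encoding, not encryption - anyone can reverse it without any key. A minimal Python sketch (the user name is made up):

```python
import base64

# base64 is reversible by anyone - no secret key is involved,
# so it provides zero protection for identity claims in a URL.
token = base64.b64encode(b"Administrator").decode("ascii")
print(token)                                    # QWRtaW5pc3RyYXRvcg==
print(base64.b64decode(token).decode("ascii"))  # Administrator
```

An attacker who sees such a token in a request can decode it, swap in "Administrator", re-encode, and replay it.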

Note for the week: never trust a client-side request or any web request.
Cross Site Scripting
Cross-site scripting attacks (a form of content-injection attack) differ from many of the other attack methods covered in this article in that they affect the client side of the application (i.e. the user's browser). Cross-site scripting (XSS) occurs wherever a developer incorrectly allows a user to manipulate HTML output from the application - this may be in the result of a search query, or any other output where the user's input is displayed back to the user without any stripping of HTML content.
A simple example of XSS can be seen in the following URL:
http://server.example.com/browse.cfm?categoryID=1&name=Books
In this example the content of the 'name' parameter is displayed on the returned page. A user could submit the following request:
http://server.example.com/browse.cfm?categoryID=1&name=<h1>Books
If the characters < and > are not correctly stripped or escaped by this application, the "<h1>" would be returned within the page and parsed by the browser as valid HTML. A better example would be as follows:
http://server.example.com/browse.cfm?categoryID=1&name=<script>alert(document.cookie);</script>
In this case, we have managed to inject Javascript into the resulting page. The relevant cookie (if any) for this session would be displayed in a popup box upon submitting this request.
This can be abused in a number of ways, depending on the intentions of the attacker. A short piece of Javascript to submit a user's cookie to an arbitrary site could be placed into this URL. The request could then be hex-encoded and sent to another user, in the hope that they open the URL. Upon clicking the trusted link, the user's cookie would be submitted to the external site. If the original site relies on cookies alone for authentication, the user's account would be compromised. We will be covering cookies in more detail in part three of this series.
In most cases, XSS would only be attempted from a reputable or widely-used site, as a user is more likely to click on a long, encoded URL if the server domain name is trusted. This kind of attack does not allow for any access to the client beyond that of the affected domain (in the user's browser security settings).
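The standard defence on the server side is to escape user input before echoing it back. A minimal sketch in Python (html.escape here stands in for whatever encoding helper your platform provides; the function name is my own):

```python
import html

def render_search_result(name):
    # Escape &, <, > and quotes so user input reaches the page as
    # inert text rather than markup the browser will execute.
    return "<p>Results for: {}</p>".format(html.escape(name))

print(render_search_result("<script>alert(document.cookie);</script>"))
# <p>Results for: &lt;script&gt;alert(document.cookie);&lt;/script&gt;</p>
```

With this in place, the payloads from the example URLs above are displayed literally instead of being parsed as HTML.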
For more details on cross-site scripting and its potential for abuse, please refer to the CGISecurity XSS FAQ at
http://www.cgisecurity.com/articles/xss-faq.shtml.

Thursday, September 11, 2008

Software As A Service (SaaS)
SaaS is a new model of how software is delivered. SaaS refers to software that is accessed via a web browser and paid for on a subscription basis (monthly or yearly). Different from the traditional model, where a customer buys a license and assumes ownership of its maintenance and installation, SaaS presents significant advantages to the customer. SaaS is a faster and more cost-effective way to get implemented: there are no hardware, implementation or acquisition costs on the customer's side. It is the responsibility of the SaaS vendor (us) to manage and run the application with the utmost security, performance and reliability. Since customers pay a subscription, they have immediate access to new features and functionality. Unlike traditional software, where upgrades would happen once a year or once in 6 months (with the vendor coming to your office with a CD), the SaaS vendor continuously pushes new updates and fixes to the application, which are immediately accessible to the customer. This reduces the time it takes a customer to recognize value from the software. Since the software application is delivered as a service, it is important for the vendor to focus on customer service and experience. Because this is a subscription model, the vendor is judged on a month-to-month basis, and the pressure to innovate or risk losing business is greater. SaaS can be used by Windows, Linux or Mac users, providing true platform independence over the Internet.

“Thus, as everybody can see, we are providing a service over the Internet through TCP/IP. Hence security is the biggest threat.”