Thursday, October 02, 2008

Google's Python Test Framework and Aditi's ATLAS
To start with, I am completely excited about the ATLAS framework. This was a dream for me back when I was using Rational Robot, yet I could not accomplish it due to technical difficulties. My dream was to have complete automation capability using the normal manual-script methodology, i.e. "1-1": one line of automation code for every one line of manual writing. I faced a few practical impossibilities though.
1) "1-1" is not completely possible because of the varying writing styles followed across teams. If some standardization could be accomplished it would be possible, but I needed to come up with a standard or a tool, which was too complicated at that juncture.
2) Manual scripts are most easily written and understood in either Excel or Word. Excel is preferred over Word because of its columns, yet reading from Excel was very unstable, and it still is in any standard automation tool.
Because of just these two practical problems I could never move forward. Both problems could be solved if we used Excel, brought standardization into Excel, and made reading from Excel stable. That's when the XML idea from Dhiren (an intelligent friend of mine) was so striking: if the Excel sheet were standardized, I could generate an XML file out of it, and XML is stable and far faster to read than Excel.
And why am I excited? Because all of this is possible with ATLAS. I can envision ATLAS being more powerful than Rational Robot (I am no expert in QTP, so I can't comment there), because Rational Robot still needs an automator, whereas with the ATLAS framework we are targeting writing the manual script and the automation as one effort. To achieve this we need two levels of expertise: a Python expert, as the tool requires it, and a market-tool expert to help the tool grow.
To start with
Excel would be converted into a tool where the manual scripts are entered. A manual script will have the following structure:
a. The first row would contain the columns TestCaseID, Test Case Description and DataFarm (values separated by an identifier). The TestCaseID can behave like a function returning values. Returning one value is pretty simple: assign it to the TestCaseID. If more values need to be returned, a fourth column can be used to instantiate multiple output variables. Since Python needs no declarations for variables, this becomes very simple.
b. From row 2 onwards the actual test-case writing starts. Column 1 would be used for conditional statements or looping. If there is an expert in both automation and manual testing reading this, you will clearly see where I am heading: in most cases we will not need many if-else blocks or loops, since looping is largely handled in the DataFarm. The DataFarm can be used both for accepting values and for hard-coding values like traditional data inputs. Column 2 will contain the execution step, column 3 the parameters (multiple parameters separated by some identifier), and column 4 the verification. The last column can be used for a description.
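To make the layout above concrete, here is a minimal sketch in Python of how such a sheet might be represented and parsed. The column names, the sample keywords (OpenApp, TypeText, ClickButton) and the ";" parameter separator are my own illustrative assumptions, not anything defined by ATLAS.

```python
# Sketch of the spreadsheet layout described above.
# Row 1: test-case header; rows 2+: steps with
# [condition/loop, keyword, params (";"-separated), verification, description].
# All names here are assumptions for illustration only.

HEADER = ["TestCaseID", "Description", "DataFarm", "Outputs"]

sheet = [
    ["TC_Login_01", "Verify valid login", "users.csv", ""],
    ["", "OpenApp", r"C:\app\demo.exe", "window_open", "launch the app"],
    ["", "TypeText", "username;jdoe", "", "enter the user name"],
    ["", "ClickButton", "Login", "home_page_shown", "submit the form"],
]

def parse_sheet(rows):
    """Split the sheet into a header dict and a list of step dicts."""
    header = dict(zip(HEADER, rows[0]))
    steps = []
    for cond, keyword, params, verify, desc in rows[1:]:
        steps.append({
            "condition": cond,
            "keyword": keyword,
            "params": params.split(";") if params else [],
            "verify": verify,
            "description": desc,
        })
    return header, steps

header, steps = parse_sheet(sheet)
print(header["TestCaseID"])   # TC_Login_01
print(steps[1]["params"])     # ['username', 'jdoe']
```

Because the manual tester only fills in cells, the parsing layer like this one is what would give the "1-1" mapping between a manual line and an executable step.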
Well, that's it; that is the end of the story. Looks pretty straightforward, right? Speaking as a user, I strongly believe this would read just like any manual script, but in fact be even simpler and more specific.
Now, how do we make this work?
1) Excel needs macros incorporated to perform verification at each cell level. This could be a challenge, yet it could also be the most interesting part.
2) Defining the keywords for column 2. This is common in any tool.
http://msdn.microsoft.com/en-us/library/ms750574.aspx - for Windows components.
I will look for a similar site for web components; otherwise we can pick from any standard tool :)
3) Convert the same to XML format.
4) Use the Google Framework to execute.
5) Convert the logs into a user-readable format. I am still working on this; if anybody is interested or has ideas, please share.
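Steps 3 to 5 above could be sketched roughly as follows: serialize the parsed steps to XML, dispatch each keyword to a Python function, and collect a plain-text log. The XML tags, the keyword table, and the log format are all hypothetical placeholders of mine, not part of the Google framework or ATLAS.

```python
# Rough sketch of steps 3-5: steps -> XML, keyword dispatch, readable log.
# Tag names, keywords, and log format are illustrative assumptions only.
import xml.etree.ElementTree as ET

def steps_to_xml(test_id, steps):
    """Serialize a list of step dicts into a simple XML string."""
    root = ET.Element("testcase", id=test_id)
    for s in steps:
        step = ET.SubElement(root, "step", keyword=s["keyword"])
        for p in s["params"]:
            ET.SubElement(step, "param").text = p
    return ET.tostring(root, encoding="unicode")

# Keyword table: maps a spreadsheet keyword to a Python callable.
KEYWORDS = {
    "TypeText": lambda field, value: f"typed '{value}' into {field}",
    "ClickButton": lambda name: f"clicked {name}",
}

def run(steps, log):
    """Execute each step via the keyword table, appending readable log lines."""
    for s in steps:
        action = KEYWORDS.get(s["keyword"])
        if action is None:
            log.append(f"SKIP  {s['keyword']} (no keyword defined)")
            continue
        log.append(f"PASS  {action(*s['params'])}")

steps = [
    {"keyword": "TypeText", "params": ["username", "jdoe"]},
    {"keyword": "ClickButton", "params": ["Login"]},
]
xml_out = steps_to_xml("TC_Login_01", steps)
log = []
run(steps, log)
print(xml_out)
print("\n".join(log))
```

The point of the XML layer is exactly what the post argues: once the sheet is standardized, the XML is trivial to generate and far more stable to consume than reading Excel cells at run time.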
Lastly, this is the brain of my work, and I am completely excited about it. Here is a glimpse: no wonder ATLAS is going to be far superior to any automation tool. The tool could be used to create test cases automatically. I have heard that Rational Functional Tester does this, but I am not the expert. Secondly, it could not only generate test cases but also validate scenario coverage. I don't think there is any tool out there that does this; there are code-coverage tools, but why can't we use the same intelligence to measure scenario coverage? If there is anybody out there who can read my mind, I am sure they can understand my excitement. What looked like a dream now seems like a reality. Thanks to GOOGLE.
I can help with tool expertise, and if anybody wants to join me, they are most welcome. If there are any Python-savvy gurus out there, your knowledge would be so helpful. Will keep all posted on more updates. This is an exciting phase :)
