QA And PE

This blog discusses various topics associated with software testing and test automation using tools such as QTP. You are most welcome to share topics related to software testing or to comment on any topic below.

Friday, November 27, 2009

Boundary value analysis and Equivalence partitioning

Boundary value analysis and equivalence partitioning, explained with a simple example:

Boundary value analysis and equivalence partitioning are both test case design strategies in black box testing.

Equivalence Partitioning:

In this method the input domain data is divided into different equivalence data classes. This method is typically used to reduce the total number of test cases to a finite set of testable cases while still covering the maximum number of requirements.

In short it is the process of taking all possible test cases and placing them into classes. One test value is picked from each class while testing.

E.g.: If you are testing an input box accepting numbers from 1 to 1000, there is no use in writing a thousand test cases for all 1000 valid input numbers, plus other test cases for the invalid data.

Using the equivalence partitioning method, the above test cases can be divided into three sets of input data, called classes. Each test case is a representative of its class.

So in the above example we can divide our test cases into three equivalence classes covering valid and invalid inputs.

Test cases for input box accepting numbers between 1 and 1000 using Equivalence Partitioning:
1) One input data class with all valid inputs. Pick a single value from the range 1 to 1000 as a valid test case. If you select other values between 1 and 1000, the result is going to be the same, so one test case for valid input data should be sufficient.

2) Input data class with all values below the lower limit, i.e. any value below 1, as an invalid input data test case.

3) Input data class with any value greater than 1000, to represent the third (invalid) input class.

So using equivalence partitioning you have categorized all possible test cases into three classes. Test cases with other values from any class should give you the same result.

We have selected one representative from every input class to design our test cases. Test case values are selected in such a way that the largest number of attributes of each equivalence class is exercised.

Equivalence partitioning uses the fewest test cases to cover the maximum number of requirements.

Boundary value analysis:

It is widely recognized that input values at the extreme ends of the input domain cause more errors in a system; more application errors occur at the boundaries of the input domain. The 'boundary value analysis' testing technique is used to identify errors at these boundaries rather than those that exist in the center of the input domain.

Boundary value analysis is the natural next step after equivalence partitioning when designing test cases: test cases are selected at the edges of the equivalence classes.

Test cases for input box accepting numbers between 1 and 1000 using Boundary value analysis:
1) Test cases with test data exactly at the input boundaries of the input domain, i.e. the values 1 and 1000 in our case.

2) Test data with values just below the extreme edges of the input domain, i.e. the values 0 and 999.

3) Test data with values just above the extreme edges of the input domain, i.e. the values 2 and 1001.

Boundary value analysis is often considered a part of stress and negative testing.

Note: There is no hard-and-fast rule to test only one value from each equivalence class you created for the input domains. You can select multiple valid and invalid values from each equivalence class according to your needs and prior judgment.

E.g. if you divided the 1 to 1000 input values into a valid-data equivalence class, you can select test case values like 1, 11, 100, 950, etc. The same applies to the test cases drawn from the invalid data classes.
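
The same idea can be sketched in code. Below is a minimal, hedged C sketch: the is_valid() function is an assumption standing in for the input box under test (it simply accepts 1 to 1000), and the representative values are the ones derived above.

#include <stdio.h>

/* Hypothetical stand-in for the input box under test: accepts 1..1000. */
int is_valid(int n)
{
    return n >= 1 && n <= 1000;
}

int main(void)
{
    /* Equivalence partitioning: one representative value per class
       (valid class, below-1 class, above-1000 class).                  */
    int ep_values[]  = { 500, -5, 2000 };

    /* Boundary value analysis: values at, just below and just above
       the edges of the valid range.                                    */
    int bva_values[] = { 1, 1000, 0, 999, 2, 1001 };

    int i;
    for (i = 0; i < 3; i++)
        printf("EP  %5d -> %s\n", ep_values[i],
               is_valid(ep_values[i]) ? "valid" : "invalid");
    for (i = 0; i < 6; i++)
        printf("BVA %5d -> %s\n", bva_values[i],
               is_valid(bva_values[i]) ? "valid" : "invalid");
    return 0;
}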

This should be a very basic and simple example for understanding the boundary value analysis and equivalence partitioning concepts.


Tuesday, May 19, 2009

LoadRunner - Parameterize and increase iteratively a value from database

/* Initialize variables and character buffer */
int i;
int j;
char ssbuf[12];

/* Extract the value of database parameter and convert it into int by using atoi */
i = atoi(lr_eval_string("{UPLOADID}"));

/* Output the value */
lr_output_message("Integer Value of Parameter: %d", i);

/* Extract the iteration number from NewParam, which is configured to return the iteration number */
j = atoi(lr_eval_string("{NewParam}"));
i = i+j;

/*  Add integer to character stream */
sprintf(ssbuf,"%d", i);

/* save the parameter value */
lr_save_string(ssbuf,"UPLOADID");
lr_output_message("Integer Value of Parameter: %d", i);


Tuesday, May 12, 2009

QTP - Test Parameter

Test parameters can be used to pass values to the test and to retrieve values when the test completes. They are global parameters and can be accessed in any action, i.e. in any scope within the test. They are of two types - input and output. Input parameters are used to pass values to the test, while output parameters are used to retrieve values from the test.

1. Open a New Test.
2. Go to File->Settings->Parameters Tab
3. Click on '+' sign which is on the top right.
4. Enter the Name of the parameter as 'vartest' and its Default Value as 'hello'. Click Apply and then Ok. The Type of this parameter is string.
5. We have now created a test parameter.
6. Now in the Keyword View right click on Action1 and select 'Action Properties'. 'Action Properties' window opens.
7. Go to 'Parameters' Tab. Click on '+' sign which is on the top right.
8. Enter the Name of the parameter as 'varaction' and its Type as string, with no default value. Click Ok to come out of that window.
9. Again in the Keyword View right click on Action1 and select 'Action Call Properties'. 'Action Call Properties' window opens.
10. Go to the 'Parameter Values' tab. There you will see the 'varaction' action parameter we created earlier. Click once under the 'Value' heading; it will show a button like this <#>. Click on this button to open the 'Value Configuration Options' window.
11. Click on 'Parameter' radio button and select 'Test/action Parameter' from that dropdown.
12. The 'Test Parameters' radio button will be selected by default; under it, select 'vartest' from the Parameter dropdown. [Remember, 'vartest' is the test parameter we created at the beginning.]
13. Click Ok to come out of the 'Value Configuration Options' window; back in the 'Action Call Properties' window, the mapped test parameter is shown under 'Value'. Click Ok.
14. Go to Expert View and type: msgbox(parameter("varaction"))
15. Now run the test. While running it will show 'hello' in the msgbox.


QTP - Action Parameter

1. In the Keyword View right click on Action1 and select 'Action Properties'. 'Action Properties' window opens.
2. Go to 'Parameters' Tab. Click on '+' sign which is on the top right.
3. Enter the Name of the parameter as 'a' and its Type as Number. In the same way create another Number parameter 'b'.
4. Click Ok.
5. In the Keyword View right click on Action1 and select 'Action Properties'. 'Action Properties' window opens. In the general tab, click on 'Reusable Action' checkbox at the bottom to make the action reusable.
6. In the Expert View of Action1 type:
s1=parameter("a")
s2=parameter("b")
msgbox(s1+s2)
7. Go to Insert (menu)-> Call to New Action. 'Insert Call to New Action' window opens. Just click ok to insert a new action.
8. Make action2 reusable as we did for action1.
9. In the Expert View of action2 type:
RunAction "Action1", oneIteration, 2, 2
10. In the Keyword View, right-click on Action2 and select 'Run from Step'; it will show the sum 4 in a msgbox.


QTP - Environment Variables

User-defined internal, user-defined external and built-in are the types of environment variables available in QTP.

Built-in variables, as the name suggests, are predefined by QTP. Examples of such variables are OS, OSVersion and ActionName, which hold the operating system, the operating system version and the name of the currently running action, respectively.

Example: In the Expert View type and Run the test:
a = environment("ActionName") & " is running on " & environment("OS")
msgbox (a)

User-defined internal variables are variables defined by you (the user) within the test; they are saved with the test and are accessible from within the test.
Example: You may add an internal environment variable in File (menu)->Settings->Environment tab. Add a new internal environment parameter (e.g. one named 'a') by selecting 'User-defined' in the dropdown box. Go to the Expert View and type:
msgbox(environment("a"))

User-defined external variables are variables defined in a file outside of the test. They act as read-only for the test. You can create as many environment variable files as you want and choose one of them for your test.
Open a new text file, type the following lines into it, and save it with a .xml extension.
<Environment>
<Variable>
<Name> a </Name>
<Value> hello </Value>
</Variable>
</Environment>
You may attach the external environment variable file in File (menu)->Settings->Environment tab; it loads the environment parameters from the XML file. Go to the Expert View and type:
msgbox(environment("a"))


Monday, May 11, 2009

Parameterizing Tests in QTP

By replacing fixed values with parameters, QuickTest enables you to enlarge the scope of a basic test. This is known as parameterization, and it greatly increases the power and flexibility of a test. A parameter is a variable that is assigned a value from an external data source or generator. Values in steps and checkpoints, as well as the values of action parameters, can be parameterized.
Parameters let us check how the application performs the same operations with multiple sets of data.

There are four types of parameters:
Test/action parameters: Test parameters make it possible for us to use values passed in from outside the test. Action parameters enable us to pass values from other actions in the test. To use a value within a specific action, the value must be passed down through the action hierarchy of the test to the required action. We can then use that parameter value to parameterize a step in the test. For example, suppose that we want to parameterize a step in Action3 using a value that is passed into the test from the external application that runs (calls) the test. We can pass the value from the test level to Action1 (a top-level action) and on to Action3 (a nested action of Action1), and then parameterize the required step using this action input parameter value (that was passed through from the external application). Alternatively, we can pass an output action parameter value from an action step to a later sibling action at the same hierarchical level. For example, suppose that Action2, Action3, and Action4 are sibling actions at the same hierarchical level, and that they are all nested actions of Action1. We can parameterize a call to Action4 based on an output value retrieved from Action2 or Action3. We can then use these parameters in the action step.

Data Table parameters allow us to create a data-driven test (or action) that runs several times using the data that we supply. In each repetition, or iteration, QuickTest uses a different value from the Data Table.

Environment variable parameters allow us to use variable values from other sources during the run session. These may be values that we supply, or values that QuickTest generates for us based on conditions and options we choose.

Random number parameters enable us to insert random numbers as values in the test.

Values in steps and checkpoints can be parameterized while recording or editing the test. The values of object properties can be parameterized for a selected step, and the values of the operation (method or function arguments) defined for the step can also be parameterized. When the value of an object property for a local object is parameterized, we are amending the test object description in the local object repository; therefore, all occurrences of the specified object within the action are parameterized.
Parameterizing the value of a checkpoint property enables us to check how an application or Web site performs the same operation based on different data.


Checkpoints in QTP - Quick Test Professional

A checkpoint enables you to identify whether the Web site or application under test is functioning correctly by comparing the current value of a particular property with the expected value for that property.
After we add a checkpoint, QuickTest adds a checkpoint to the current row in the Keyword View and adds a Check CheckPoint statement in the Expert View.

By default, the checkpoint name receives the name of the test object on which the checkpoint is being performed. We can change the name of the checkpoint if needed.

Types of Checkpoints:
  1. Standard checkpoint
  2. Image checkpoint
  3. Bitmap checkpoint
  4. Table checkpoint
  5. Accessibility checkpoint
  6. Text checkpoint
  7. Page checkpoint
  8. Database checkpoint
  9. XML checkpoint

Standard checkpoints allow checking the object property values in the Web site or application under test. Standard checkpoints compare the expected values of object properties, captured during recording, to the object's current values during a run session. For example, we can check that a radio button is activated after it is selected. Standard checkpoints are supported for all add-in environments.

Standard checkpoints can be used to perform checks on
  • Images,
  • Tables,
  • Web page properties, and
  • Other objects within your application or Web site.
Standard checkpoints can be created for all supported testing environments (as long as the appropriate add-in(s) are loaded).

Image checkpoints allow you to check the properties of an image in the application or Web page. For example, you can check whether a selected image's source file is correct. An image checkpoint can also be created by inserting a standard checkpoint on an image object. Image checkpoints are supported for the Web add-in environment.

With a bitmap checkpoint we can check an area of a Web page or application as a bitmap. While creating a test, we specify the area to check by selecting an object; an entire object or any area within an object can be checked. Bitmap checkpoints are supported for all add-in environments.

By adding table checkpoints to the test, we can check the content of tables displayed in the application. For example, we can check that a specified value is displayed in a certain cell. Certain environments also support checking the properties of the table object. For example, a check that a table has the expected number of rows and columns. A table checkpoint can also be created by inserting a standard checkpoint on a table object.

An accessibility checkpoint identifies areas of your Web site that may not conform to the World Wide Web Consortium (W3C) Web Content Accessibility Guidelines. For example, it can check whether the images on a Web page include ALT properties, as required by those guidelines. Accessibility checkpoints are supported for the Web add-in environment.

With a text checkpoint, QuickTest can check that a text string is displayed in the appropriate place in an application or on a Web page. Text checkpoints are supported for the Web add-in environment, plus some Web-based add-in environments.

A page checkpoint checks the characteristics of a Web page. For example, you can check how long a Web page takes to load or whether a Web page contains broken links. A page checkpoint can also be created by inserting a standard checkpoint on a page object. Page checkpoints are supported for the Web add-in environment.

The contents of a database accessed by your application can be checked by Database Checkpoint. Database checkpoints are supported for all add-in environments

By adding XML checkpoints to your test, you can check the contents of individual XML data files or documents that are part of your Web application. The XML Checkpoint option is supported for all add-in environments.


Thursday, April 16, 2009

LR - web_reg_find Function

Load Runner - 
When you perform load testing, you have to be fully confident that your application works correctly. It can be very useful to check the UI of the application to see whether it is shown correctly or not.

This verification can be done using the web_reg_find function.

Please note:
The web_reg_find function is added before the page-opening function (web_url)!
This is because the LoadRunner web_reg_find function does not itself search for text on a page; it just registers a search request for a text string on the next HTML page retrieved.

This is very important and worth stressing: web_reg_find should be placed before the function which loads the page.

Example: 
// Set up a check for a successful login by looking for "Welcome"
web_reg_find("Text=Welcome", "SaveCount=Welcome_Count", LAST);

// Now log in
web_submit_form("login.pl", ...................

// Check the result
if (atoi(lr_eval_string("{Welcome_Count}")) > 0)
{
    lr_output_message("Log on successful.");
}
else
{
    lr_error_message("Log on failed");
    return 0;
}
  


Tuesday, April 14, 2009

LR - Concurrent And Simultaneous vusers

This is one of the most frequently asked questions from LoadRunner newbies. If we go by the literal meaning from Dictionary.com, both are the same, but from LoadRunner's perspective there is a slight difference.

All the vusers in a particular scenario are called concurrent vusers. They may or may not perform the same tasks. Simultaneous vusers, on the other hand, have more to do with rendezvous points. When we set a rendezvous point, we instruct the system to wait until a certain number of vusers arrive so that they can all perform a particular task simultaneously. These vusers performing the same task at the same time are called simultaneous vusers.

For example, in a Yahoo Mail application: suppose a scenario consists of 100 vusers with 3 tasks: 1) log in, 2) check the number of unread mails, 3) log out. The vusers performing 1), 2) and 3) are called concurrent vusers, as they are all part of the same scenario performing some task; but if we set a rendezvous point so that, say, 25 vusers perform task 2) at the same time, those 25 vusers would be termed simultaneous vusers. A rough script sketch of this scenario is shown below.
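
Here is a minimal, hedged VuGen sketch of such a script; the transaction names and the rendezvous name are illustrative assumptions, and the actual web_* calls are elided. Every vuser running this Action in the scenario is a concurrent vuser; only the ones released together at the rendezvous are simultaneous vusers.

Action()
{
    /* Illustrative names only -- not taken from a real script. */
    lr_start_transaction("login");
    /* ... web_submit_form() call that performs the login ... */
    lr_end_transaction("login", LR_AUTO);

    /* Vusers wait here until the rendezvous policy (e.g. 25 vusers) is met,
       then they all proceed together -- these are the simultaneous vusers. */
    lr_rendezvous("check_unread_mail");

    lr_start_transaction("check_unread_mail");
    /* ... web_url() call that opens the inbox ... */
    lr_end_transaction("check_unread_mail", LR_AUTO);

    lr_start_transaction("logout");
    /* ... web_url() call that logs out ... */
    lr_end_transaction("logout", LR_AUTO);

    return 0;
}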


LR - Process and Thread

A process is defined as the virtual address space and the control information necessary for the execution of a program, while threads are a way for a program to split itself into two or more simultaneously running tasks. In general, a thread is contained inside a process, and different threads in the same process share some resources, while different processes do not.


In terms of LoadRunner, when we run Vusers as processes, LoadRunner creates one process, called mmdrv.exe, per Vuser. So if we have 10 Vusers, we will have 10 mmdrv.exe processes on our machine.

When we run Vusers as threads, LoadRunner creates one thread per Vuser. So if we have 10 Vusers, we will have one process with 10 threads running inside it, if the limit is 10 threads per process.

Running Vusers as threads is more memory efficient than running them as processes, for the obvious reason that fewer memory resources are used when they run as threads.


LR - Recording Mode

Load Runner - 

There are three types of recording modes/levels in LoadRunner: GUI-based, HTML-based and URL-based. For the uninitiated, the recording level determines how much information, and what kind of information, is recorded during the recording process. As the title says, this post will focus on the HTML-based and URL-based recording levels only; we will touch upon GUI-based mode in a later post.

URL vs HTML

  1. HTML-based mode records a script step for every user action performed during recording (hmmm… sounds like QTP), while URL-based mode records each and every browser request to the server and every resource received from the server. Confused? OK: HTML-based mode records as you perform clicks and doesn't give you inside information about what is happening behind the scenes, while URL-based mode records each and every request and emulates JavaScript code. A sketch contrasting the two modes follows this list.
  2. From point 1) above you can guess that HTML mode has less correlation to do, while URL mode has much more complex correlation requirements.
  3. An HTML-mode script is smaller and more intuitive to read, as the statements are inside the functions corresponding to the user actions performed. In the case of URL-based mode, all statements get recorded as web_url() calls.
  4. HTML mode is recommended for browser applications, while URL mode is recommended for non-browser applications.
  5. Lastly, don't get the impression that I am advocating for HTML mode :). URL mode can be of real help when you want control over the resources that need to be, or need not to be, downloaded, since you have each and every statement in front of you (point 1).
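
As a rough, hedged illustration of point 1), here is what the same "Sign In" click might look like in the two modes; the link text, URLs and step names are made-up assumptions, not recordings from a real application.

/* HTML-based mode: one step per user action. */
web_link("Sign In",
    "Text=Sign In",
    LAST);

/* URL-based mode: one step per browser request for that same click --
   the page itself plus each resource it referenced. */
web_url("login.pl",
    "URL=http://example.com/login.pl",
    "Resource=0",
    LAST);
web_url("logo.gif",
    "URL=http://example.com/images/logo.gif",
    "Resource=1",
    LAST);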


PE - Memory Leaks/ Page Fault

Performance Engineering -  

A memory leak is a particular type of unintentional memory consumption by a computer program, where the program fails to release memory when it is no longer needed. This condition is normally the result of a bug that prevents the program from freeing up memory it no longer needs. The term has the potential to be confusing, since memory is not physically lost from the computer; rather, memory is allocated to a program, and that program subsequently loses the ability to access it due to program logic flaws.
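
A minimal C sketch of the idea (hypothetical code, not from any real application): the function below allocates a buffer on every call and loses the only pointer to it, so the memory can never be freed.

#include <stdlib.h>

void process_request(void)
{
    char *buffer = malloc(1024);     /* memory is allocated...              */
    if (buffer == NULL)
        return;
    /* ... use buffer to handle the request ... */
    /* missing free(buffer): the block stays allocated, but the program has
       lost its only reference to it once this function returns.            */
}

int main(void)
{
    int i;
    /* Each call leaks another 1 KB; a long-running program doing this
       gradually exhausts its address space even though no memory is
       physically lost from the machine.                                    */
    for (i = 0; i < 100000; i++)
        process_request();
    return 0;
}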

A page fault is an interrupt that occurs when a program requests data that is not currently in real memory. The interrupt triggers the operating system to fetch the data from virtual memory and load it into RAM.

An invalid page fault or page fault error occurs when the operating system cannot find the data in virtual memory. This usually happens when the virtual memory area, or the table that maps virtual addresses to real addresses, becomes corrupt.


Monday, December 11, 2006

System Testing

System testing is a crucial step in the quality management process. To understand system testing, one also needs to know about unit testing and integration testing. Let's take them one by one.

1. Unit testing is done by programmers and tests each unit of code.
2. Integration testing focuses on testing the integration of units of code.
3. System testing focuses on testing the system as a whole.

Thus, system testing builds on the previous levels of unit testing and integration testing.

The prerequisites for system testing are:

- All the components should have been successfully unit tested.

- All the components should have been successfully integrated, and integration testing should be complete.

- An environment closely resembling the production environment should be created.