JMeter Tutorial - FTP Test Plan

In this chapter, we will see how to test an FTP site using JMeter. Let us create a Test Plan to test the FTP site.

Rename Test Plan

Start JMeter by running /home/deepak/apache-jmeter-2.9/bin/jmeter.sh. Click on the Test Plan node and rename it TestFTPSite.


Add Thread Group

Add one Thread Group, which is a placeholder for all other elements like Samplers, Controllers, and Listeners. Right click on TestFTPSite (our Test Plan) > Add > Threads (Users) > Thread Group. The Thread Group will be added under the Test Plan (TestFTPSite) node.

Next, let us modify the default properties of the Thread Group to suit our testing. The following properties are changed:

Name: FTPusers

Number of Threads (Users): 4

Ramp-Up Period: leave the default value of 0 seconds.

Loop Count: 1
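These settings can be sanity-checked with a little arithmetic. The sketch below is an illustration, not JMeter code; it assumes each thread runs every sampler once per loop iteration:

```python
# Illustrative sketch: how Thread Group settings determine the number of
# samples recorded (assumption: each thread runs every sampler once per loop).
def total_samples(num_threads, loop_count, samplers):
    return num_threads * loop_count * samplers

# 4 users x 1 loop x the 2 FTP samplers added below = 8 samples,
# i.e. 4 requests per FTP Request element:
print(total_samples(4, 1, 2))  # -> 8
```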





Add Sampler - FTP Request


Now that we have defined our users, it is time to define the tasks they will perform. We will add two FTP Request elements: one to retrieve a file and one to put a file on the FTP site. Begin by selecting the FTPusers element. Right click to get the Add menu, and select Add > Sampler > FTP Request. Then select the FTP Request element in the tree and edit the following properties as in the image below:



The following details are entered in this element:

Name: FTP Request Get

Server Name or IP: 184.168.74.29

Remote File: /home/deepak/sample_ftp.txt

Local File: sample_ftp.txt

Select get(RETR)

Username: deepak

Password: system123

Now add another FTP request as above and edit the properties as in the image below:



The following details are entered in this element:

Name: FTP Request Put

Server Name or IP: 184.168.74.29

Remote File: /home/deepak/examplefile.txt

Local File: /home/deepak/work/examplefile.txt

Select put(STOR)

Username: deepak

Password: system123
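Under the hood, these two samplers perform ordinary FTP RETR and STOR transfers. A minimal Python sketch of the same operations, using the standard ftplib module, might look like this; the host, paths, and credentials are the tutorial's example values, and no real server connection is made here:

```python
# Sketch of what the get(RETR) and put(STOR) samplers do, via ftplib.
from ftplib import FTP

def ftp_get(ftp, remote_path, local_path):
    # get(RETR): download remote_path into local_path
    with open(local_path, "wb") as f:
        ftp.retrbinary("RETR " + remote_path, f.write)

def ftp_put(ftp, local_path, remote_path):
    # put(STOR): upload local_path to remote_path
    with open(local_path, "rb") as f:
        ftp.storbinary("STOR " + remote_path, f)

# Usage against the tutorial's example server (not executed here):
# ftp = FTP("184.168.74.29", user="deepak", passwd="system123")
# ftp_get(ftp, "/home/deepak/sample_ftp.txt", "sample_ftp.txt")
# ftp_put(ftp, "/home/deepak/work/examplefile.txt", "/home/deepak/examplefile.txt")
```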


Add Listener

The final element you need to add to your Test Plan is a Listener. This element is responsible for storing all of the results of your FTP requests in a file and presenting a visual model of the data.

Select the FTPusers element and add a View Results Tree listener (Add > Listener > View Results Tree).





Run the Test Plan

Now save the above test plan as ftpsite_test.jmx. Execute this test plan using Run > Start option. 


View Output

The following output can be seen in the listener.









You can see that four requests are made for each FTP Request element, and that the test is successful. The file retrieved by the GET request is stored in the bin folder, in our case /home/deepak/apache-jmeter-2.9/bin/. For the PUT request, the file is uploaded to the path /home/deepak/.

JMeter Tutorial - Database Test Plan

In this chapter we will see how to create a simple test plan to test a database server. For our test purpose we use the MySQL database server; you can use any other database for testing. Refer to the MySQL documentation for installation and table-creation details.


Once MySQL is installed, follow the steps below to set up the database:
  • Create a database named "tutorial".
  • Create a table tutorials_tbl.
  • Insert records into tutorials_tbl:
mysql> use tutorial;
Database changed
mysql> INSERT INTO tutorials_tbl 
     ->(tutorial_title, tutorial_author, submission_date)
     ->VALUES
     ->("Learn PHP", "John Poul", NOW());
Query OK, 1 row affected (0.01 sec)
mysql> INSERT INTO tutorials_tbl
     ->(tutorial_title, tutorial_author, submission_date)
     ->VALUES
     ->("Learn MySQL", "Abdul S", NOW());
Query OK, 1 row affected (0.01 sec)
mysql> INSERT INTO tutorials_tbl
     ->(tutorial_title, tutorial_author, submission_date)
     ->VALUES
     ->("JAVA Tutorial", "Sanjay", '2007-05-06');
Query OK, 1 row affected (0.01 sec)
mysql>
  • Copy the appropriate JDBC driver to /home/deepak/apache-jmeter-2.9/lib.

Create JMeter Test Plan

First, let's start JMeter from /home/deepak/apache-jmeter-2.9/bin/jmeter.sh.



ADD USERS

Now create a Thread Group: right click on Test Plan > Add > Threads (Users) > Thread Group. The Thread Group will be added under the Test Plan node. Rename this Thread Group JDBC Users.


We will not change the default properties of the Thread Group.


ADDING JDBC REQUESTS

Now that we have defined our users, it is time to define the tasks they will perform. In this section, you will specify the JDBC requests to perform. Right click on the JDBC Users element and select Add > Config Element > JDBC Connection Configuration.


Set up the following fields (we are using a MySQL database called tutorial):

  • Variable name bound to pool: this needs to uniquely identify the configuration. It is used by the JDBC Sampler to identify the configuration to be used. We have named it test.
  • Database URL: jdbc:mysql://localhost:3306/tutorial
  • JDBC Driver class: com.mysql.jdbc.Driver
  • Username: root
  • Password: password for root
The other fields on the screen can be left as the defaults as shown below:



Now add a JDBC Request which refers to the JDBC connection pool defined above. Select the JDBC Users element, right click to get the Add menu, and select Add > Sampler > JDBC Request. Then select this new element to view its Control Panel, and edit the properties as below:
  • Name: Learn
  • Variable name bound to pool: test (the same name as in the configuration element above)
  • Query Type: Select Statement
  • Enter the SQL query in the SQL Query String field.
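As a hedged illustration of what the JDBC sampler does with that query, here is a runnable Python stand-in. JMeter talks to MySQL over JDBC; the sqlite3 module and the LIKE filter below are our own substitutions, chosen so that 2 of the 3 inserted records match:

```python
# Runnable stand-in for the JDBC sampler's job: open a connection, run a
# SELECT, fetch the rows. sqlite3 replaces MySQL/JDBC for illustration only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE tutorials_tbl (
    tutorial_id INTEGER PRIMARY KEY,
    tutorial_title TEXT, tutorial_author TEXT, submission_date TEXT)""")
rows = [("Learn PHP", "John Poul", "2007-05-21"),
        ("Learn MySQL", "Abdul S", "2007-05-21"),
        ("JAVA Tutorial", "Sanjay", "2007-05-06")]
conn.executemany(
    "INSERT INTO tutorials_tbl (tutorial_title, tutorial_author, submission_date)"
    " VALUES (?, ?, ?)", rows)

# Hypothetical query string for the sampler's SQL Query field:
result = conn.execute(
    "SELECT tutorial_title FROM tutorials_tbl WHERE tutorial_title LIKE 'Learn%'"
).fetchall()
print(result)  # -> [('Learn PHP',), ('Learn MySQL',)] : 2 records selected
```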

CREATE LISTENER

Now add the Listener element. This element is responsible for storing all of the results of your JDBC requests in a file and presenting a visual model of the data.

Select the JDBC Users element and add a View Results Tree listener (Add > Listener > View Results Tree).



SAVE AND EXECUTE TEST PLAN

Now save the above test plan as db_test.jmx. Execute this test plan using Run > Start option.


VERIFY OUTPUT




In the last image you can see that 2 records are selected.

JMeter Tutorial - Web Test Plan

Let's build a simple test plan which tests a web page. We will write a test plan in Apache JMeter to test the performance of one web page, the page shown at the URL http://www.testinganswers.com/.
Start JMeter

Open the JMeter window by running /home/deepak/apache-jmeter-2.9/bin/jmeter.sh. The JMeter window will appear as below:



This is a JMeter window with nothing added yet. Details of the above window are:

  • Test Plan node is where the real test plan is kept.
  • Workbench node is where the temporary stuff is kept.

Rename Test Plan

Change the name of the Test Plan node to Sample Test in the Name text box. You have to change focus to the Workbench node and back to the Test Plan node to see the name change reflected.




Add Thread Group

Now we will add our first element in the window: one Thread Group, which is a placeholder for all other elements like Samplers, Controllers, and Listeners. We need one so we can configure the number of users to simulate.

In JMeter all the node elements are added by using the context menu. You have to right click the element where you want to add a child element node. Then choose the appropriate option to add.

Right click on Sample Test (our Test Plan) > Add > Threads (Users) > Thread Group. The Thread Group will be added under the Test Plan (Sample Test) node.



We will name the Thread Group Users. For us, this element means users visiting the TutorialsPoint Home Page.




Add Sampler

Now we have to add one Sampler to our Thread Group (Users). As done earlier when adding the Thread Group, this time we will open the context menu of the Thread Group (Users) node by right clicking, and add an HTTP Request Sampler by choosing Add > Sampler > HTTP Request.



This will add one empty HTTP Request Sampler under the Thread Group (Users) node. Let us configure this node element:



  • Name: We will change the name to reflect the action we want to achieve. We will name it Visit TutorialsPoint Home Page.
  • Server Name or IP: Here we have to type the web server name. In our case it is www.testinganswers.com. (The http:// part is not written; this is only the name of the server or its IP.)
  • Protocol: We will keep this blank, which means we want HTTP as the protocol.
  • Path: We will type the path as / (slash). This means we want the root page of the server.
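In essence, this sampler will issue a plain HTTP GET for the path / on the configured server. The following Python sketch shows the equivalent request; a throwaway local server stands in for www.testinganswers.com so the example can run offline:

```python
# Sketch of the HTTP Request sampler's job: GET the root path "/" and read
# the response. A local stub server replaces the real site for illustration.
import http.server, threading, urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<html>home page</html>"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
    def log_message(self, *args):  # keep the sketch quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), Handler)  # port 0: any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Server Name or IP" plus the Path "/" combine into the request URL:
url = "http://127.0.0.1:%d/" % server.server_address[1]
with urllib.request.urlopen(url) as resp:
    status, body = resp.status, resp.read()
server.shutdown()
print(status)  # -> 200, the success the listener marks in green
```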

Add Listener

We will add a View Results Tree Listener under the Thread Group (Users) node. This will ensure that the results of the Sampler are available to view in this Listener node element. Right click on Thread Group (Users) and choose Add > Listener > View Results Tree to add the listener.

Run the Test Plan

Now, with all the setup done, let's execute the test plan. In the configuration of the Thread Group (Users) we have kept all the default values. This means JMeter will execute the sampler only once: a single user, a single time.

This is similar to a user visiting a web page through a browser, only here we do it through the JMeter sampler. We will execute the test plan using the Run > Start option.

Apache JMeter asks us to save the test plan to a disk file before actually starting the test. This is important if we want to run the test plan again and again. If we choose not to save by clicking the No option, it will run without saving.

View Output

We have kept the thread group settings at a single thread (one user only) and a loop count of 1 (run only one time), hence we will get the result of one single transaction in the View Results Tree Listener.


Details of the above result are:

  • The green color against the name Visit TutorialsPoint Home Page indicates success.
  • JMeter has stored all the headers and the response sent by the web server, and is ready to show us the result in many ways.
  • The first tab is Sampler result. It shows JMeter data as well as data returned by the web server.
  • The second tab is Request, which shows all the data sent to the web server as part of the request.
  • The last tab is Response data. In this tab the listener shows the data received from the server as-is, in text format.


This is just a simple test plan which executes only one request. JMeter's real strength, however, is in sending the same request as if many users were sending it. To test web servers with multiple users, we will have to change the Thread Group (Users) settings.

JMeter Tutorial - Test Plan Elements

A JMeter Test Plan comprises the test elements discussed below. A Test Plan contains at least one Thread Group. Within each Thread Group we may place a combination of one or more other elements: Sampler, Logic Controller, Configuration Element, Listener, and Timer. Each Sampler can be preceded by one or more Pre-processor elements, followed by Post-processor elements, and/or Assertion elements. Let's see each of these elements in detail:

ThreadGroup

Thread Group elements are the beginning points of your test plan. As the name suggests, the thread group elements control the number of threads JMeter will use during the test. We can also control the following via the Thread Group:

  • By setting the number of threads.
  • By setting the ramp-up time.
  • By setting the number of test iterations.

The Thread Group Control Panel looks like this:



Details of each component on the above panel are:
  1. Action to be taken after a Sampler error: In case any error occurs during test execution, you may let the test either:
  • Continue to the next element in the test
  • Stop Thread to stop the current Thread.
  • Stop Test completely, in case you want to inspect the error before continuing the run.
  2. Number of Threads: Simulates the number of users or connections to your server application.
  3. Ramp-Up Period: Defines how long it will take JMeter to get all threads running.
  4. Loop Count: Defines the number of times to execute the test.
  5. Scheduler checkbox: Once selected, the Scheduler Configuration section appears at the bottom of the control panel.
  6. Scheduler Configuration: You can configure the start and end time of running the test.
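The ramp-up behavior can be pictured numerically. JMeter spaces thread starts evenly across the ramp-up period, so each thread begins ramp-up/threads seconds after the previous one; the helper below is an illustrative sketch of that schedule, not JMeter code:

```python
# Illustrative sketch of Ramp-Up: start offsets (in seconds) for each
# thread, spaced evenly across the ramp-up period.
def thread_start_offsets(num_threads, ramp_up_seconds):
    gap = ramp_up_seconds / num_threads
    return [i * gap for i in range(num_threads)]

print(thread_start_offsets(10, 100))  # a new thread every 10 s: 0, 10, ..., 90
print(thread_start_offsets(4, 0))     # ramp-up 0: all 4 threads start at once
```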
Controllers

JMeter has two types of Controllers: Samplers and Logic Controllers.
Samplers

Samplers allow JMeter to send specific types of requests to a server. They simulate a user's request for a page from the target server. For example, you can add an HTTP Request sampler if you need to perform a POST, GET, or DELETE on an HTTP service.

Some useful samplers are:

  • HTTP Request
  • FTP Request
  • JDBC Request
  • Java Request
  • SOAP/XML Request
  • RPC Requests

An HTTP Request Sampler Control Panel looks like the following figure:



JMeter Tutorial - Build Test Plan

What is a Test Plan? 


A Test Plan defines and provides a layout of how and what to test, for example, a web application or a client-server application. It can be viewed as a container for running tests. A complete test plan will consist of one or more elements such as thread groups, logic controllers, sample-generating controllers, listeners, timers, assertions, and configuration elements. A test plan must have at least one thread group. We shall discuss these elements in detail in the next chapter, Test Plan Elements.


Follow the steps below to write a test plan:

Start the JMeter window

Open the JMeter window by running /home/apache-jmeter-2.9/bin/jmeter.sh. The JMeter window will appear as below:



This is a JMeter window with nothing added yet. Details of the above window are:



  • Test Plan node is where the real test plan is kept.
  • Workbench node simply provides a place to temporarily store test elements while not in use, for copy/paste purposes. When you save your test plan, WorkBench items are not saved with it.

Add/Remove Elements
Elements (which will be discussed in the next chapter Test Plan Elements) of a test plan can be added by right clicking on the Test Plan node and choosing a new element from the "add" list.

Alternatively, elements can be loaded from file and added by choosing the "merge" or "open" option.

For example let's add a Thread Group element to a Test Plan as shown below:






To remove an element, make sure the element is selected, right-click on the element, and choose the "remove" option.




Loading and Saving Elements


To load an element from file, right click on the existing tree element to which you want to add the loaded element, and select the "merge" option. Choose the file where your elements are saved. JMeter will merge the elements into the tree.



To save tree elements, right click on an element and choose the Save Selection As ... option. JMeter will save the selected element, plus all child elements beneath it. By default JMeter does not save the elements; you need to save them explicitly as mentioned above.

Configuring Tree Elements

Any element in the Test Plan can be configured using the controls in JMeter's right-hand frame. These controls allow you to configure the behavior of that particular test element. For example, the Thread Group can be configured for the number of users, ramp-up period, etc., as below:




Saving the Test Plan

You can save an entire Test Plan either by using Save or "Save Test Plan As ..." from the File menu.



Running a Test Plan

You can run your Test Plan by choosing Start (Control + r) from the Run menu item. When JMeter is running, it shows a small green box at the right-hand end of the section just under the menu bar.

 




The numbers to the left of the green box are the number of active threads / total number of threads. These only apply to a locally run test; they do not include any threads started on remote systems when using client-server mode. 



Stopping a Test

You can stop your test in two ways:

Using Stop (Control + '.'). This stops the threads immediately if possible. 


Using Shutdown (Control + ','). This requests the threads to stop at the end of any current work.

JMeter Tutorial - Overview

What is JMeter?

JMeter is a software tool for load testing and performance-oriented business (functional) testing on different protocols and technologies. Stefano Mazzocchi of the Apache Software Foundation was the original developer of JMeter. He wrote it primarily to test the performance of Apache JServ (now the Apache Tomcat project). Apache later redesigned JMeter to enhance the GUI and to add functional-testing capabilities.

It is a Java desktop application with a graphical interface built on the Swing API, and can therefore run on any environment/workstation that accepts a Java virtual machine, for example Windows, Linux, Mac, etc.

The protocols supported by JMeter are:
  • Web: HTTP, HTTPS sites 'web 1.0' web 2.0 (ajax, flex and flex-ws-amf)
  • Web Services: SOAP / XML-RPC
  • Database via JDBC drivers
  • Directory: LDAP
  • Messaging Oriented service via JMS
  • Service: POP3, IMAP, SMTP
  • FTP Service 

JMeter Features

Following are some of the features of JMeter:
  • It is free, open-source software.
  • It has a simple and intuitive GUI.
  • JMeter can load and performance test many different server types: Web - HTTP, HTTPS, SOAP, Database via JDBC, LDAP, JMS, Mail - POP3.
  • It is a platform-independent tool. On Linux/Unix, JMeter can be invoked by clicking on the JMeter shell script. On Windows, it can be invoked by starting the jmeter.bat file.
  • It has full Swing and lightweight component support (the precompiled JAR uses javax.swing.* packages).
  • JMeter stores its test plans in XML format. This means you can generate a test plan using a text editor.
  • Its full multithreading framework allows concurrent sampling by many threads and simultaneous sampling of different functions by separate thread groups.
  • It is highly extensible.
  • It can also be used to perform automated and functional testing of your applications.
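Because test plans are stored as XML, they can be generated or inspected with any XML tooling. The snippet below sketches the idea with simplified element names, which are our own stand-ins, not the exact .jmx schema JMeter actually writes:

```python
# Sketch: build a tiny test-plan-like XML document and read it back.
# Element and attribute names are simplified stand-ins for the .jmx schema.
import xml.etree.ElementTree as ET

plan = ET.Element("TestPlan", name="Sample Test")
group = ET.SubElement(plan, "ThreadGroup", name="Users", threads="1", loops="1")
ET.SubElement(group, "HTTPSampler", name="Visit Home Page", path="/")

xml_text = ET.tostring(plan, encoding="unicode")
print(xml_text)

# Reading it back, e.g. to count the samplers in a plan:
parsed = ET.fromstring(xml_text)
print(len(parsed.findall(".//HTTPSampler")))  # -> 1
```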

How JMeter Works

JMeter simulates a group of users sending requests to a target server, and returns statistics showing the performance/functionality of the target server/application via tables, graphs, etc. The figure below depicts this process:


JMeter Interview Questions

Q.1: What is JMeter? 

A: JMeter is a Java tool used to perform load testing of client/server applications. Apache JMeter is open-source software, a 100% pure Java desktop application designed to load test functional behavior and measure the performance of an application. It was originally designed for testing web applications but has since expanded to other test functions.

Q.2: What is Performance Testing? 

A: This test sets the ‘best possible’ performance expectation under a given configuration of infrastructure. It also highlights early in the testing process if changes need to be made before application goes into production.

Q.3: What is Load Test? 

A: This test is basically used for exercising/discovering the system under the top load it was designed to operate under.

Q.4: What is Stress Test? 

A: This test is an attempt to break the system by overwhelming its resources.

Q.5: What are the protocols supported by JMeter?

A: The protocols supported by JMeter are: 
1. Web: HTTP, HTTPS sites 'web 1.0' web 2.0 (ajax, flex and flex-ws-amf)
2. Web Services: SOAP / XML-RPC
3. Database via JDBC drivers
4. Directory: LDAP
5. Messaging Oriented service via JMS
6. Service: POP3, IMAP, SMTP
7. FTP Service

Q.6: List some of the features of JMeter.
 

A: Following are some of the features of JMeter:
1. It is free, open-source software. It has a simple and intuitive GUI.
2. JMeter can load and performance test many different server types: Web - HTTP, HTTPS, SOAP, Database via JDBC, LDAP, JMS, Mail - POP3.
3. It is a platform-independent tool. On Linux/Unix, JMeter can be invoked by clicking on the JMeter shell script. On Windows, it can be invoked by starting the jmeter.bat file.
4. It has full Swing and lightweight component support (the precompiled JAR uses javax.swing.* packages).
5. JMeter stores its test plans in XML format. This means you can generate a test plan using a text editor.
6. Its full multithreading framework allows concurrent sampling by many threads and simultaneous sampling of different functions by separate thread groups.
7. It is highly extensible.
8. It can also be used to perform automated and functional testing of your applications.

Q.7: What is a Test Plan in JMeter?


A: A Test Plan defines and provides a layout of how and what to test, for example, a web application or a client-server application. It can be viewed as a container for running tests. A complete test plan will consist of one or more elements such as thread groups, logic controllers, sample-generating controllers, listeners, timers, assertions, and configuration elements. A test plan must have at least one thread group.

Q.8: List some of the test plan elements in JMeter.

A: Following is a list of some of the test plan elements:

1. ThreadGroup
2. Controllers
3. Listeners
4. Timers
5. Assertions
6. Configuration Elements
7. Pre-Processor Elements
8. Post-Processor Elements

Q.9: What is Thread Group?

A: Thread Group elements are the beginning points of your test plan. As the name suggests, the thread group elements control the number of threads JMeter will use during the test.

Q.10: What are Controllers and its types?

A: JMeter has two types of Controllers:

Sampler Controllers: Samplers allow JMeter to send specific types of requests to a server. They simulate a user's request for a page from the target server. For example, you can add an HTTP Request sampler if you need to perform a POST, GET, or DELETE on an HTTP service.

Logic Controllers: Logic Controllers let you control the order of processing of Samplers in a thread. Logic Controllers can change the order of requests coming from any of their child elements. Some examples are: ForEach Controller, While Controller, Loop Controller, If Controller, Runtime Controller, Interleave Controller, Throughput Controller, and Once Only Controller.

Q.11: What is Configuration element?

A: Configuration Elements allow you to create defaults and variables to be used by Samplers. They are used to add or modify requests made by Samplers.

They are executed at the start of the scope of which they are part, before any Samplers that are located in the same scope. Therefore, a Configuration Element is accessed only from inside the branch where it is placed.
Q.12: What are Listeners?

A: Listeners let you view the results of Samplers in the form of tables, graphs, trees, or simple text in log files. They provide visual access to the data gathered by JMeter about the test cases as a Sampler component of JMeter is executed.

Listeners can be added anywhere in the test, including directly under the test plan. They will collect data only from elements at or below their level.

Q.13: What are Pre-Processor and Post-Processor elements?

A: A Pre-Processor is something that executes before a sampler runs. Pre-Processors are often used to modify the settings of a sample request just before it runs, or to update variables that are not extracted from response text.

A Post-Processor executes after a sampler finishes its execution. This element is most often used to process the response data, for example, to retrieve a particular value for later use.

Q.14: What is the execution order of test elements?

A: Following is the execution order of the test plan elements:

1. Configuration elements
2. Pre-Processors
3. Timers
4. Sampler
5. Post-Processors (unless SampleResult is null)
6. Assertions (unless SampleResult is null)
7. Listeners (unless SampleResult is null)
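This order can be pictured as a small pipeline. The toy sketch below is our own illustration, not JMeter internals; it walks one sampler through the phases and honors the "unless SampleResult is null" exceptions:

```python
# Toy model of the documented execution order around one sampler.
# Post-processors, assertions and listeners are skipped when the sampler
# produced no result, mirroring the "unless SampleResult is null" notes.
ORDER = ["config", "pre-processors", "timers", "sampler",
         "post-processors", "assertions", "listeners"]

def run_once(sample_result_is_null=False):
    executed = []
    for phase in ORDER:
        if sample_result_is_null and phase in ("post-processors",
                                               "assertions", "listeners"):
            continue
        executed.append(phase)
    return executed

print(run_once())                            # all seven phases, in order
print(run_once(sample_result_is_null=True))  # only the phases up to the sampler
```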
 
Q.15: How do you ensure re-usability in your JMeter scripts?
A:
1. Using config elements like "CSV Data Set Config", "User Defined Variables", etc., for greater data reuse.
2. Modularizing shared tasks and invoking them via a "Module Controller".
3. Writing your own BeanShell functions and reusing them.
 
Q.16: Are the test plans built using JMeter OS dependent?
A: Test plans are saved in XML format, hence they have nothing to do with any particular OS. You can run those test plans on any OS where JMeter can run.

Q.17: What are the monitor tests?

A: Uses of monitor tests are:

1. Monitors are useful for stress testing and system management.
2. Used with stress testing, the monitor provides additional information about server performance.
3. Monitors make it easier to see the relationship between server performance and response time on the client side.
4. As a system administration tool, the monitor provides an easy way to monitor multiple servers from one console.

Q.18: What are JMeter Functions?

A: JMeter functions are special values that can populate fields of any Sampler or other element in a test tree. A function call looks like this: 

${__functionName(var1,var2,var3)}
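The call syntax is regular enough to parse mechanically. The regex below is our own sketch of the grammar (function names start with two underscores; nested or escaped arguments are ignored), not JMeter's actual parser:

```python
# Illustrative parser for the ${__functionName(var1,var2)} call syntax.
import re

FUNC = re.compile(r"\$\{(__\w+)\(([^)]*)\)\}")

def parse_function_call(text):
    m = FUNC.search(text)
    if not m:
        return None
    args = m.group(2)
    arglist = [a.strip() for a in args.split(",")] if args else []
    return m.group(1), arglist

print(parse_function_call("${__threadNum()}"))
# -> ('__threadNum', [])
print(parse_function_call("${__functionName(var1,var2,var3)}"))
# -> ('__functionName', ['var1', 'var2', 'var3'])
```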


Q.19: Where can functions and variables be used?

A: Functions and variables can be written into any field of any test component.
 
Q.20: What are regular expressions in JMeter?

A: Regular expressions are used to search for and manipulate text, based on patterns. JMeter supports the forms of regular expressions and patterns used throughout a test plan by including the pattern-matching software Apache Jakarta ORO.
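For a concrete picture of the kind of work a pattern does in a test plan (for instance, a Regular Expression Extractor pulling a value out of a response), here is a small illustration using Python's re module rather than ORO; the HTML snippet is made up:

```python
# Extractor-style sketch: pull a value out of response text by pattern.
import re

response = '<input type="hidden" name="token" value="abc123"/>'
match = re.search(r'name="token" value="(\w+)"', response)
token = match.group(1) if match else None
print(token)  # -> abc123
```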
Q.21: How can you reduce resource requirements in JMeter?

A: Below are some suggestions to reduce resource requirements:

1. Use non-GUI mode: jmeter -n -t test.jmx -l test.jtl.
2. Use as few Listeners as possible; if using the -l flag as above they can all be deleted or disabled.
3. Disable the “View Result Tree” listener as it consumes a lot of memory and can result in the console freezing or JMeter running out of memory. It is, however, safe to use the “View Result Tree” listener with only “Errors” checked.
4. Rather than using lots of similar samplers, use the same sampler in a loop, and use variables (CSV Data Set) to vary the sample. Or perhaps use the Access Log Sampler.
5. Don't use functional mode.
6. Use CSV output rather than XML.
7. Only save the data that you need.
8. Use as few Assertions as possible.
9. Disable all JMeter graphs as they consume a lot of memory. You can view all of the real time graphs using the JTLs tab in your web interface.
10. Do not forget to erase the local path from CSV Data Set Config if used.
11. Clean the Files tab prior to every test run.

Web Applications Testing Techniques

What is Web Testing?

Web Testing, in simple terms, is checking your web application for potential bugs before it is made live or before code is moved into the production environment.
During this stage, issues such as web application security, the functioning of the site, its accessibility to disabled as well as regular users, and its ability to handle traffic are checked.

Web Application Testing Checklist:

Some or all of the following testing types may be performed depending on your web testing requirements.
 

1. Functionality Testing:

This is used to check if your product is as per the specifications you intended for it, as well as the functional requirements you charted out for it in your development documentation. Testing activities included:

Test that all links in your webpages are working correctly and make sure there are no broken links. Links to be checked will include:
  • Outgoing links
  • Internal links
  • Anchor Links
  • MailTo Links
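Link checking of this kind is easy to automate. The sketch below gathers the links on a page and buckets them into the four categories above; the HTML snippet and the classification rules are simplified for illustration:

```python
# Sketch of automated link collection: find anchor tags and bucket their
# hrefs into outgoing / internal / anchor / mailto categories.
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = {"outgoing": [], "internal": [], "anchor": [], "mailto": []}

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href", "")
        if href.startswith("mailto:"):
            self.links["mailto"].append(href)
        elif href.startswith("#"):
            self.links["anchor"].append(href)
        elif href.startswith("http"):
            self.links["outgoing"].append(href)  # simplification: any absolute URL
        else:
            self.links["internal"].append(href)

page = ('<a href="http://example.com">out</a>'
        '<a href="/about.html">in</a>'
        '<a href="#top">top</a>'
        '<a href="mailto:webmaster@example.com">mail</a>')
collector = LinkCollector()
collector.feed(page)
print(collector.links)
```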
 
Test that forms are working as expected. This will include:
  • Scripting checks on the form are working as expected. For example, if a user does not fill a mandatory field in a form, an error message is shown.
  • Check that default values are being populated.
  • Once submitted, the data in the forms is submitted to a live database or is linked to a working email address.
  • Forms are optimally formatted for better readability.
 
 
Test that cookies are working as expected. Cookies are small files used by websites primarily to remember active user sessions, so you do not need to log in every time you visit a website. Cookie testing will include:
  • Testing that cookies (sessions) are deleted either when the cache is cleared or when they reach their expiry.
  • Deleting cookies (sessions) and testing that login credentials are asked for when you next visit the site.
 
Test HTML and CSS to ensure that search engines can crawl your site easily. This will include:
  • Checking for syntax errors
  • Readable color schemas
  • Standards compliance: ensure standards such as W3C, OASIS, IETF, ISO, ECMA, or WS-I are followed.

Test the business workflow. This will include:

  • Testing your end-to-end workflow/business scenarios, which take the user through a series of webpages to complete.
  • Testing negative scenarios as well, such that when a user executes an unexpected step, an appropriate error message or help is shown in your web application.

2. Usability testing:

Usability testing has now become a vital part of any web-based project. It can be carried out by testers like you or by a small focus group similar to the target audience of the web application.
Test the site navigation:
  • Menus, buttons, or links to different pages on your site should be easily visible and consistent on all webpages.

Test the content:
  • Content should be legible, with no spelling or grammatical errors.
  • Images, if present, should contain "alt" text.


3. Interface Testing:

The three areas to be tested here are the application, web server, and database server:
  • Application: Test that requests are sent correctly to the database and output on the client side is displayed correctly. Errors, if any, must be caught by the application and must be shown only to the administrator, not the end user.
  • Web Server: Test that the web server is handling all application requests without any service denial.
  • Database Server: Make sure queries sent to the database give expected results.
Test the system response when the connection between the three layers (application, web, and database) cannot be established, and that an appropriate message is shown to the end user.

4. Database Testing:

The database is one critical component of your web application, and stress must be laid on testing it thoroughly. Testing activities will include:

  • Test if any errors are shown while executing queries.
  • Data integrity is maintained while creating, updating, or deleting data in the database.
  • Check the response time of queries and fine-tune them if necessary.
  • Test that data retrieved from your database is shown accurately in your web application.

5. Compatibility Testing:

Compatibility tests ensure that your web application displays correctly across different devices. This will include:
Browser Compatibility Test: The same website will display differently in different browsers. You need to test that your web application is displayed correctly across browsers, and that JavaScript, AJAX, and authentication work fine. You may also check for mobile browser compatibility.
The rendering of web elements like buttons and text fields changes with the operating system. Make sure your website works fine for various combinations of operating systems such as Windows, Linux, and Mac, and browsers such as Firefox, Internet Explorer, and Safari.

6. Performance Testing:

This will ensure your site works under all loads. Testing activities will include, but are not limited to:

  • Website application response times at different connection speeds.
  • Load testing your web application to determine its behavior under normal and peak loads.
  • Stress testing your web site to determine its breaking point when pushed beyond normal loads at peak time.
  • Testing how the site recovers from a crash caused by peak load.
  • Making sure optimization techniques like gzip compression and browser- and server-side caching are enabled to reduce load times.


  7. Security Testing:

Security testing is vital for e-commerce websites that store sensitive customer information such as credit cards. Testing activities will include:
  • Test that unauthorized access to secure pages is not permitted
  • Restricted files should not be downloadable without appropriate access
  • Check that sessions are automatically killed after prolonged user inactivity
  • When SSL certificates are in use, the website should redirect to encrypted SSL pages


  8. Crowd Testing:

You select a large number of people (a crowd) to execute tests which would otherwise have been executed by a select group of people in the company. Crowdsourced testing is an interesting, upcoming concept and helps uncover many otherwise unnoticed defects.

Software Testing Interview Questions

1. What is the MAIN benefit of designing tests early in the life cycle?
It helps prevent defects from being introduced into the code.
2. What is risk-based testing?
Risk-based testing is the term used for an approach to creating a test strategy that is based on prioritizing tests by risk. The basis of the approach is a detailed risk analysis and prioritizing of risks by risk level. Tests to address each risk are then specified, starting with the highest risk first.
 3. A wholesaler sells printer cartridges. The minimum order quantity is 5. There is a 20% discount for orders of 100 or more printer cartridges. You have been asked to prepare test cases using various values for the number of printer cartridges ordered. Which of the following groups contain three test inputs that would be generated using Boundary Value Analysis?
4, 5, 99
4. What is the KEY difference between preventative and reactive approaches to testing?
Preventative tests are designed early; reactive tests are designed after the software has been produced.
5. What is the purpose of exit criteria?
The purpose of exit criteria is to define when a test level is completed.
6. What determines the level of risk?
 The likelihood of an adverse event and the impact of the event determine the level of risk.
7. When is decision table testing used?
Decision table testing is used for testing systems for which the specification takes the form of rules or cause-effect combinations. In a decision table the inputs are listed in a column, with the outputs in the same column but below the inputs. The remainder of the table explores combinations of inputs to define the outputs produced.
8. What is the MAIN objective when reviewing a software deliverable?
To identify defects in any software work product.
9. Which of the following defines the expected results of a test? Test case specification or test design specification.
Test case specification defines the expected results of a test.
10. What is the benefit of test independence?
It avoids author bias in defining effective tests.
11. As part of which test process do you determine the exit criteria?
The exit criteria are determined on the basis of ‘Test Planning’.
12. What is beta testing?
Testing performed by potential customers at their own locations.
13. Given the following fragment of code, how many tests are required for 100% decision coverage?
if width > length
   thenbiggest_dimension = width
     if height > width
             thenbiggest_dimension = height
     end_if
elsebiggest_dimension = length  
            if height > length 
                thenbiggest_dimension = height
          end_if
end_if
4
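The fragment above translates directly into runnable code. Here is a minimal Python sketch (the function name is illustrative) showing the four inputs that together exercise every decision outcome:

```python
def biggest(width, length, height):
    # Direct translation of the pseudocode fragment above.
    if width > length:
        biggest_dimension = width
        if height > width:
            biggest_dimension = height
    else:
        biggest_dimension = length
        if height > length:
            biggest_dimension = height
    return biggest_dimension

# Four cases cover both outcomes of all three decisions:
assert biggest(5, 3, 9) == 9   # width > length true,  height > width true
assert biggest(5, 3, 1) == 5   # width > length true,  height > width false
assert biggest(3, 5, 9) == 9   # width > length false, height > length true
assert biggest(3, 5, 1) == 5   # width > length false, height > length false
```

Each nested decision is only reachable from one branch of the outer decision, which is why four tests, not two, are needed for 100% decision coverage.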
14. You have designed test cases to provide 100% statement and 100% decision coverage for the following fragment of code:
if width > length then biggest_dimension = width else biggest_dimension = length end_if
The following has been added to the bottom of the code fragment above:
print "Biggest dimension is " & biggest_dimension
print "Width: " & width
print "Length: " & length
How many more test cases are required?
None, existing test cases can be used.
15. What is Rapid Application Development?
Rapid Application Development (RAD) is formally a parallel development of functions and subsequent integration. Components/functions are developed in parallel as if they were mini projects, the developments are time-boxed, delivered, and then assembled into a working prototype. This can very quickly give the customer something to see and use and to provide feedback regarding the delivery and their requirements. Rapid change and development of the product is possible using this methodology. However the product specification will need to be developed for the product at some point, and the project will need to be placed under more formal controls prior to going into production.
16. What is the difference between Testing Techniques and Testing Tools?
Testing technique: A process for ensuring that some aspect of the application system or unit functions properly. There may be few techniques but many tools.
Testing tool: A vehicle for performing a test process. The tool is a resource to the tester, but is itself insufficient to conduct testing.
17. We use the output of the requirement analysis, the requirement specification as the input for writing …
User Acceptance Test Cases
18. Repeated Testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested or in another related or unrelated software component:
Regression Testing
19. What is component testing?
Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable. Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system. Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls a component to be tested.
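As a rough illustration of the stub and driver idea described above, here is a minimal Python sketch; the component, the stub class and all values are hypothetical:

```python
# Hypothetical component under test: depends on a tax service
# that has not been built yet.
def price_with_tax(net, tax_service):
    return round(net * (1 + tax_service.rate_for("DE")), 2)

# Stub: called FROM the component under test; replaces the missing
# tax service with a canned answer (no real lookup).
class TaxServiceStub:
    def rate_for(self, country):
        return 0.19  # fixed rate for the test

# Driver: CALLS the component under test, feeding it test data
# and checking the result.
def driver():
    result = price_with_tax(100.0, TaxServiceStub())
    assert result == 119.0, result
    print("component test passed")

driver()
```

The asymmetry matches the definition above: the stub is called by the component, while the driver calls the component.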
20. What is functional system testing?
Testing the end-to-end functionality of the system as a whole is defined as functional system testing.
21. What are the benefits of Independent Testing?
Independent testers are unbiased and identify different defects at the same time.
22. In a REACTIVE approach to testing when would you expect the bulk of the test design work to be begun?
The bulk of the test design work is begun after the software or system has been produced.
23. What are the different Methodologies in Agile Development Model?
There are currently seven different agile methodologies that I am aware of:
  1. Extreme Programming (XP)
  2. Scrum
  3. Lean Software Development
  4. Feature-Driven Development
  5. Agile Unified Process
  6. Crystal
  7. Dynamic Systems Development Model (DSDM) 
24. Which activity in the fundamental test process includes evaluation of the testability of the requirements and system?
‘Test Analysis and Design’ includes evaluation of the testability of the requirements and system.
25. What is typically the MOST important reason to use risk to drive testing efforts?
Because testing everything is not feasible.
26. What is random/monkey testing? When it is used?
Random testing is often known as monkey testing. In this type of testing, data is generated randomly, often using a tool or automated mechanism. The system is tested with this randomly generated input and the results are analysed accordingly. Such testing is less reliable; hence it is normally used by beginners and to see whether the system will hold up under adverse effects.
27. Which of the following are valid objectives for incident reports?
  1. Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary.
  2. Provide ideas for test process improvement.
  3. Provide a vehicle for assessing tester competence.
  4. Provide testers with a means of tracking the quality of the system under test.  
28. Consider the following techniques. Which are static and which are dynamic techniques?
  1. Equivalence Partitioning.
  2. Use Case Testing.
  3. Data Flow Analysis.
  4. Exploratory Testing.
  5. Decision Testing.
  6. Inspections.
Data Flow Analysis and Inspections are static; Equivalence Partitioning, Use Case Testing, Exploratory Testing and Decision Testing are dynamic.
29. Why are static testing and dynamic testing described as complementary?
Because they share the aim of identifying defects but differ in the types of defect they find.
30. What are the phases of a formal review?
In contrast to informal reviews, formal reviews follow a formal process. A typical formal review process consists of six main steps:
  1. Planning
  2. Kick-off
  3. Preparation
  4. Review meeting
  5. Rework
  6. Follow-up.
31. What is the role of moderator in review process?
The moderator (or review leader) leads the review process. He or she determines, in co-operation with the author, the type of review, approach and the composition of the review team. The moderator performs the entry check and the follow-up on the rework, in order to control the quality of the input and output of the review process. The moderator also schedules the meeting, disseminates documents before the meeting, coaches other team members, paces the meeting, leads possible discussions and stores the data that is collected.
32. What is an equivalence partition (also known as an equivalence class)?
An input or output range of values such that only one value in the range becomes a test case.
33. When should configuration management procedures be implemented?
During test planning.
34. A Type of functional Testing, which investigates the functions relating to detection of threats, such as virus from malicious outsiders?
Security Testing
35. Testing wherein we subject the target of the test to varying workloads to measure and evaluate the performance behaviours and the ability of the target to continue to function properly under these different workloads?
Load Testing
36. Testing activity which is performed to expose defects in the interfaces and in the interaction between integrated components is?
Integration Level Testing
37. What are the Structure-based (white-box) testing techniques?
Structure-based testing techniques (which are also dynamic rather than static) use the internal structure of the software to derive test cases. They are commonly called 'white-box' or 'glass-box' techniques (implying you can see into the system) since they require knowledge of how the software is implemented, that is, how it works. For example, a structural technique may be concerned with exercising loops in the software. Different test cases may be derived to exercise the loop once, twice, and many times. This may be done regardless of the functionality of the software.
38. When should “Regression Testing” be performed?
Regression testing should be performed after the software has changed or when the environment has changed.
39. What is negative and positive testing?
A negative test is when you put in an invalid input and expect to receive an error. Positive testing is when you put in a valid input and expect some action to be completed in accordance with the specification.
40. What is the purpose of a test completion criterion?
The purpose of a test completion criterion is to determine when to stop testing.
41. What can static analysis NOT find?
Memory leaks, for example.
42. What is the difference between re-testing and regression testing?
Re-testing ensures the original fault has been removed; regression testing looks for unexpected side effects.
43. What are the Experience-based testing techniques?
In experience-based techniques, people's knowledge, skills and background are a prime contributor to the test conditions and test cases. The experience of both technical and business people is important, as they bring different perspectives to the test analysis and design process. Due to previous experience with similar systems, they may have insights into what could go wrong, which is very useful for testing.
44. What type of review requires formal entry and exit criteria, including metrics?
Inspection
45. Could reviews or inspections be considered part of testing?
Yes, because both help detect faults and improve quality.
46. An input field takes the year of birth between 1900 and 2004 what are the boundary values for testing this field?
1899,1900,2004,2005
47. Which of the following tools would be involved in the automation of regression test? a. Data tester b. Boundary tester c. Capture/Playback d. Output comparator.
d. Output comparator
48. To test a function, what does a programmer have to write that calls the function to be tested and passes it test data?
Driver
49. What is the one Key reason why developers have difficulty testing their own work?
Lack of Objectivity
50. “How much testing is enough?”
The answer depends on the risk for your industry, contract and special requirements.
51. When should testing be stopped?
It depends on the risks for the system being tested. There are some criteria on the basis of which you can stop testing:
  1. Deadlines (testing, release)
  2. Test budget has been depleted
  3. Bug rate falls below a certain level
  4. Test cases completed with a certain percentage passed
  5. Alpha or beta period for testing ends
  6. Coverage of code, functionality or requirements is met to a specified point
52. Which of the following is the main purpose of the integration strategy for integration testing in the small?
The main purpose of the integration strategy is to specify which modules to combine when and how many at once.
53. What are semi-random test cases?
Semi-random test cases are produced when we take random test cases and apply equivalence partitioning to them; this removes the redundant test cases, giving us semi-random test cases.
54. Given the following code, which statement is true about the minimum number of test cases required for full statement and branch coverage?
     Read p
     Read q
     IF p+q> 100
          THEN Print "Large"
    ENDIF
    IF p > 50
          THEN Print "p Large"
    ENDIF
1 test for statement coverage, 2 for branch coverage
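The fragment can be sketched in Python to make the coverage counts concrete (the function name is illustrative):

```python
def classify(p, q):
    # Two independent decisions, matching the pseudocode above.
    messages = []
    if p + q > 100:
        messages.append("Large")
    if p > 50:
        messages.append("p Large")
    return messages

# One test executes every statement (both conditions true):
assert classify(60, 60) == ["Large", "p Large"]
# Branch coverage additionally needs the false outcome of each decision:
assert classify(10, 10) == []
```

Because the two IFs are not nested, a single input can drive both conditions true at once, so one test gives statement coverage; a second input with both conditions false completes branch coverage.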
55.  What is black box testing? What are the different black box testing techniques?
Black box testing is the software testing method which is used to test the software without knowing the internal structure of code or program. This testing is usually done to check the functionality of an application. The different black box testing techniques are
  1. Equivalence Partitioning
  2. Boundary value analysis
  3. Cause effect graphing
56. Which review is normally used to evaluate a product to determine its suitability for intended use and to identify discrepancies?
Technical Review.
57. Why we use decision tables?
The techniques of equivalence partitioning and boundary value analysis are often applied to specific situations or inputs. However, if different combinations of inputs result in different actions being taken, this can be more difficult to show using equivalence partitioning and boundary value analysis, which tend to be more focused on the user interface. The other two specification-based techniques, decision tables and state transition testing, are more focused on business logic or business rules. A decision table is a good way to deal with combinations of things (e.g. inputs). This technique is sometimes also referred to as a 'cause-effect' table, because there is an associated logic diagramming technique called 'cause-effect graphing' which was sometimes used to help derive the decision table.
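As an illustrative sketch, a decision table can be expressed directly in code. The rules below are hypothetical, loosely modelled on the printer-cartridge wholesaler from question 3 (minimum order 5, 20% discount from 100):

```python
# Each row of the decision table maps a combination of condition
# outcomes to an action.
DECISION_TABLE = {
    # (qty >= minimum, qty >= 100): action
    (False, False): "reject: below minimum order",
    (True,  False): "accept: no discount",
    (True,  True):  "accept: 20% discount",
}

def process_order(qty, minimum=5):
    rule = (qty >= minimum, qty >= 100)
    # Combinations not listed (here, the impossible False/True row)
    # fall through to a default.
    return DECISION_TABLE.get(rule, "invalid combination")

assert process_order(4) == "reject: below minimum order"
assert process_order(50) == "accept: no discount"
assert process_order(100) == "accept: 20% discount"
```

Writing the table out this way also makes infeasible combinations visible: an order cannot be at least 100 yet below the minimum of 5, so that row is simply absent.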
58. Faults found should be originally documented by whom?
By testers.
59. Which is the current formal world-wide recognized documentation standard?
There isn’t one.
60. Which of the following is the review participant who has created the item to be reviewed?
Author
61. A number of critical bugs are fixed in software. All the bugs are in one module, related to reports. The test manager decides to do regression testing only on the reports module.
Regression testing should be done on other modules as well because fixing one module may affect other modules.
62. Why does the boundary value analysis provide good test cases?
Because errors are frequently made during programming of the different cases near the ‘edges’ of the range of values.
63. What makes an inspection different from other review types?
It is led by a trained leader, uses formal entry and exit criteria and checklists.
64. Why can the tester be dependent on configuration management?
Because configuration management assures that we know the exact version of the testware and the test object.
65. What is a V-Model?
A software development model that illustrates how testing activities integrate with software development phases
66. What is maintenance testing?
Triggered by modifications, migration or retirement of existing software
67. What is test coverage?
Test coverage measures in some specific way the amount of testing performed by a set of tests (derived in some other way, e.g. using specification-based techniques). Wherever we can count things and can tell whether or not each of those things has been tested by some test, then we can measure coverage.
68. Why is incremental integration preferred over “big bang” integration?
Because incremental integration has better early defects screening and isolation ability
69. When do we prepare RTM (Requirement traceability matrix), is it before test case designing or after test case designing?
It would be before test case designing. Requirements should already be traceable from Review activities since you should have traceability in the Test Plan already. This question also would depend on the organisation. If the organisations do test after development started then requirements must be already traceable to their source. To make life simpler use a tool to manage requirements.
70. What is called the process starting with the terminal modules?
Bottom-up integration
71. During which test activity could faults be found most cost effectively?
During test planning
72. The purpose of requirement phase is
To freeze requirements, to understand user needs, to define the scope of testing
73. Why do we split testing into distinct stages?
We split testing into distinct stages for the following reasons:
  1. Each test stage has a different purpose
  2. It is easier to manage testing in stages
  3. We can run different tests in different environments
  4. Performance and quality of the testing are improved using phased testing
74. What is DRE?
To measure test effectiveness, a powerful metric known as DRE (Defect Removal Efficiency) is used. From this metric we know how many of the bugs were found by the set of test cases. The formula for calculating DRE is:
DRE = number of bugs found while testing / (number of bugs found while testing + number of bugs found by the user)
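The formula can be sketched as a small Python helper (names and figures are illustrative):

```python
def dre(defects_found_in_testing, defects_found_by_users):
    """Defect Removal Efficiency as a percentage."""
    total = defects_found_in_testing + defects_found_by_users
    return 100.0 * defects_found_in_testing / total

# e.g. 80 defects caught during testing, 20 escaped to users:
assert dre(80, 20) == 80.0
```

A DRE close to 100% means almost all defects were removed before release; the metric degrades as more defects escape to users.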
75. Which of the following is likely to benefit most from the use of test tools providing test capture and replay facilities? a) Regression testing b) Integration testing c) System testing d) User acceptance testing
Regression testing
76. How would you estimate the amount of re-testing likely to be required?
Metrics from previous similar projects and discussions with the development team
77. What does data flow analysis study?
The use of data on paths through the code.
78. What is Alpha testing?
Pre-release testing by end user representatives at the developer’s site.
79. What is a failure?
Failure is a departure from specified behaviour.
80. What are Test comparators?
Is it really a test if you put some inputs into some software, but never look to see whether the software produces the correct result? The essence of testing is to check whether the software produces the correct result, and to do that, we must compare what the software produces to what it should produce. A test comparator helps to automate aspects of that comparison.
81. Who is responsible for documenting all the issues, problems and open points that were identified during the review meeting?
Scribe
82. What is the main purpose of an informal review?
An inexpensive way to get some benefit.
83. What is the purpose of test design technique?
Identifying test conditions and Identifying test cases
84. When testing a grade calculation system, a tester determines that all scores from 90 to 100 will yield a grade of A, but scores below 90 will not. This analysis is known as:
Equivalence partitioning
85. A test manager wants to use the resources available for the automated testing of a web application. The best choice is:
Tester, test automater, web specialist, DBA
86. During the testing of a module, tester ‘X’ finds a bug and assigns it to the developer. But the developer rejects it, saying that it’s not a bug. What should ‘X’ do?
Send the detailed information about the bug encountered and check its reproducibility.
87. A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages.
Big-Bang Testing
88. In practice, which life cycle model may have more, fewer or different levels of development and testing, depending on the project and the software product? For example, there may be component integration testing after component testing, and system integration testing after system testing.
V-Model
89. Which technique can be used to achieve input and output coverage? It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
Equivalence partitioning
90. “This life cycle model is basically driven by schedule and budget risks” This statement is best suited for…
91. In which order should tests be run?
The most important ones must be tested first.
92. The later in the development life cycle a fault is discovered, the more expensive it is to fix. Why?
The fault has been built into more documentation, code, tests, etc
93. What is Coverage measurement?
It is a partial measure of test thoroughness.
94. What is Boundary value testing?
Boundary value testing tests conditions on, below and above the edges of input and output equivalence classes. For instance, take a bank application where you can withdraw a maximum of Rs.20,000 and a minimum of Rs.100. In boundary value testing we test only around the exact boundaries rather than values in the middle: we test on each limit, just above the maximum limit and just below the minimum limit.
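A minimal Python sketch of the bank example (the validator name and the simple range check are assumptions for illustration):

```python
MIN_WITHDRAWAL, MAX_WITHDRAWAL = 100, 20_000

def can_withdraw(amount):
    # Hypothetical rule: amount must lie within the allowed limits.
    return MIN_WITHDRAWAL <= amount <= MAX_WITHDRAWAL

# Boundary value tests: on each edge, and just outside each edge.
assert can_withdraw(100) and can_withdraw(20_000)        # on the boundaries
assert not can_withdraw(99) and not can_withdraw(20_001) # just outside
```

A common off-by-one bug (e.g. writing `<` instead of `<=`) would be missed by mid-range values like 5,000 but is caught immediately by the boundary cases.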
95. What is Fault Masking?
Error condition hiding another error condition.
96. What does COTS represent?
Commercial off The Shelf.
97. The purpose of which is to allow specific tests to be carried out on a system or network that resembles as closely as possible the environment where the item under test will be used upon release?
Test Environment
98. What can be thought of as being based on the project plan, but with greater amounts of detail?
Phase Test Plan
99. What is exploratory testing?
 Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution. The planning involves the creation of a test charter, a short declaration of the scope of a short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be used. The test design and test execution activities are performed in parallel typically without formally documenting the test conditions, test cases or test scripts. This does not mean that other, more formal testing techniques will not be used. For example, the tester may decide to use boundary value analysis but will think through and test the most important boundary values without necessarily writing them down. Some notes will be written during the exploratory-testing session, so that a report can be produced afterwards.
100. What is “use case testing”?
A “use case” is used to identify and execute the functional requirements of an application from start to finish, and the technique used to do this is known as “Use Case Testing”.
101. What is the difference between STLC (Software Testing Life Cycle) and SDLC (Software Development Life Cycle)?
The complete verification and validation of software is done in the SDLC, while the STLC only does validation of the system. The STLC is a part of the SDLC.
102. What is traceability matrix?
The relationship between test cases and requirements is shown with the help of a document. This document is known as traceability matrix.
103. What is Equivalence partitioning testing?
Equivalence partitioning testing is a software testing technique which divides the application input test data into partitions of equivalent data, from which test cases can be derived so that each partition is covered at least once. This testing method reduces the time required for software testing.
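As a rough sketch, the idea can be shown with a hypothetical check that mirrors the grade example in question 84 (scores from 90 to 100 yield an A); one representative value per partition is enough:

```python
def grade_is_a(score):
    # Hypothetical rule: scores 90..100 yield grade A.
    return 90 <= score <= 100

# One representative per equivalence partition:
assert grade_is_a(95)       # partition 90..100 -> A
assert not grade_is_a(50)   # partition 0..89   -> not A
assert not grade_is_a(150)  # invalid partition above 100
```

Since every value inside a partition is expected to be treated the same way, testing 95 stands in for 90 through 100, which is exactly how the technique cuts down the number of test cases.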
104. What is white box testing and list the types of white box testing?
White box testing technique involves selection of test cases based on an analysis of the internal structure (code coverage, branch coverage, path coverage, condition coverage, etc.) of a component or system. It is also known as Code-Based testing or Structural testing. Different types of white box testing are
  1. Statement Coverage
  2. Decision Coverage
105.  In white box testing what do you verify?
In white box testing following steps are verified.
  1. Verify the security holes in the code
  2. Verify the incomplete or broken paths in the code
  3. Verify the flow of structure according to the document specification
  4. Verify the expected outputs
  5. Verify all conditional loops in the code to check the complete functionality of the application
  6. Verify the code line by line and cover 100% of it during testing
106. What is the difference between static and dynamic testing?
Static testing: In the static testing method, the code is not executed; testing is performed using the software documentation.
Dynamic testing: To perform this testing, the code is required to be in an executable form.
107. What is verification and validation?
Verification is the process of evaluating software during the development phase to decide whether the product satisfies the specified requirements. Validation is the process of evaluating software at the end of the development process to check whether it meets the customer's requirements.
108. What are different test levels?
There are four test levels
  1. Unit/component/program/module testing
  2. Integration testing
  3. System testing
  4. Acceptance testing
109. What is Integration testing?
Integration testing is a level of software testing process, where individual units of an application are combined and tested. It is usually performed after unit and functional testing.
110. What are the contents of test plans?
Test design, scope, test strategies and approach are various details that the Test Plan document consists of:
  1. Test case identifier
  2. Scope
  3. Features to be tested
  4. Features not to be tested
  5. Test strategy & Test approach
  6. Test deliverables
  7. Responsibilities
  8. Staffing and training
  9. Risk and Contingencies
111. What is the difference between UAT (User Acceptance Testing) and System testing?
System Testing: System testing is finding defects when the system undergoes testing as a whole; it is also known as end-to-end testing. In this type of testing, the application is exercised from beginning to end.
UAT: User Acceptance Testing (UAT) involves running a product through a series of specific tests which determine whether the product will meet the needs of its users.