Web Applications Testing Techniques

What is Web Testing?

Web Testing, in simple terms, is checking your web application for potential bugs before it is made live, i.e. before the code is moved into the production environment.
During this stage, issues such as web application security, the functioning of the site, its accessibility to disabled as well as regular users, and its ability to handle traffic are checked.

Web Application Testing Checklist:

Some or all of the following testing types may be performed depending on your web testing requirements.

1. Functionality Testing:

This is used to check that your product works as per the specifications you intended for it, as well as the functional requirements you charted out for it in your development documentation. Testing activities included:
Test that all links in your webpages are working correctly and make sure there are no broken links. Links to be checked will include -
  • Outgoing links
  • Internal links
  • Anchor Links
  • MailTo Links
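As an illustrative sketch of the link check, the Python standard library alone can collect and classify the four link types above; `site_host` is an assumed value defining what counts as "internal":

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkCollector(HTMLParser):
    """Collect href values from <a> tags so each link can be checked."""
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def classify_link(href, site_host="example.com"):
    """Bucket a link into the four categories from the checklist."""
    if href.startswith("mailto:"):
        return "mailto"
    if href.startswith("#"):
        return "anchor"
    host = urlparse(href).netloc
    if host and host != site_host:
        return "outgoing"
    return "internal"

html = ('<a href="/about">About</a><a href="#top">Top</a>'
        '<a href="mailto:hi@example.com">Mail</a>'
        '<a href="https://other.org/page">Ref</a>')
parser = LinkCollector()
parser.feed(html)
buckets = [classify_link(h) for h in parser.links]
```

A real broken-link check would then issue an HTTP request per outgoing/internal link and flag non-2xx responses.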
Test forms are working as expected. This will include -
  • Scripting checks on the form are working as expected. For example, if a user does not fill a mandatory field in a form, an error message is shown.
  • Check that default values are being populated
  • Once submitted, the data in the forms is submitted to a live database or is linked to a working email address
  • Forms are optimally formatted for better readability
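A minimal sketch of the mandatory-field check described above, assuming hypothetical field names `name` and `email`:

```python
def validate_form(data, mandatory=("name", "email")):
    """Return a dict of field -> error message; empty means the form is valid."""
    errors = {}
    for field in mandatory:
        # Missing keys and whitespace-only values both count as "not filled".
        if not data.get(field, "").strip():
            errors[field] = f"{field} is a mandatory field"
    return errors

# A blank email must trigger exactly one error message.
errors = validate_form({"name": "Ada", "email": "   "})
```

A UI-level test would then assert that each returned message is actually displayed next to its field.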
Test cookies are working as expected. Cookies are small files used by websites primarily to remember active user sessions, so you do not need to log in every time you visit a website. Cookie testing will include -
  • Testing cookies (sessions) are deleted either when cache is cleared or when they reach their expiry.
  • Delete cookies (sessions) and test that login credentials are asked for when you next visit the site.
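The expiry behaviour being tested can be sketched with a toy in-memory session store; everything here (class, method names) is an assumption for illustration, not a real cookie API:

```python
import time

class SessionStore:
    """Toy session store used to exercise the cookie-expiry checks above."""
    def __init__(self):
        self._sessions = {}  # session_id -> expiry timestamp

    def create(self, session_id, ttl_seconds, now=None):
        now = time.time() if now is None else now
        self._sessions[session_id] = now + ttl_seconds

    def is_active(self, session_id, now=None):
        """A session is valid only until its expiry time passes."""
        now = time.time() if now is None else now
        expiry = self._sessions.get(session_id)
        return expiry is not None and now < expiry

    def clear(self):
        """Simulates deleting cookies: the next visit must log in again."""
        self._sessions.clear()
```

The two checklist items map to: (a) a session created with a TTL is inactive once that time passes, and (b) after `clear()` the user is treated as logged out.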
Test HTML and CSS to ensure that search engines can crawl your site easily. This will include -
  • Checking for syntax errors
  • Readable color schemas
  • Standards compliance. Ensure standards such as W3C, OASIS, IETF, ISO, ECMA, or WS-I are followed.

Test business workflow - This will include -

  • Testing your end-to-end workflow/business scenarios, which take the user through a series of web pages to complete.
  • Test negative scenarios as well, such that when a user executes an unexpected step, an appropriate error message or help is shown in your web application.

2. Usability Testing:

Usability testing has now become a vital part of any web-based project. It can be carried out by testers like you or by a small focus group similar to the target audience of the web application.
Test the site navigation:
  • Menus, buttons, or links to different pages on your site should be easily visible and consistent on all webpages
Test the Content:
  • Content should be legible with no spelling or grammatical errors.
  • Images, if present, should contain "alt" text

3. Interface Testing:

The three areas to be tested here are - Application, Web Server, and Database Server
  • Application: Test that requests are sent correctly to the database and that output at the client side is displayed correctly. Errors, if any, must be caught by the application and shown only to the administrator, not to the end user.
  • Web Server: Test that the web server is handling all application requests without any service denial.
  • Database Server: Make sure queries sent to the database give expected results.
Test the system response when the connection between the three layers (Application, Web, and Database) cannot be established, and make sure an appropriate message is shown to the end user.

4. Database Testing:

The database is one critical component of your web application, and emphasis must be laid on testing it thoroughly. Testing activities will include -

  • Test whether any errors are shown while executing queries
  • Data integrity is maintained while creating, updating, or deleting data in the database.
  • Check the response time of queries and fine-tune them if necessary.
  • Test that data retrieved from your database is shown accurately in your web application

5. Compatibility Testing:

Compatibility tests ensure that your web application displays correctly across different devices. This will include -
Browser Compatibility Test: The same website may display differently in different browsers. You need to test that your web application is displayed correctly across browsers and that JavaScript, AJAX, and authentication are working fine. You may also check for mobile browser compatibility.
The rendering of web elements like buttons, text fields, etc. changes with the operating system. Make sure your website works fine for various combinations of operating systems such as Windows, Linux, and Mac, and browsers such as Firefox, Internet Explorer, Safari, etc.

6. Performance Testing:

This will ensure your site works under all loads. Testing activities will include, but are not limited to -

  • Website application response times at different connection speeds
  • Load test your web application to determine its behavior under normal and peak loads
  • Stress test your website to determine its breaking point when pushed beyond normal loads at peak time.
  • Test whether, if a crash occurs due to peak load, the site recovers from such an event
  • Make sure optimization techniques like gzip compression and browser- and server-side caching are enabled to reduce load times
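A minimal sketch of the response-time measurement, using a stand-in handler instead of real HTTP requests (a real test would call the site and vary connection speed/load):

```python
import statistics
import time

def measure_response_times(handler, requests, repeats=5):
    """Time a request handler over several runs; return avg/max in seconds."""
    samples = []
    for _ in range(repeats):
        for payload in requests:
            start = time.perf_counter()
            handler(payload)  # in a real test: issue an HTTP request
            samples.append(time.perf_counter() - start)
    return {"avg": statistics.mean(samples), "max": max(samples)}

def fake_handler(payload):
    # Stand-in for a real page request; does a little work so timing is nonzero.
    return sum(range(1000))

stats = measure_response_times(fake_handler, ["/", "/search?q=x"])
```

Load and stress testing then repeat this with many concurrent callers until throughput degrades or the breaking point is found.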

7. Security Testing:

Security testing is vital for e-commerce websites that store sensitive customer information such as credit card numbers. Testing activities will include -
  • Test that unauthorized access to secure pages is not permitted
  • Restricted files should not be downloadable without appropriate access
  • Check that sessions are automatically killed after prolonged user inactivity
  • When SSL certificates are in use, the website should redirect to encrypted SSL pages.

8. Crowd Testing:

You will select a large number of people (a crowd) to execute tests which otherwise would have been executed by a select group of people in the company. Crowdsourced testing is an interesting, up-and-coming concept and helps uncover many otherwise unnoticed defects.

Software Testing Interview Questions

1. What is the MAIN benefit of designing tests early in the life cycle?
It helps prevent defects from being introduced into the code.
2. What is risk-based testing?
Risk-based testing is the term used for an approach to creating a test strategy that is based on prioritizing tests by risk. The basis of the approach is a detailed risk analysis and prioritizing of risks by risk level. Tests to address each risk are then specified, starting with the highest risk first.
3. A wholesaler sells printer cartridges. The minimum order quantity is 5. There is a 20% discount for orders of 100 or more printer cartridges. You have been asked to prepare test cases using various values for the number of printer cartridges ordered. Which of the following groups contain three test inputs that would be generated using Boundary Value Analysis?
4, 5, 99
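To see where those values come from, here is a Python sketch of three-value boundary value analysis around the two boundaries in the question (5 and 100); the `discount` function is a hypothetical implementation of the wholesaler's pricing:

```python
def boundary_values(*boundaries):
    """For each boundary b, generate b-1, b, b+1 (three-value BVA)."""
    values = set()
    for b in boundaries:
        values.update((b - 1, b, b + 1))
    return sorted(values)

def discount(quantity):
    """Assumed order rules: minimum quantity 5, 20% discount from 100 up."""
    if quantity < 5:
        raise ValueError("minimum order quantity is 5")
    return 0.20 if quantity >= 100 else 0.0

inputs = boundary_values(5, 100)  # 4, 5, 6, 99, 100, 101
```

The answer "4, 5, 99" is a subset of this full boundary set; the other option groups in the original question mix in non-boundary values.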
4. What is the KEY difference between preventative and reactive approaches to testing?
Preventative tests are designed early; reactive tests are designed after the software has been produced.
5. What is the purpose of exit criteria?
The purpose of exit criteria is to define when a test level is completed.
6. What determines the level of risk?
 The likelihood of an adverse event and the impact of the event determine the level of risk.
7. When is decision table testing used?
Decision table testing is used for testing systems for which the specification takes the form of rules or cause-effect combinations. In a decision table the inputs are listed in a column, with the outputs in the same column but below the inputs. The remainder of the table explores combinations of inputs to define the outputs produced.
8. What is the MAIN objective when reviewing a software deliverable?
To identify defects in any software work product.
9. Which of the following defines the expected results of a test? Test case specification or test design specification.
Test case specification defines the expected results of a test.
10. What is the benefit of test independence?
It avoids author bias in defining effective tests.
11. As part of which test process do you determine the exit criteria?
The exit criteria are determined on the basis of 'Test Planning'.
12. What is beta testing?
Testing performed by potential customers at their own locations.
13. Given the following fragment of code, how many tests are required for 100% decision coverage?
if width > length
   thenbiggest_dimension = width
     if height > width
             thenbiggest_dimension = height
elsebiggest_dimension = length  
            if height > length 
                thenbiggest_dimension = height
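As a sketch of why both outcomes of every decision need their own test, here is a Python rendering of the fragment (variable names taken from the question):

```python
def biggest_dimension(width, length, height):
    """Python rendering of the code fragment in question 13."""
    if width > length:
        biggest = width
        if height > width:       # reached only when width > length
            biggest = height
    else:
        biggest = length
        if height > length:      # reached only when width <= length
            biggest = height
    return biggest

# Four tests cover the true and false outcome of every decision:
cases = [
    ((5, 3, 9), 9),  # outer True,  inner True
    ((5, 3, 1), 5),  # outer True,  inner False
    ((2, 3, 9), 9),  # outer False, inner True
    ((2, 3, 1), 3),  # outer False, inner False
]
```

No test can cover both inner decisions at once, since they sit on opposite branches of the outer `if`.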
14. You have designed test cases to provide 100% statement and 100% decision coverage for the following fragment of code. if width > length then biggest_dimension = width else biggest_dimension = length end_if The following has been added to the bottom of the code fragment above. print "Biggest dimension is " & biggest_dimension print "Width: " & width print "Length: " & length How many more test cases are required?
None, existing test cases can be used.
15. What is Rapid Application Development?
Rapid Application Development (RAD) is formally a parallel development of functions and subsequent integration. Components/functions are developed in parallel as if they were mini projects, the developments are time-boxed, delivered, and then assembled into a working prototype. This can very quickly give the customer something to see and use and to provide feedback regarding the delivery and their requirements. Rapid change and development of the product is possible using this methodology. However the product specification will need to be developed for the product at some point, and the project will need to be placed under more formal controls prior to going into production.
16. What is the difference between testing techniques and testing tools?
Testing technique: A process for ensuring that some aspect of the application system or unit functions properly. There may be few techniques but many tools.
Testing tool: A vehicle for performing a test process. The tool is a resource to the tester, but by itself is insufficient to conduct testing.
17. We use the output of the requirement analysis, the requirement specification as the input for writing …
User Acceptance Test Cases
18. Repeated Testing of an already tested program, after modification, to discover any defects introduced or uncovered as a result of the changes in the software being tested or in another related or unrelated software component:
Regression Testing
19. What is component testing?
Component testing, also known as unit, module and program testing, searches for defects in, and verifies the functioning of software (e.g. modules, programs, objects, classes, etc.) that are separately testable. Component testing may be done in isolation from the rest of the system depending on the context of the development life cycle and the system. Most often stubs and drivers are used to replace the missing software and simulate the interface between the software components in a simple manner. A stub is called from the software component to be tested; a driver calls a component to be tested.
20. What is functional system testing?
Testing the end-to-end functionality of the system as a whole is defined as functional system testing.
21. What are the benefits of Independent Testing?
Independent testers are unbiased and identify different defects at the same time.
22. In a REACTIVE approach to testing when would you expect the bulk of the test design work to be begun?
The bulk of the test design work is begun after the software or system has been produced.
23. What are the different Methodologies in Agile Development Model?
There are currently seven different agile methodologies that I am aware of:
  1. Extreme Programming (XP)
  2. Scrum
  3. Lean Software Development
  4. Feature-Driven Development
  5. Agile Unified Process
  6. Crystal
  7. Dynamic Systems Development Model (DSDM) 
24. Which activity in the fundamental test process includes evaluation of the testability of the requirements and system?
A ‘Test Analysis’ and ‘Design’ includes evaluation of the testability of the requirements and system.
25. What is typically the MOST important reason to use risk to drive testing efforts?
Because testing everything is not feasible.
26. What is random/monkey testing? When it is used?
Random testing is often known as monkey testing. In this type of testing, data is generated randomly, often using a tool or automated mechanism. The system is tested with this randomly generated input and the results are analysed accordingly. This type of testing is less reliable; hence it is normally used by beginners and to see whether the system will hold up under adverse conditions.
27. Which of the following are valid objectives for incident reports?
  1. Provide developers and other parties with feedback about the problem to enable identification, isolation and correction as necessary.
  2. Provide ideas for test process improvement.
  3. Provide a vehicle for assessing tester competence.
  4. Provide testers with a means of tracking the quality of the system under test.  
28. Consider the following techniques. Which are static and which are dynamic techniques?
  1. Equivalence Partitioning.
  2. Use Case Testing.
  3. Data Flow Analysis.
  4. Exploratory Testing.
  5. Decision Testing.
  6. Inspections.
Data Flow Analysis and Inspections are static; Equivalence Partitioning, Use Case Testing, Exploratory Testing and Decision Testing are dynamic.
29. Why are static testing and dynamic testing described as complementary?
Because they share the aim of identifying defects but differ in the types of defect they find.
30. What are the phases of a formal review?
In contrast to informal reviews, formal reviews follow a formal process. A typical formal review process consists of six main steps:
  1. Planning
  2. Kick-off
  3. Preparation
  4. Review meeting
  5. Rework
  6. Follow-up.
31. What is the role of moderator in review process?
The moderator (or review leader) leads the review process. He or she determines, in co-operation with the author, the type of review, approach and the composition of the review team. The moderator performs the entry check and the follow-up on the rework, in order to control the quality of the input and output of the review process. The moderator also schedules the meeting, disseminates documents before the meeting, coaches other team members, paces the meeting, leads possible discussions and stores the data that is collected.
32. What is an equivalence partition (also known as an equivalence class)?
An input or output range of values such that only one value in the range needs to become a test case, because all values in the partition are expected to be handled in the same way.
33. When should configuration management procedures be implemented?
During test planning.
34. A Type of functional Testing, which investigates the functions relating to detection of threats, such as virus from malicious outsiders?
Security Testing
35. Testing wherein we subject the target of the test to varying workloads to measure and evaluate the performance behaviour and the ability of the target to continue to function properly under these different workloads?
Load Testing
36. Testing activity which is performed to expose defects in the interfaces and in the interaction between integrated components is?
Integration Level Testing
37. What are the Structure-based (white-box) testing techniques?
Structure-based testing techniques (which are also dynamic rather than static) use the internal structure of the software to derive test cases. They are commonly called 'white-box' or 'glass-box' techniques (implying you can see into the system) since they require knowledge of how the software is implemented, that is, how it works. For example, a structural technique may be concerned with exercising loops in the software. Different test cases may be derived to exercise the loop once, twice, and many times. This may be done regardless of the functionality of the software.
38. When “Regression Testing” should be performed?
After the software has changed or when the environment has changed Regression testing should be performed.
39. What is negative and positive testing?
A negative test is when you put in an invalid input and expect an error. Positive testing is when you put in a valid input and expect some action to be completed in accordance with the specification.
40. What is the purpose of a test completion criterion?
The purpose of test completion criterion is to determine when to stop testing
41. What can static analysis NOT find?
Defects that only show up at run time - for example, memory leaks.
42. What is the difference between re-testing and regression testing?
Re-testing ensures the original fault has been removed; regression testing looks for unexpected side effects.
43. What are the Experience-based testing techniques?
In experience-based techniques, people's knowledge, skills and background are a prime contributor to the test conditions and test cases. The experience of both technical and business people is important, as they bring different perspectives to the test analysis and design process. Due to previous experience with similar systems, they may have insights into what could go wrong, which is very useful for testing.
44. What type of review requires formal entry and exit criteria, including metrics?
Inspection.
45. Could reviews or inspections be considered part of testing?
Yes, because both help detect faults and improve quality.
46. An input field takes the year of birth between 1900 and 2004. What are the boundary values for testing this field?
1899, 1900, 2004, 2005.
47. Which of the following tools would be involved in the automation of regression test? a. Data tester b. Boundary tester c. Capture/Playback d. Output comparator.
d. Output comparator
48. To test a function, what must a programmer write that calls the function to be tested and passes it test data?
A driver.
49. What is the one Key reason why developers have difficulty testing their own work?
Lack of Objectivity
50. “How much testing is enough?”
The answer depends on the risk for your industry, contract and special requirements.
51. When should testing be stopped?
It depends on the risks for the system being tested. There are some criteria on the basis of which you can stop testing.
  1. Deadlines (testing, release)
  2. Test budget has been depleted
  3. Bug rate falls below a certain level
  4. Test cases completed with a certain percentage passed
  5. Alpha or beta testing period ends
  6. Coverage of code, functionality, or requirements is met to a specified point
52. Which of the following is the main purpose of the integration strategy for integration testing in the small?
The main purpose of the integration strategy is to specify which modules to combine when and how many at once.
53. What are semi-random test cases?
Semi-random test cases are produced when we generate random test cases and then apply equivalence partitioning to them; this removes redundant test cases, giving us semi-random test cases.
54. Given the following code, which statement is true about the minimum number of test cases required for full statement and branch coverage?
Read p
Read q
IF p+q > 100
    THEN Print "Large"
IF p > 50
    THEN Print "p Large"
1 test for statement coverage, 2 for branch coverage
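A Python rendering of this fragment (a sketch; the print statements are modeled as appended strings) makes the coverage counts easy to check:

```python
def classify(p, q):
    """Python rendering of the fragment in question 54."""
    printed = []
    if p + q > 100:
        printed.append("Large")
    if p > 50:
        printed.append("p Large")
    return printed

# Statement coverage: one test where both conditions are true
# executes every statement.
assert classify(60, 50) == ["Large", "p Large"]
# Branch coverage: one more test where both conditions are false
# covers the remaining (false) branches.
assert classify(10, 10) == []
```

Because the two `IF`s are independent (not nested), one all-true and one all-false test suffice.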
55. What is black box testing? What are the different black box testing techniques?
Black box testing is a software testing method used to test the software without knowing the internal structure of the code or program. This testing is usually done to check the functionality of an application. The different black box testing techniques are
  1. Equivalence Partitioning
  2. Boundary value analysis
  3. Cause effect graphing
56. Which review is normally used to evaluate a product to determine its suitability for intended use and to identify discrepancies?
Technical Review.
57. Why do we use decision tables?
The techniques of equivalence partitioning and boundary value analysis are often applied to specific situations or inputs. However, if different combinations of inputs result in different actions being taken, this can be more difficult to show using equivalence partitioning and boundary value analysis, which tend to be more focused on the user interface. The other two specification-based techniques, decision tables and state transition testing, are more focused on business logic or business rules. A decision table is a good way to deal with combinations of things (e.g. inputs). This technique is sometimes also referred to as a 'cause-effect' table, because there is an associated logic diagramming technique called 'cause-effect graphing' which was sometimes used to help derive the decision table.
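As an illustrative sketch, a decision table can be coded directly as a lookup from condition combinations to actions; the membership/quantity discount rule below is invented for the example:

```python
# Decision table for a hypothetical discount rule.
# Conditions: (is_member, order quantity >= 100) -> action: discount rate
rules = {
    (True,  True):  0.30,
    (True,  False): 0.10,
    (False, True):  0.20,
    (False, False): 0.00,
}

def discount_rate(is_member, qty):
    """Each combination of conditions maps to exactly one action."""
    return rules[(is_member, qty >= 100)]
```

Testing every entry of the table (four cases here) guarantees every rule combination is exercised, which equivalence partitioning alone would not.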
58. Faults found should be originally documented by whom?
By testers.
59. Which is the current formal world-wide recognized documentation standard?
There isn’t one.
60. Which of the following is the review participant who has created the item to be reviewed?
The author.
61. A number of critical bugs are fixed in software. All the bugs are in one module, related to reports. The test manager decides to do regression testing only on the reports module.
Regression testing should be done on other modules as well because fixing one module may affect other modules.
62. Why does the boundary value analysis provide good test cases?
Because errors are frequently made during programming of the different cases near the ‘edges’ of the range of values.
63. What makes an inspection different from other review types?
It is led by a trained leader, uses formal entry and exit criteria and checklists.
64. Why can the tester be dependent on configuration management?
Because configuration management assures that we know the exact version of the testware and the test object.
65. What is a V-Model?
A software development model that illustrates how testing activities integrate with software development phases
66. What is maintenance testing?
Maintenance testing is triggered by modifications, migration, or retirement of existing software.
67. What is test coverage?
Test coverage measures in some specific way the amount of testing performed by a set of tests (derived in some other way, e.g. using specification-based techniques). Wherever we can count things and can tell whether or not each of those things has been tested by some test, then we can measure coverage.
68. Why is incremental integration preferred over “big bang” integration?
Because incremental integration has better early defects screening and isolation ability
69. When do we prepare the RTM (Requirement Traceability Matrix) - before or after test case designing?
It would be before test case designing. Requirements should already be traceable from review activities, since you should have traceability in the test plan already. This also depends on the organisation: if the organisation tests after development has started, then requirements must already be traceable to their source. To make life simpler, use a tool to manage requirements.
70. What is called the process starting with the terminal modules?
Bottom-up integration
71. During which test activity could faults be found most cost effectively?
During test planning
72. The purpose of requirement phase is
To freeze requirements, to understand user needs, to define the scope of testing
73. Why we split testing into distinct stages?
We split testing into distinct stages because of following reasons,
  1. Each test stage has a different purpose
  2. It is easier to manage testing in stages
  3. We can run different tests in different environments
  4. Performance and quality of the testing is improved using phased testing
74. What is DRE?
DRE (Defect Removal Efficiency) is a powerful metric used to measure test effectiveness. From this metric we know how many of the total bugs were found by our test cases. The formula for calculating DRE is
DRE = (number of bugs found while testing) / (number of bugs found while testing + number of bugs found by users)
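The formula can be sketched in a few lines of Python (expressed here as a fraction; multiply by 100 for a percentage):

```python
def dre(found_in_testing, found_by_users):
    """Defect Removal Efficiency as a fraction of all known defects."""
    total = found_in_testing + found_by_users
    if total == 0:
        return 1.0  # no defects found anywhere; treat as fully effective
    return found_in_testing / total

# If testing found 80 bugs and users later found 20, DRE is 0.8 (80%).
rate = dre(80, 20)
```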
75. Which of the following is likely to benefit most from the use of test tools providing test capture and replay facilities? a) Regression testing b) Integration testing c) System testing d) User acceptance testing
Regression testing
76. How would you estimate the amount of re-testing likely to be required?
Metrics from previous similar projects and discussions with the development team
77. What does data flow analysis study?
The use of data on paths through the code.
78. What is Alpha testing?
Pre-release testing by end user representatives at the developer’s site.
79. What is a failure?
Failure is a departure from specified behaviour.
80. What are Test comparators?
Is it really a test if you put some inputs into some software, but never look to see whether the software produces the correct result? The essence of testing is to check whether the software produces the correct result, and to do that, we must compare what the software produces to what it should produce. A test comparator helps to automate aspects of that comparison.
81. Who is responsible for documenting all the issues, problems, and open points that were identified during the review meeting?
The scribe (recorder).
82. What is the main purpose of an informal review?
It is an inexpensive way to get some benefit.
83. What is the purpose of test design technique?
Identifying test conditions and Identifying test cases
84. When testing a grade calculation system, a tester determines that all scores from 90 to 100 will yield a grade of A, but scores below 90 will not. This analysis is known as:
Equivalence partitioning.
85. A test manager wants to use the resources available for the automated testing of a web application. The best choice is:
Tester, test automator, web specialist, DBA.
86. During the testing of a module, tester 'X' finds a bug and assigns it to a developer. But the developer rejects it, saying that it's not a bug. What should 'X' do?
Send detailed information about the bug encountered and check its reproducibility.
87. A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages.
Big-Bang Testing
88. In practice, which life cycle model may have more, fewer, or different levels of development and testing, depending on the project and the software product? For example, there may be component integration testing after component testing, and system integration testing after system testing.
The V-model.
89. Which technique can be used to achieve input and output coverage? It can be applied to human input, input via interfaces to a system, or interface parameters in integration testing.
Equivalence partitioning.
90. “This life cycle model is basically driven by schedule and budget risks” This statement is best suited for…
91. In which order should tests be run?
The most important ones must be tested first.
92. The later in the development life cycle a fault is discovered, the more expensive it is to fix. Why?
The fault has been built into more documentation, code, tests, etc
93. What is Coverage measurement?
It is a partial measure of test thoroughness.
94. What is Boundary value testing?
Test boundary conditions on, below, and above the edges of input and output equivalence classes. For instance, take a bank application where you can withdraw a maximum of Rs. 20,000 and a minimum of Rs. 100. In boundary value testing we test at the boundaries rather than in the middle - that is, just below, at, and just above the minimum and maximum limits.
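A sketch of the bank example, with the limits from the text assumed to be inclusive:

```python
MIN_WITHDRAWAL = 100      # assumed inclusive minimum from the example
MAX_WITHDRAWAL = 20_000   # assumed inclusive maximum from the example

def can_withdraw(amount):
    """A withdrawal is valid only between the two limits, inclusive."""
    return MIN_WITHDRAWAL <= amount <= MAX_WITHDRAWAL

# Three-value boundary tests at each edge:
low_edge = [can_withdraw(a) for a in (99, 100, 101)]
high_edge = [can_withdraw(a) for a in (19_999, 20_000, 20_001)]
```

An off-by-one bug (e.g. `<` instead of `<=`) flips exactly one of these boundary results, which mid-range values like Rs. 5,000 would never catch.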
95. What is Fault Masking?
Error condition hiding another error condition.
96. What does COTS represent?
Commercial off The Shelf.
97. The purpose of which of the following is to allow specific tests to be carried out on a system or network that resembles as closely as possible the environment where the item under test will be used upon release?
Test Environment
98. What can be thought of as being based on the project plan, but with greater amounts of detail?
Phase Test Plan
99. What is exploratory testing?
 Exploratory testing is a hands-on approach in which testers are involved in minimum planning and maximum test execution. The planning involves the creation of a test charter, a short declaration of the scope of a short (1 to 2 hour) time-boxed test effort, the objectives and possible approaches to be used. The test design and test execution activities are performed in parallel typically without formally documenting the test conditions, test cases or test scripts. This does not mean that other, more formal testing techniques will not be used. For example, the tester may decide to use boundary value analysis but will think through and test the most important boundary values without necessarily writing them down. Some notes will be written during the exploratory-testing session, so that a report can be produced afterwards.
100. What is “use case testing”?
In order to identify and execute the functional requirements of an application from start to finish, a "use case" is used, and the technique used to do this is known as "Use Case Testing".
101. What is the difference between STLC (Software Testing Life Cycle) and SDLC (Software Development Life Cycle)?
The complete verification and validation of software is done in the SDLC, while the STLC only does validation of the system. The STLC is a part of the SDLC.
102. What is traceability matrix?
The relationship between test cases and requirements is shown with the help of a document. This document is known as traceability matrix.
103. What is equivalence partitioning testing?
Equivalence partitioning testing is a software testing technique which divides the application input test data into partitions of equivalent data, from which test cases can be derived, covering each partition at least once. This testing method reduces the time required for software testing.
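As an illustrative sketch, assume an age field with three partitions (invalid, minor, adult); one representative value per partition is enough, and the boundary values here (0, 18, 120) are assumptions for the example:

```python
def partition(age):
    """Assumed equivalence partitions for an age input field."""
    if age < 0 or age > 120:
        return "invalid"
    return "minor" if age < 18 else "adult"

# One representative per partition instead of testing every possible age:
representatives = {-5: "invalid", 10: "minor", 40: "adult", 130: "invalid"}
```

Four test cases stand in for the whole input domain, which is where the time saving comes from; boundary value analysis would then add tests at the partition edges.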
104. What is white box testing and list the types of white box testing?
White box testing technique involves selection of test cases based on an analysis of the internal structure (Code coverage, branches coverage, paths coverage, condition coverage etc.)  of a component or system. It is also known as Code-Based testing or Structural testing.  Different types of white box testing are
  1. Statement Coverage
  2. Decision Coverage
105.  In white box testing what do you verify?
In white box testing following steps are verified.
  1. Verify the security holes in the code
  2. Verify the incomplete or broken paths in the code
  3. Verify the flow of structure according to the document specification
  4. Verify the expected outputs
  5. Verify all conditional loops in the code to check the complete functionality of the application
  6. Verify the code line by line to achieve complete coverage
106. What is the difference between static and dynamic testing?
Static testing: During Static testing method, the code is not executed and it is performed using the software documentation.
Dynamic testing:  To perform this testing the code is required to be in an executable form.
107. What is verification and validation?
Verification is a process of evaluating software  at development phase and to decide whether the product of a given  application satisfies the specified requirements. Validation is the process of evaluating software at the end of the development process and to check whether it meets the customer requirements.
108. What are different test levels?
There are four test levels
  1. Unit/component/program/module testing
  2. Integration testing
  3. System testing
  4. Acceptance testing
109. What is Integration testing?
Integration testing is a level of the software testing process where individual units of an application are combined and tested together. It is usually performed after unit and functional testing.
110. What are the contents of a test plan?
Test design, scope, test strategies, and approach are various details that the test plan document consists of.
  1. Test case identifier
  2. Scope
  3. Features to be tested
  4. Features not to be tested
  5. Test strategy & Test approach
  6. Test deliverables
  7. Responsibilities
  8. Staffing and training
  9. Risk and Contingencies
111. What is the difference between UAT (User Acceptance Testing) and System Testing?
System Testing: System testing is finding defects when the system undergoes testing as a whole; it is also known as end-to-end testing. In this type of testing, the application is exercised from beginning to end.
UAT: User Acceptance Testing (UAT) involves running a product through a series of specific tests which determine whether the product will meet the needs of its users.