Testing Start Process

Testing is sometimes incorrectly thought of as an after-the-fact activity, performed only after programming is done. Instead, testing should be performed at every stage of the product's development. Test data sets must be derived, and their correctness and consistency should be monitored throughout the development process. 
If we divide the lifecycle of software development into “Requirements Analysis”, “Design”, “Programming/Construction” and “Operation and Maintenance”, then testing should accompany each of these phases. If testing is isolated as a single phase late in the cycle, errors in the problem statement or design may incur exorbitant costs. Not only must the original error be corrected, but the entire structure built upon it must also be changed. Therefore, testing should not be isolated as an inspection activity. Rather, testing should be involved throughout the SDLC in order to bring out a quality product.

Testing Activities in Each Phase

The following testing activities should be performed during each phase:

Requirements Analysis
    - Determine correctness
    - Generate functional test data

Design
    - Determine correctness and consistency
    - Generate structural and functional test data

Programming/Construction
    - Determine correctness and consistency
    - Generate structural and functional test data
    - Apply test data
    - Refine test data

Operation and Maintenance
    - Retest

Operations and Maintenance

Corrections, modifications and extensions are bound to occur even for small programs and testing is required every time there is a change. Testing during maintenance is termed regression testing. The test set, the test plan, and the test results for the original program should exist.

Modifications must be made to accommodate the program changes, and then all portions of the program affected by the modifications must be re-tested. After regression testing is complete, the program and test documentation must be updated to reflect the changes.
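
The regression cycle just described can be sketched in a few lines: keep the original test set on file and re-run all of it after every modification. The program and test data below are hypothetical stand-ins.

```python
# Minimal regression-testing sketch: the saved test set from the
# original release is re-run in full after each change.
# The discount() function and its test data are invented examples.

def discount(price, is_member):
    """Program under maintenance: members get 10% off."""
    return round(price * 0.9, 2) if is_member else price

# Test set saved with the original program: ((inputs), expected output).
saved_test_set = [
    ((100.0, True), 90.0),
    ((100.0, False), 100.0),
    ((0.0, True), 0.0),
]

def regression_test(program, test_set):
    """Re-run every saved test; collect failures with inputs and results."""
    failures = []
    for args, expected in test_set:
        actual = program(*args)
        if actual != expected:
            failures.append((args, expected, actual))
    return failures

print(regression_test(discount, saved_test_set))  # [] means no regressions
```

An empty failure list means the modification broke nothing; after that, the program and test documentation are updated together, as noted above.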


Here the main testing points are:

- Check the code for consistency with design - the areas to check include modular structure, module interfaces, data structures, functions, algorithms and I/O handling.

- Perform the Testing process in an organized and systematic manner with test runs dated, annotated and saved. A plan or schedule can be used as a checklist to help the programmer organize testing efforts. If errors are found and changes made to the program, all tests involving the erroneous segment (including those which resulted in success previously) must be rerun and recorded.

- Ask a colleague for assistance - An independent party, other than the programmer of the specific part of the code, should analyze the development product at each phase. The programmer should explain the product to that party, who will then question the logic and search for errors with a checklist to guide the search. This is needed to locate errors the programmer has overlooked.

- Use available tools - the programmer should be familiar with various compilers and interpreters available on the system for the implementation language being used because they differ in their error analysis and code generation capabilities.

- Apply Stress to the Program - Testing should exercise and stress the program structure, the data structures, the internal functions and the externally visible functions or functionality. Both valid and invalid data should be included in the test set.
- Test one piece at a time - Pieces of code, individual modules and small collections of modules should be exercised separately before they are integrated into the total program, one by one. Errors are easier to isolate when the number of potential interactions is kept small. Instrumentation - insertion of some code into the program solely to measure various program characteristics - can be useful here. A tester should perform array bound checks, check loop control variables, determine whether key data values are within permissible ranges, trace program execution, and count the number of times a group of statements is executed.
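
A small sketch of the instrumentation idea, using an invented search routine: an inserted counter measures how many times the loop body executes, and an inserted assertion performs an array bound check.

```python
# Instrumentation sketch: code inserted solely to measure program
# characteristics. find_first_negative() and its inputs are illustrative.

counters = {"loop_body": 0}

def find_first_negative(values):
    for i in range(len(values)):
        counters["loop_body"] += 1     # instrumentation: execution count
        assert 0 <= i < len(values)    # instrumentation: array bound check
        if values[i] < 0:
            return i
    return -1

idx = find_first_negative([3, 7, -2, 5])
print(idx, counters["loop_body"])  # -> 2 3 (loop body ran three times)
```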

- Measure testing coverage/When should testing stop? - If errors are still found every time the program is executed, testing should continue. Because errors tend to cluster, modules appearing particularly error-prone require special scrutiny.

The metrics used to measure testing thoroughness include statement testing (whether each statement in the program has been executed at least once), branch testing (whether each exit from each branch has been executed at least once) and path testing (whether all logical paths, which may involve repeated execution of various segments, have been executed at least once). Statement testing is the coverage metric most frequently used as it is relatively simple to implement.
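
As a rough illustration of how statement testing can be measured, the sketch below uses Python's sys.settrace hook to record which lines of a small invented function execute; any body line never recorded marks a statement not yet covered.

```python
import sys

executed = set()  # line numbers observed during traced runs

def tracer(frame, event, arg):
    # Record each "line" event; returning tracer keeps tracing inside calls.
    if event == "line":
        executed.add(frame.f_lineno)
    return tracer

def classify(n):
    if n < 0:
        return "negative"
    return "non-negative"

sys.settrace(tracer)
classify(5)          # exercises only the non-negative path
sys.settrace(None)

# Statement coverage of classify() is still incomplete: the line that
# returns "negative" never executed, so a negative input is also needed
# before every statement has run at least once.
print(sorted(executed))
```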

The amount of testing depends on the cost of an error. Critical programs or functions require more thorough testing than the less significant functions.


Design

The design document aids in programming, communication, error analysis and test data generation. The requirements statement and the design document should together give the problem and the organization of the solution, i.e. what the program will do and how it will be done.

The design document should contain:

  • Principal data structures.

  • Functions, algorithms, heuristics or special techniques used for processing.

  • The program organization, how it will be modularized and categorized into external and internal interfaces.

  • Any additional information.

Here the testing activities should consist of:

- Analysis of the design to check its completeness and consistency - the total process should be analyzed to determine that no steps or special cases have been overlooked. Internal interfaces, I/O handling and data structures should especially be checked for inconsistencies.

- Analysis of the design to check whether it satisfies the requirements - check whether both the requirements document and the design document use the same form, format and units for input and output, and that all functions listed in the requirements document have been included in the design document. Selected test data generated during the requirements analysis phase should be manually simulated to determine whether the design will yield the expected values.

- Generation of test data based on the design - the tests generated should cover the structure as well as the internal functions of the design, such as the data structures, algorithms, functions, heuristics and general program structure. Standard, extreme and special values should be included, and the expected output should be recorded in the test data.

- Re-examination and refinement of the test data set generated at the requirements analysis phase.

The first two steps should also be performed by a colleague, not only by the designer/developer.

Requirements Analysis

The following test activities should be performed during this stage:

1.1 Invest in analysis at the beginning of the project - Having a clear, concise and formal statement of the requirements facilitates programming, communication, error analysis and test data generation.

The requirements statement should record the following information and decisions:

a. Program function - what the program must do.

b. The form, format, data types and units for input.

c. The form, format, data types and units for output.

d. How exceptions, errors and deviations are to be handled.

e. For scientific computations, the numerical method or at least the required accuracy of the solution.

f. The hardware/software environment required or assumed (e.g. the machine, the operating system, and the implementation language).

Deciding the above issues is one of the activities related to testing that should be performed during this stage.
1.2 Start developing the test set at the requirements analysis phase - Data should be generated that can be used to determine whether the requirements have been met. To do this, the input domain should be partitioned into classes of values that the program will treat in a similar manner and for each class a representative element should be included in the test data.

In addition, the following should also be included in the data set:

(1) boundary values

(2) any non-extreme input values that would require special handling.

The output domain should be treated similarly.

Invalid input requires the same analysis as valid input.
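
The partitioning just described can be made concrete. The sketch below derives a test set for a hypothetical percentage validator (valid inputs 0 to 100): one representative per class that the program treats uniformly, plus boundary values and just-outside invalid values.

```python
# Requirements-phase test set sketch for an invented validator.
# Each input class gets a representative; boundaries are added explicitly.

def valid_percentage(n):
    return 0 <= n <= 100

test_set = {
    "valid class representative": 50,
    "invalid low representative": -37,
    "invalid high representative": 250,
    "boundary low": 0,
    "boundary high": 100,
    "just outside low": -1,
    "just outside high": 101,
}

# Expected outcome for each case, taken from the stated requirement.
expected = {name: 0 <= value <= 100 for name, value in test_set.items()}
results = {name: valid_percentage(value) for name, value in test_set.items()}
print(results == expected)  # True: every class and boundary behaves as specified
```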

1.3 The correctness, consistency and completeness of the requirements should also be analyzed - Consider whether the correct problem is being solved, check for conflicts and inconsistencies among the requirements and consider the possibility of missing cases.

Game Tester Interview Questions

What Game Playing Platforms are You Familiar With?

  • The fact that you can score well on a particular video game played on one particular system does not qualify you to be a video game tester. Video game development companies often create games for several of the most popular video game systems, as well as the current PC and Mac platforms. To be effective as a game tester, you need to be proficient in all of those platforms.

What Kinds of Video Games Do You Play?

  • This question is one that you need to be prepared for by researching the game genres the company focuses on. If you are interviewing with a sports game developer, then your extensive experience in role-playing war games is not going to be effective. Prepare a list of genres that you are experienced in that apply to the company you are interviewing with.

Are You Familiar with Game Testing Terminology?

  • Video game testers are asked to evaluate several aspects of a game, including gameplay, graphics and programming defects. Terms such as controller lag, frame popping and frame rates are essential to understand in the video game testing industry. Do extensive research on these terms, understand their meaning and learn how to apply them when evaluating a video game.

Have You Played Our Latest Game and What Did You Think of It?

  • The interviewer is looking for a very specific answer to this question that will indicate you have the basic knowledge to be a video game tester. Apply your knowledge of testing terms and develop a practical evaluation, as opposed to an overall opinion of the game. The interviewer will want to see your ability to break down a video game and analyze its parts when answering this question.



I recommend that you have a college degree (even if it's from an online university) before applying for a job as a tester, but it's possible to get a testing job without one. But consider for a moment -- what is your ultimate goal? If you eventually want to become a designer or producer or move up into marketing or become an executive, a college degree is definitely helpful. If you just want to be a tester (and do not have any goals beyond that), then fine, a high school diploma might suffice. But guess what three attributes or skills you need first and foremost to be a tester...? These are the sort of things they'll grill you on if you apply for a QA job:

  •    Communication skills - The tester must be able to communicate in two ways: via the written word and via the spoken word.
         * Written communication skills. Bug reports are submitted in writing. They have to be clear and concise. The tester needs to be a gud speler (and needs to be fluent with punctuation marks and the Shift key). Darn my hide, I put that in parentheses, and it's really important. Let me say that again. A tester must type in complete sentences. A tester must understand, and habitually use, proper punctuation and capitalization. You cannot become a tester at a game company where everybody uses English, if you cannot communicate properly in written English. Here's an exercise that will help you...
             To develop your written communication skills, write an essay or a game critique or a game idea. As you write, put yourself in the place of the reader. Every time you express an idea that could raise a question in the mind of the reader, answer the question. By the time your article is complete, there should be no questions in the mind of the reader - except questions that you want to remain unanswered.
            - The bug-writing exercise. Check out this example of a written bug report: https://bugzilla.mozilla.org/show_bug.cgi?id=407098. Now make up a totally different bug, on a different platform. That example bug describes a Firefox bug (Bugzilla is the bug-tracking system of Mozilla, who makes Firefox), but it contains the important elements of a good bug:
        1. The actual result (what happened that shouldn't have);
        2. The expected result (what should have happened instead);
        3. Steps to reproduce the bug.
        So this exercise is to write an imaginary bug for a PS3 game. Make up a bug; be creative. You have to write your bug in a word processor or text editor (you can't report a bug to a game publisher using Bugzilla, and you can't report a bug to a game publisher using that publisher's bug-tracking system, since you don't have access to it). And after you write the PS3 bug, write another one for a DS game. Write a few bugs and become comfortable with bug-writing.
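
As an illustration only, here is one invented PS3 bug expressed as a structured record; every detail is made up, but it carries the three elements of a good bug listed above.

```python
# Hypothetical bug report for the PS3 exercise; all details are invented.
bug_report = {
    "summary": "Pause-menu music keeps playing after returning to gameplay",
    "platform": "PS3",
    "steps_to_reproduce": [
        "1. Start any mission and let the background music begin.",
        "2. Press Start to open the pause menu.",
        "3. Press Start again to resume gameplay.",
    ],
    "expected_result": "Gameplay music resumes and pause-menu music stops.",
    "actual_result": "Pause-menu and gameplay music play simultaneously.",
}

# A good bug has all three elements present and non-empty.
required = ("steps_to_reproduce", "expected_result", "actual_result")
print(all(bug_report[k] for k in required))  # True
```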
         * Verbal communication skills. The tester must be well-spoken. Words that come out of a tester's mouth must convey his thoughts clearly, giving information to the listener. Imagine these two exercises, which will help the tester in developing verbal communication skills. How a tester performs in these exercises also reveals the level of his existing verbal communication skills. Both of these exercises are best performed in neighboring cubicles -- the two people taking part in the exercise can easily converse but cannot see what the other is doing.
             - The paperclip exercise. In this exercise, the tester must describe a randomly-bent paper clip to another person who has a pencil and paper. The goal is for the tester to get the listener/"customer" to draw a picture of the bent paper clip, without the tester ever saying the words "paper clip" or describing what the object is made of or was originally used for in any way whatsoever. Simply describing how the paper clip looks in its present state, the tester must obtain a correct picture of the paper clip on the second person's piece of paper. It can be enlightening for the tester to see what the drawing looks like, after completing the exercise. This exercise can also be performed using pipecleaners or twist-ties. The clip should be bent in a flat (2D) shape, not a 3D shape, since the listener/"customer" is drawing on 2D paper.

            - The building blocks exercise. This exercise is used at Nintendo of America to train or test their Customer Support representatives, but I think it applies equally well to the communication skills needed for testing. Both parties to the exercise have identical boxes of wooden building blocks (it could also work with Legos, I suppose). The tester builds a structure from his building blocks and describes his structure to the other participant in the exercise. If the tester does it well, the two structures will be identical. If the two structures are not identical, the tester can learn how he ought to improve.
            - The telephone exercise. This is an actual question that a testing applicant was tested with. "Describe the use of a telephone." He thought it was a stupid question and gave a stupid answer. Don't do what he did! When you're applying for a QA job, you will be asked to prove that you'd make a good tester. So if you're asked how to use a common everyday appliance like a telephone, give a clear and coherent description of how to use it. "There are two uses of a telephone: it's for receiving calls, and it's for making calls." Then describe how to act when the telephone rings. Describe how this works for a user of a phone with a wired handset, a wireless handset, and a mobile phone. Then describe how to make a call - if you have a dial phone, if you have a touch-tone phone, and if you have a mobile phone. If you can't do this, you'll never get hired to test games.
  •    Computer literacy. Testers know how to take computers apart and put them back together. Testers know how to browse the Internet, and they know all about email, instant messaging, and chat room netiquette. Testers know how to troubleshoot installation issues, download drivers, update virus DAT files, and upgrade computers. Testers know how to use word processors, imaging programs, scanners, and modems. Testers are often called upon to make screen shots of games, so you need to know how to grab a shot, and crop it in Photoshop or GIMP. Especially important: know how to use a database program. Check out Bugzilla and Mantis, fool around with them to create some sample bug reports.
  •    Game literacy. Play as many games as you can. Compare the pros and cons of this game versus that game. Read game magazines. Know the difference between an FRP and an RTS. Online games, console games, handheld games, board games, CCGs.
Snap reading comprehension quiz: What are the three attributes needed for a game tester?
For extra credit: Can you think of any other ways to improve your skills in these three areas?



    'A' bug -- The 'A' bug is the very worst kind of bug. This type of bug can be summed up thusly: "It would be an unthinkably bad disaster if the game was released with this problem unfixed." Some examples:
       - the game crashes;
       - there is a virus in the game;
       - there are obvious spelling errors;
       - there are obvious graphical or audio problems;
       - a feature (in a menu or accessed by pressing a button) does not function;
       - there is no copyright language in the game anywhere;
       - the game is not fun to play.
    Releasing a game with this sort of flaw would generate very bad public reaction and bad press, or there could be legal ramifications against the company.

    'B' bug
    -- The 'B' bug is not quite as bad as the 'A' bug. It can be summed up thusly: "It would be unfortunate if the game was released with this problem unfixed, but the game is good regardless." In a pinch, if the company has a need to release the game and stop spending money testing and fixing it, and if Customer Support, Sales, Marketing, QA, and the executive staff all agree, the game may be released with minor flaws. For example:
       - bugs which do not ruin the experience of playing the game;
       - noticeable graphical or audio problems (especially if you know where to look for them);
       - highly desirable features were left out (and are not mentioned anywhere in the game).
    'B' bugs will likely show up in press reviews of the game but are things that are probably hard (expensive; time-consuming) to fix. The playing public won't be happy with these problems, but the overall playing experience is not ruined by the existence of these problems.

    'C' bug
    -- The 'C' bug can be summed up, "It would be nice to fix this problem." The tester may feel strongly about this problem s/he has identified, but when weighed against the company's larger need to release the game, the bug isn't that big a problem in the decision-makers' view. When push comes to shove, 'C' bugs may have to fall by the wayside (if they're hard to fix, that is -- a 'C' bug that's easy and quick to fix is likely to simply get fixed, unless the project is coming down to the wire).

    'D' bug -- "It would be nice to add this feature." Especially when reported later in the test process, 'D' bugs are likely to remain unfixed.

    "All bugs should be fixed." -- Ideally, of course, this is true. But some games are so big and complicated that the fixing would simply never end. And some testers are pickier than others as to what constitutes a bug that needs to be fixed. There have to be checks and balances in a game company (just as there are in a governing body).

    "Alpha" -- The terms "Alpha" and "Beta" are defined differently by every company. Especially, developers' definitions of these terms may vary from publishers' definitions of these terms. Some developers may prefer to define Alpha as "code that demonstrates how the game will play." But most publishers (specifically a publisher's QA department) would prefer to define Alpha as "everything has been implemented in the game but there are bugs and the gameplay needs tweaking."

    "Beta" -- Some developers may prefer to define Beta as "everything has been implemented in the game but there are bugs and the gameplay needs tweaking." But most publishers (or their QA departments) would prefer to define Beta as "everything has been implemented and as far as the developer knows, there are no bugs and the gameplay has been fully tweaked."

    "Beta testing" -- Quality Assurance testing is a different thing from Beta testing. We usually use the term "beta tester" to refer to volunteers who test for free from their homes. Q.A., on the other hand, is a full-time position, a paid job. Beta testing is a good way to break into real testing. Look for opportunities to volunteer when you see that a game company is seeking beta testers (usually in an online game bulletin board or something - it's hard to seek out beta testing opportunities, you just have to be active in the game community's online forums. I also hear fileplanet is a place where beta testing opportunities can be found, if you really want to do it). Do the beta testing well, and you might get offered a real testing job.

    "Can Not Replicate." -- Sometimes a problem will happen to a tester but he can't provide steps to replicate the problem. If the programmer can't cause the problem to occur, with the debugger running to reveal the source of the problem, it may be difficult to fix. A good tester will try to make the problem happen again or figure out why it happened.

    "Gold Master" -- The CD or DVD released by QA to manufacturing. This disc has been verified and virus-checked and has gone through an extensive checklist before it's sent out the door.

    "It's a feature." -- The corollaries to this one are "Not a bug" and "Works as designed" (below). Sometimes what the tester expects the game to do, and what the game does instead, cause a bug report to be written. The bug report goes to the designer, who says "that's not a problem -- that's the way I designed it to work, and here's why it should remain as is ..." If the testers can present a convincing argument that the "feature" is counterintuitive or unfriendly, then perhaps it needs to be changed.

    "Need more info (NMI)." -- This comment is likely scribbled on a bug report that doesn't tell the programmer enough information about how to replicate a bug, or why the tester feels that it is a bug.

    "Not a bug (NAB)." -- See "It's a feature" (above).

    "Psychotic user behavior." -- Term used to characterize a problem caused by unreasonable user input. For example: "The game crashes if you press F10, then Esc, 30 or 40 times in a row." No reasonable user would do this, and even if someone does do it, it would be unreasonable to fault the game for crashing under these circumstances. If the problem is hard to fix and the project is coming down to the wire, it may be simply written off.

    "Release" -- QA signs off on the game and puts their stamp of approval on sending the game off to be manufactured.

    "Ship it." -- This phrase is heard at the tail end of the test process, when the test team is starting to see the light at the end of the tunnel.
       - Tester: "I found a bug."
       - Lead tester: "What kind of bug?"
       - Tester: "It's just a 'C' bug, not a biggie. Psychotic user behavior."
       - Lead tester: "Ship it!"
    Finished (tested, quality approved) games are shipped by Fedex or other courier service (or sometimes, if really important and timely, delivered by a member of the team) to the manufacturing facility. Manufactured product is shipped by truck to the stores. "Ship it" is the mantra used to seal the importance of cutting off testing and releasing the game into the wild.

    "Tweak" -- Synonymous with "adjust."

    "Will not fix (WNF)." -- When time is running short, and minor bugs are reported, the programmer or the designer or the producer may scribble this cryptic note on the bug report. All bugs have to be "closed" (resolved) before the game can be released.

    "Works as designed (WAD)." -- See "It's a feature" (above).

Team Leader Role and Responsibilities

Team Leaders have a wide range of responsibilities and may be called on to complete any task that a group needs to succeed. We’ve broken the list into categories:

Coaching for Team Success

1. Provide your team with the company’s vision and the objectives of all projects.
2. Create an environment oriented to open communications, creative thinking, cohesive team effort and workplace trust.
3. Lead by example (be a role model) – make your behavior consistent with your words
4. Manage, train, and help the development of team members; help resolve any dysfunctional behavior
5. Attempt to achieve team consensus and create win-win agreements wherever possible
6. Lead problem solving and collaboration
7. Keep discussions focused and ensure decisions lead toward closure
8. Build and foster healthy group dynamics
9. Assure that all team members have the required education and training to effectively participate on their assigned project.
10. Acknowledge and reward team and team member accomplishments, as well as exceptional performance
11. Lead creativity, risk-taking, and continuous improvements in workflow

Informational Leadership

1. Familiarize the team with the customer needs, specifications, design targets, the development process, design standards, techniques and tools to support task performance
2. Provide all necessary business information
3. Initiate sub-groups or sub-teams as appropriate to resolve issues and perform tasks in parallel
4. Help keep the team focused and on track

Coordinate for Team Success

1. Work with functional managers and the team sponsor to obtain necessary resources to support the team’s requirements
2. Establish meeting times, places and agendas
3. Coordinate the review, presentation and release of design layouts, drawings, analysis and other documentation
4. Coordinate meetings with the product committee, project manager and functional management to discuss project impediments, needed resources or issues/delays in completing the task

Professional Directional Communication

1. Provide status reporting of team activities against the program plan or schedule
2. Keep the project manager and product committee informed of task accomplishment, issues and status
3. Serve as a focal point to communicate and resolve interface and integration issues with other teams
4. Escalate issues which cannot be resolved by the team
5. Provide guidance to the team based on management direction

Acting Not Reacting on Project Threats

Some examples of typical project threats are:
1. Unreasonable business requirements
2. System performance roadblocks
3. Unproven technical solutions
4. Hostile business clients
The astute leader remains on the alert for these potential threats and, as soon as they are recognized, deals with them in their early stages.

Roles and responsibilities of a Project Manager

The four lines below could open a typical project team meeting.

“Is everyone there?” the manager asks his team in the meeting.
“Yes” comes in asynchronously from different voices.
“Is everything fine? Is anyone facing any issues with their project objectives?” This is the manager again.
One member speaks up: “I have one issue regarding the database design.”
... and it goes on for an hour...

Why am I presenting this dialog in the first place to explain the role and responsibilities of a Project Manager?
The above conversation between the PM and the team illustrates the PM's project monitoring role. In reality, the role and responsibilities of a Project Manager are a little complex and need to be spelled out clearly for each project. Let me list a few important roles and responsibilities of a Project Manager. (This is not a complete list.)

* The Project Manager is the person responsible for managing the project.

* The Project Manager is the person responsible for accomplishing the project objectives within the constraints of the project. He is responsible for the outcome (success or failure) of the project.

* The Project Manager is involved with the planning, controlling and monitoring, and also managing and directing the assigned project resources to best meet project objectives.

* The Project Manager controls and monitors the “triple constraints” of project scope, time and cost (and quality) in managing competing project requirements.

* The Project Manager examines the organizational culture and determines whether project management is recognized as a valid role with accountability and authority for managing the project.

* The Project Manager collects metrics data (such as baseline and actual values for costs, schedule, work in progress, and work completed) and reports on project progress and other project-specific information to stakeholders.

* The Project Manager is responsible for identifying, monitoring, and responding to risk.

* The Project Manager is responsible to the project stakeholders for delivering a project’s objectives within scope, schedule, cost, and quality.

* The reporting structure of a Project Manager depends on the organizational structure. He may report to a Functional Manager or to a Program Manager.

In slightly exaggerated terms, the Project Manager is the ‘God’ of his project, and he is the one who decides the success of the project.

Automation tools

What are the various automation tools available in testing? How will you decide on a tool for test automation?

There are quite a lot of automation tools available in the market. Notable and reliable tools include:
-HP QuickTest Professional (HP, 11.0)
-LoadRunner (HP)
-IBM Rational Functional Tester (IBM Rational)
-Rational Robot (IBM Rational, 2003)
-Selenium (open source, 1.0.10)
-SilkTest (Micro Focus, 2010 R2)
-TestComplete (SmartBear Software, 8.2)
-TestPartner (Micro Focus, 6.3)
The decision on which tool to use for automation depends solely on the project requirements. A few points need to be considered while selecting the tool:
-Cost of the software tool that supports your platform and technology
-Programming language - easy to learn and use. This covers:
  - Easy debugging and logging
  - Test data management
  - Reporting features, failure and error logging
  - Re-usability of components and libraries

Testing types

Explain a.) Upward Compression testing b.) Usability Testing c.) Gray box testing d.) Structural Testing e.) Reliability Testing

a.) Upward Compression testing
Upward compression testing verifies the compression of a subordinate module into its superior module when de-modularization is done.
b.) Usability Testing
Usability testing focuses on how usable the product is and is generally performed by the actual users who will be using the product. This technique is generally used for web and mobile applications, where usability is of high importance.
c.) Gray box testing
Gray box testing is a combination of white and black box testing, wherein the tester is required to have some understanding of the program's internal logic and verifies the functionality of the program based on that understanding.
d.) Structural Testing
Structural testing techniques use the structural or internal perspective of the system to design test cases and carry out their execution. They are also called white box or clear box testing.
e.) Reliability Testing
Reliability testing is testing for the software's reliability, i.e. checking whether the program performs its intended function with the required precision, as expected.

Differentiate between smoke testing and sanity testing.

-Smoke testing verifies all areas of the application; sanity testing verifies only one or a few areas.
-Smoke testing is done before accepting a build for testing; sanity testing is a subset of regression testing and is used whenever it is sufficient to prove that the application works as per requirements.

Test Methodology, Scenario and Test Case

Explain a.) Test Methodology b.) Test Scenario c.) Test Case d.) Requirement traceability matrix

a.) Test Methodology
A testing methodology determines how an application will be tested and what will be tested. Examples of methodologies: waterfall, agile, etc.

b.) Test Scenario
A test scenario is a logical grouping of test cases; it specifies the sequence in which the test cases are to be executed.

c.) Test Case
A test case is a unit-level document describing the inputs, steps of execution, and expected result of each test condition for every requirement from the BRD. Testers determine whether the application is working correctly based on the test case being executed. A test case is marked "Pass" if the application works as expected and "Fail" otherwise. Test cases also aid in generating test status metrics.
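As a sketch, a test case can be represented as a simple record; the field names and the `record_result` helper below are illustrative assumptions, not taken from any particular test-management tool.

```python
# A minimal, illustrative test-case record (field names are assumptions,
# not from any particular test-management tool).
from dataclasses import dataclass

@dataclass
class TestCase:
    case_id: str
    requirement_id: str      # links back to the BRD requirement
    steps: list
    expected_result: str
    status: str = "Not Run"  # becomes "Pass" or "Fail" after execution

    def record_result(self, actual_result: str) -> str:
        """Mark Pass if the actual result matches the expected result."""
        self.status = "Pass" if actual_result == self.expected_result else "Fail"
        return self.status

tc = TestCase("TC-001", "REQ-10",
              ["Open login page", "Enter valid credentials", "Click Login"],
              expected_result="User lands on the dashboard")
print(tc.record_result("User lands on the dashboard"))  # Pass
```

The `status` field is what feeds test status metrics such as pass/fail counts.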

d.) Requirement traceability matrix
The RTM is a matrix tying requirements to test cases. It is a way of making sure that every requirement has a corresponding test case to be tested, thereby ensuring complete requirements coverage.
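In its simplest form an RTM is just a requirement-to-test-case mapping; a sketch with hypothetical IDs shows how it exposes coverage gaps:

```python
# Hypothetical requirement-to-test-case mapping; a requirement with an
# empty list of test cases is a coverage gap the RTM makes visible.
rtm = {
    "REQ-10": ["TC-001", "TC-002"],
    "REQ-11": ["TC-003"],
    "REQ-12": [],  # not yet covered by any test case
}

uncovered = [req for req, cases in rtm.items() if not cases]
print(uncovered)  # ['REQ-12']
```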

What is STLC?

STLC is the Software Test Life Cycle. It includes:
1. Impact assessment: how much testing impact a new project has and what level of effort will be required.
2. Work allocation: assigning resources to test the project.
3. Requirements understanding: studying the BRD and SRS thoroughly.
4. Test planning: determining the high-level test scenarios, what test data will be required and how to stage it, preparing the test plan, etc.
5. Writing test cases: preparing test cases based on the understanding of the requirements and the high-level test plan.
6. Execution: once the code is dropped, verifying the application against the test cases.
7. Bug tracking: raising defects and tracking them to closure.
8. Traceability: providing traceability documents to clients for passed test cases.
9. Sign-off: signing off the tested code once the exit criteria are reached.

What would you do if you see a functionality in the software which was not there in the requirements?

This scenario is called gold plating: offering more functionality in the software than the client required. In this situation, I would get in touch with the client contact and obtain their consensus on the new functionality. If they are happy with it, I would have it added to the BRD and make the necessary updates to the RTM, test cases, etc.

What is automation testing? Can automating a test improve the effectiveness of test?

Test automation is the execution of test cases with the help of tools such as WinRunner, QTP, or Selenium, which compare actual results with expected outcomes after setting up preconditions or checkpoints. Automation is used more often in regression testing than in progression testing. It is quite reliable and gives better results when applied to applications or systems that are fairly stable; otherwise it becomes laborious and time-consuming.
Yes, Automating a test makes the test process:
1. Fast
2. Reliable
3. Repeatable
4. Programmable
5. Reusable
6. Comprehensive
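Tools differ, but the core mechanic (set up a precondition, perform an action, compare the actual result with the expected outcome) can be sketched tool-independently with the standard-library `unittest` module. `discount_price` below is a hypothetical function under test, not part of any real application.

```python
# Tool-independent sketch of an automated regression check using the
# standard-library unittest module. discount_price is a hypothetical
# function under test, not part of any real application.
import unittest

def discount_price(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount."""
    return round(price * (1 - percent / 100), 2)

class RegressionSuite(unittest.TestCase):
    # Each method sets up its inputs (precondition), runs the action,
    # and compares the actual result with the expected outcome.
    def test_standard_discount(self):
        self.assertEqual(discount_price(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(discount_price(99.99, 0), 99.99)

# Run the suite programmatically; the same suite can be re-run on every
# regression cycle without manual effort, which is what makes automation
# fast, repeatable, and reusable.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(RegressionSuite)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.wasSuccessful())  # True
```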

What are the various automation tools available in testing? How will you decide on a tool for test automation?

There are quite a lot of automation tools available in the market. Notable and reliable tools (vendor, version) include:
-HP QuickTest Professional (HP, 11.0)
-LoadRunner
-IBM Rational Functional Tester (IBM Rational)
-Rational Robot (IBM Rational, 2003)
-Selenium (open source, 1.0.10)
-SilkTest (Micro Focus, 2010 R2)
-TestComplete (SmartBear Software, 8.2)
-TestPartner (Micro Focus, 6.3)
The decision on which tool to use for automation depends solely on the project requirements. A few points need to be considered while selecting a tool:
-Cost of the software tool that supports your platform and technology
-Programming language: easy to learn and use. This covers:
  - Easy debugging and logging
  - Test data management
  - Reporting features, failure and error logging
  - Re-usability of components and libraries

How do you develop a test plan and schedule? Describe bottom-up and top-down approaches.

A test plan is a contract between the testers and the project team describing the role of testing in the project. Its purpose is to prescribe the scope, approach, resources, and schedule of the testing activities; to identify the items and features being tested, the testing tasks to be performed, and the personnel responsible for each task; and to call out the risks associated with the plan. It follows that the test plan must be made with inputs from the product development team, keeping in mind the project deadlines and the risks involved in testing the product or its components.
The steps in creating a test plan are:
1. Identify requirements for test: this includes tests for functionality, performance, reliability, etc.
2. Assess risk and establish test priorities: in this step risks are identified and risk magnitude indicators (high, medium, low) are assigned.
3. Develop the test strategy: this includes the following:
i. Types of tests to be implemented and their objectives
ii. Stage at which each test will be implemented
iii. Test completion criteria
When all three steps are completed thoroughly, a formal document stating the above, known as the “Test Plan”, is published.
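The contents of such a plan can be sketched as structured data; the field names below simply mirror the three steps above and are illustrative, not from any standard template.

```python
# Illustrative skeleton of test-plan contents as structured data; the
# fields mirror the three planning steps above and the values are
# invented examples, not from any standard template.
test_plan = {
    "requirements_under_test": ["Functionality", "Performance", "Reliability"],
    "risks": [{"item": "Payment gateway", "magnitude": "high"}],
    "strategy": {
        "test_types": ["smoke", "regression"],
        "stage": "system testing",
        "completion_criteria": "all planned test cases executed, "
                               "no open high-severity defects",
    },
}
print(sorted(test_plan["strategy"]))
# ['completion_criteria', 'stage', 'test_types']
```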

Bottom up Integration Testing:
The program is combined and tested from the bottom of the module tree to the top. Each component at the lowest level of the system hierarchy is tested individually first, then the components at the next level up are tested. Since testing starts at the very lowest level of the software, drivers are needed to exercise these lower layers. Drivers are programs designed specifically for testing that make calls to these lower layers; they are developed for temporary use and are replaced when the actual top-level module is ready.
E.g.: consider a Leave Management System. In order to approve leave, there has to be a module to apply for leave. If the apply-leave module is not ready, we need to create a driver (which will apply for leave) in order to test the approve-leave functionality.
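A minimal sketch of this driver, with all names invented for the Leave Management example:

```python
# Bottom-up sketch: approve_leave is the lower-level module under test;
# driver_apply_leave stands in for the not-yet-ready "apply leave"
# module. All names here are illustrative for the Leave Management
# example, not from a real system.

leave_requests = {}

def approve_leave(request_id: str) -> str:
    """Lower-level module under test: approve a pending leave request."""
    if request_id not in leave_requests:
        raise KeyError(request_id)
    leave_requests[request_id]["status"] = "Approved"
    return leave_requests[request_id]["status"]

def driver_apply_leave(request_id: str) -> None:
    """Test driver: creates a pending request so approve_leave can be
    tested before the real apply-leave module exists."""
    leave_requests[request_id] = {"status": "Pending"}

driver_apply_leave("LR-1")
print(approve_leave("LR-1"))  # Approved
```

When the real apply-leave module is ready, the driver is simply discarded.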

Top down Integration Testing:
Modules are tested by moving downwards through the control hierarchy, beginning with the main control module. A module being tested may call another that is not yet ready; the missing lower modules are substituted with stubs. Stubs are dummy, special-purpose modules developed to exercise the control hierarchy by simulating the activity of the missing components.
E.g.: in the Leave Management System, once leave is approved, the leave status can be seen in the leave report. If the leave report module is not ready, we need to create a dummy implementation of the leave report (a stub).
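A minimal sketch of such a stub, with all names invented for the same Leave Management example:

```python
# Top-down sketch: the higher-level show_leave_status flow is under
# test, while leave_report_stub simulates the missing leave-report
# module. All names are illustrative for the Leave Management example.

def leave_report_stub(employee: str) -> dict:
    """Stub: returns a canned response in place of the real report module."""
    return {"employee": employee, "approved_days": 0}

def show_leave_status(employee: str, report_fn=leave_report_stub) -> str:
    """Higher-level module under test; calls the (stubbed) report module."""
    report = report_fn(employee)
    return f"{report['employee']}: {report['approved_days']} day(s) approved"

print(show_leave_status("alice"))  # alice: 0 day(s) approved
```

Once the real report module exists, it is passed in place of the stub and the higher-level flow is retested unchanged.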