Here you can find articles, news, tips, and testing techniques, all of it free and all of it related exclusively to software testing.

Tuesday, September 30, 2008

How to Get a Job in Software Testing Quickly?

In recent days this has been the question readers ask me most often: How do I get a software testing job? How do I get into the software testing field? Can I get a job in testing?

All these questions are similar, and my answer to each of them is similar as well. I have written a post on choosing software testing as your career, where you can analyze your abilities and learn which skills are most important for software testing.

I will keep repeating this advice: know your interests before entering any career field. Jumping into software testing, or any other hot career, just because it is popular is a mistake, and it may cost you both your interest in the job and, eventually, the job itself.

So now you know your abilities, skills, and interests, and you have decided to pursue a software testing career because it is the one that suits you best. Here is a guideline for getting a good job in the software testing field.

If you are a fresher who has just passed out of college, or will be passing out in the coming months, then you need to prepare well on software testing methodologies. Prepare all the manual testing concepts. If possible, get some hands-on experience with automation and bug-tracking tools like WinRunner and Test Director. It is always a good idea to join a software testing institute or class, which will give you a good start and a direction for your preparation. You can join a software testing course of around four months' duration, or do a diploma in software testing, which typically runs from six months to one year. Keep preparing throughout the course; this will help you start giving interviews right after the course is over.

If you have some previous IT experience and want to switch to software testing, then it's somewhat simpler for you. Show your previous IT experience on your resume when applying for software testing jobs. If possible, take a crash course to get an idea of software testing concepts, as mentioned for freshers above. Keep in mind that because you already have some IT experience, you should be prepared for tougher interview questions.

Since companies always prefer some kind of relevant experience for any software job, it's better if you have relevant exposure to software testing and QA. This can be hands-on experience with software testing tools or a testing course from a reputed institute.

Please always keep this in mind: do not add fake experience of any kind. It can ruin your career forever. Wait and keep trying to get the job on your own abilities instead of falling into the trap of fake experience.

One last important point: software testing is not an 'anyone can do it' career. Remove that attitude from your mind if someone has told you such a foolish thing. Apart from software testing basics, testing requires in-depth knowledge of the SDLC, out-of-the-box thinking, analytical skills, and some programming-language skills.


How to Write Effective Test Cases, Procedures and Definitions

Writing effective test cases is a skill, and it can be achieved through experience and in-depth study of the application for which the test cases are being written.

Here I will share some tips on how to write test cases, test case procedures and some basic test case definitions.

What is a test case?
“A test case has components that describes an input, action or event and an expected response, to determine if a feature of an application is working correctly.” Definition by Glossary

Each test case falls into one of the following levels, which helps avoid duplication of effort.
Level 1: At this level you write the basic test cases from the available specification and user documentation.
Level 2: This is the practical stage, in which writing test cases depends on the actual functional and system flow of the application.
Level 3: This is the stage in which you group some test cases and write a test procedure. A test procedure is nothing but a group of small test cases, ten at most.
Level 4: Automation of the project. This minimizes human interaction with the system, so QA can focus on testing newly updated functionality rather than staying busy with regression testing.

So you can observe a systematic progression from having no testable items to a full automation suite.

Why do we write test cases?
The basic objective of writing test cases is to validate the test coverage of the application. If you are working in a CMMI company, you will strictly follow test case standards. So writing test cases brings some standardization and minimizes the ad-hoc approach to testing.

How to write test cases?
Here is a simple test case format

Fields in test cases:

Test case id:
Unit to test:
What is to be verified?
Assumptions:
Test data:
Variables and their values:
Steps to be executed:
Expected result:
Actual result:
Pass/Fail:
Comments:

So here is the basic format of a test case statement:

Verify
Using [tool name, tag name, dialog, etc.]
With [conditions]
To [what is returned, shown, demonstrated]

Verify: Used as the first word of the test case statement.
Using: To identify what is being tested. Depending on the situation, you can use 'entering' or 'selecting' here instead of 'using'.
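To make the format concrete, here is a hypothetical example of one such statement turned into an automated, pytest-style check; the login_page module and its helper functions are assumptions made purely for this sketch, not part of any particular tool:

# Hypothetical automated version of the statement:
# "Verify the login feature, Using the login dialog,
#  With a valid username and password,
#  To confirm that the user home page is displayed."
# The login_page module and its helpers are assumed for illustration only.

from login_page import open_login_dialog, submit_credentials  # assumed helpers

def test_login_with_valid_credentials_shows_home_page():
    # Using: the login dialog
    dialog = open_login_dialog()
    # With: valid credentials (test data would normally come from your test data sheet)
    result = submit_credentials(dialog, username="valid_user", password="valid_pass")
    # To: the expected result decides Pass/Fail
    assert result.page_title == "Home", "Expected the user home page after a valid login"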

For any application, you will basically cover all types of test cases, including functional, negative, and boundary value test cases.

Keep in mind while writing test cases that all your test cases should be simple and easy to understand. Don’t write explanations like essays. Be to the point.

Try writing simple test cases using the format above. I generally use Excel sheets to write the basic test cases, and a tool like Test Director when those test cases are going to be automated.


Article Source: http://www.softwaretestinghelp.com/how-to-write-effective-test-cases-test-cases-procedures-and-definitions/


Monday, September 29, 2008

How Do You Spell Testing?

In exploratory testing, we design and execute tests in real time. But how do we organize our minds so that we think of worthwhile tests? One way is through the use of heuristics and mnemonics. A heuristic is "a rule of thumb, simplification, or educated guess." For example, the idea of looking under a welcome mat to find a key is a heuristic. A mnemonic, by contrast, is a "word, rhyme, or other memory aid used to associate a complex or lengthy set of information with something that is simple and easy to remember." Heuristics and mnemonics go together very well to help us solve problems under pressure.
SFDPO Spells Testing

A mnemonic and heuristic I use a lot in testing is "San Francisco Depot," or SFDPO. These letters stand for Structure, Function, Data, Platform, and Operations. Each word represents a different aspect of a software product. By thinking of the product from each of those points of view, I think of many interesting tests. So, when I'm asked to test something I haven't seen before, I say "San Francisco Depot" to myself, recite each of the five product element categories and begin thinking of what I will test.

* Structure (what the product is): What files does it have? Do I know anything about how it was built? Is it one program or many? What physical material comes with it? Can I test it module by module?
* Function (what the product does): What are its functions? What kind of error handling does it do? What kind of user interface does it have? Does it do anything that is not visible to the user? How does it interface with the operating system?
* Data (what it processes): What kinds of input does it process? What does its output look like? What kinds of modes or states can it be in? Does it come packaged with preset data? Is any of its input sensitive to timing or sequencing?
* Platform (what it depends upon): What operating systems does it run on? Does the environment have to be configured in any special way? Does it depend on third-party components?
* Operations (how it will be used): Who will use it? Where and how will they use it? What will they use it for? Are there certain things that users are more likely to do? Is there user data we could get to help make the tests more realistic?
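As a purely illustrative aside (this is my own sketch; SFDPO itself is a mental checklist, not a tool, and the sample questions are lifted from the list above), a tester who wanted the prompts handy at the start of a session might keep them in something as simple as this:

# A small, illustrative prompt list for the SFDPO mnemonic.
# This only shows one way to keep the prompts at hand; the heuristic
# lives in the tester's head, not in code.

SFDPO_PROMPTS = {
    "Structure": ["What files does it have?", "Can I test it module by module?"],
    "Function": ["What are its functions?", "What kind of error handling does it do?"],
    "Data": ["What kinds of input does it process?", "Is any input sensitive to timing or sequencing?"],
    "Platform": ["What operating systems does it run on?", "Does it depend on third-party components?"],
    "Operations": ["Who will use it?", "What will they use it for?"],
}

def recite(prompts=SFDPO_PROMPTS):
    """Print each category and its prompts as a quick pre-session reminder."""
    for category, questions in prompts.items():
        print(category)
        for question in questions:
            print(f"  - {question}")

if __name__ == "__main__":
    recite()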

Bringing Ideas to Light

I can get ideas about any product more quickly by using little tricks like SFDPO. But it isn't just speed I like, it's reliability. Before I discovered SFDPO, I could think of a lot of ideas for tests, but I felt those ideas were random and scattered. I had no way of assessing the completeness of my analysis. Now that I have memorized these heuristics and mnemonics, I know that I still may forget to test something, but at least I have systematically visited the major aspects of the product. I now have heuristics for everything from test techniques to quality criteria.

Just because you know something doesn't mean you'll remember it when the need arises. SFDPO is not a template or a test plan, it's just a way to bring important ideas into your conscious mind while you're testing. It's part of your intellectual toolkit. The key thing if you want to become an excellent and reliable exploratory tester is to begin collecting and creating an inventory of heuristics that work for you. Meanwhile, remember that there is no wisdom in heuristics. The wisdom is in you. Heuristics wake you up to ideas, like a sort of cognitive alarm clock, but can't tell you for sure what the right course of action is here and now. That's where skill and experience come in.

Good testing is a subtle craft. You should have good tools for the job.


Reasons to Repeat Tests

Testing to find bugs is like searching a minefield for mines. If you just travel the same path through the field again and again, you won't find a lot of mines. Actually, that's a great way to avoid mines. The space represented by a modern software product is hugely more complex than a minefield, so it's even more of a problem to assume that some small number of "paths", say, a hundred, thousand, or million, when endlessly repeated, will find every important bug. As many tests as a team of testers can physically perform in a few weeks or months is still not that many tests compared to all the things that can happen to a product in the field.

The minefield analogy is really just another way of saying that testing is a sampling process, and we probably want a larger sample, rather than a tiny sample repeated over and over again. Hence the minefield heuristic is do different tests instead of repeating the same tests.

But what do I mean by repeat the same test? It's easy to see that no test can be repeated exactly, any more than you can exactly retrace your footsteps. You can get close, but you will always be a tiny bit off. Does repeating a test mean that the second time you run the test you have to make sure that sunlight is shining at the same angle onto your mousepad? Maybe. Don't laugh. I did experience a bug, once, that was triggered by sunlight hitting an optical sensor inside a mouse. You just can't say for sure what factors are going to affect a test. However, when you test you have a certain goal and a certain theory of the system. You may very well be able to repeat a test with respect to that goal and theory in every respect that A) you know about and B) you care about and C) isn't too expensive to repeat. Nothing is necessarily intractable about that.

Therefore, by a repeated test, I mean a test that includes elements already known to be covered in other tests. To repeat a test is to repeat some aspect of a previous test. The minefield heuristic is saying that it's better to try to do something you haven't yet done than to do something you have already done.

If you disagree with this idea, or if you agree with it, please read further. Because...

...this analysis is too simplistic! In fact, even though diversity in testing is important and powerful, and even though the argument against repetition is generally valid, I do know of ten exceptions. There are ten specific reasons why, in some particular situation, it is not unreasonable to repeat tests. It may even be important to repeat some tests.

For technical reasons you might rationally repeat tests...

1. Recharge: if there is a substantial probability of a new problem or a recurring old problem that would be caught by a particular existing test, or if an old test is applied to a new code base. This includes re-running a test to verify a fix, or repeating a test on successively earlier builds as you try to discover when a particular problem or behavior was introduced. This also includes running an old test on the same software that is running on a new O/S. In other words, a tired old test can be "recharged" by changes to the technology under test. Note that the recharge effect doesn't necessarily mean you should run the same old tests, only that it isn't necessarily irrational to do so.
2. Intermittence: if you suspect that the discovery of a bug is not guaranteed by one correct run of a test, perhaps due to important variables involved that you can't control in your tests. Performing a test that is, to you, exactly the same as a test you've performed before, may result in discovery of a bug that was always there but not revealed until the uncontrolled variables line up in a certain way. This is the same reason that a gambler at a slot machine plays again after losing the first time.
3. Retry: if you aren't sure that the test was run correctly the other time(s) it was performed. A variant of this is having several testers follow the same instructions and check to see that they all get the same result.
4. Mutation: if you are changing an important part of the test while keeping another part constant. Even though you are repeating some elements of the test, the test as a whole is new, and may reveal new behavior. I mutate a test because although I have covered something before, I haven't yet covered it well enough. A common form of mutation is to operate the product the same way while using different data. The key difference between mutating a test and intermittence or retry is that with mutation the change is directly under your control. Mutation is intentional, intermittence results from incidental factors, and you retry a test because of accidental factors.
5. Benchmark: if the repeated tests comprise a performance standard that gets its value by comparison with previous executions of the same exact tests. When historical test data is used as an oracle, then you must take care that the tests you perform are comparable to the historical data. Holding tests constant may not be the only way to make results comparable, but it might be the best choice available.
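As a minimal sketch of the benchmark idea (my own illustration; the baseline timings and the 15 percent tolerance are invented for the example), comparing a new run against stored results from earlier executions of the same tests might look like this:

# Illustrative benchmark comparison: flag response times that regress
# noticeably against a stored baseline from earlier runs of the same tests.
# The baseline values and the 15% tolerance are assumptions for the sketch.

BASELINE_MS = {"login": 420, "search": 910, "checkout": 1340}   # from earlier runs
CURRENT_MS  = {"login": 450, "search": 1205, "checkout": 1310}  # from this run
TOLERANCE = 0.15  # allow up to 15% slowdown before flagging a regression

def regressions(baseline, current, tolerance):
    """Return tests whose current timing exceeds baseline * (1 + tolerance)."""
    return {
        name: (baseline[name], current[name])
        for name in baseline
        if name in current and current[name] > baseline[name] * (1 + tolerance)
    }

for name, (old, new) in regressions(BASELINE_MS, CURRENT_MS, TOLERANCE).items():
    print(f"{name}: {old} ms -> {new} ms (slower than the stored baseline)")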

For business reasons you might rationally repeat tests...

6. Inexpensive: if they have some value and are sufficiently inexpensive compared to the cost of new and different tests. These tests may not be enough to justify confidence in the product, however.
7. Importance: if a problem that could be discovered by those tests is likely to have substantially more importance than problems detectable by other tests. The distribution of the importance of product behavior is not necessarily uniform. Sometimes a particular problem may be considered intolerable just because it's already impacted an important user once (a "never let it happen again" situation). This doesn't necessarily mean that you must run the same exact test, just something that is sufficiently similar to catch the problem (see Mutation). Be careful not to confuse the importance of a problem with the importance of a test. A test might be important for many reasons, even if the problems it detects are not critical ones. Also, don't make the mistake of spending so much effort on one test that looks for an important bug that you neglect other tests that might be just as good or better at finding that kind of problem.
8. Enough: if the tests you repeat represent the only tests that seem worth doing. This is the virus scanner argument: maybe a repeated virus scan is okay for an ordinary user, instead of constantly changing virus tests. However, we may introduce variation because we don't know which tests truly are worth doing, or we are unable to achieve enoughness via repeated tests.
9. Mandated: if, due to contract, management edict, or regulation, you are forced to run the same exact tests. However, even in these situations, it is often not necessary that the mandated tests be the only tests you perform. You may be able to run new tests without violating the mandate.
10. Indifference/Avoidance: if the "tests" are being run for some reason other than finding bugs, such as for training purposes, demo purposes (such as an acceptance test that you desperately hope will pass when the customer is watching), or to put the system into a certain state. If one of your goals in running a test is to avoid bugs, then the principal argument for variation disappears.


Sunday, September 28, 2008

An Explanation of Performance Testing on an Agile Team (Part 1 of 2)

This two article series describes activities that are central to successfully integrating application performance testing into an agile process. The activities described here specifically target performance specialists who are new to the practice of fully integrating performance testing into an agile or other iteratively-based process, though many of the concepts and considerations can be valuable to any team member. Combined, the two articles will cover the following topics:

  • Introduction to Integrated Performance Testing on an Agile Team
  • Understand the Project Vision and Context
  • Identify Reasons for Testing Performance
  • Identify the Value Performance Testing Adds to the Project
  • Configure Test Environment
  • Identify and Coordinate Immediately Valuable Tactical Tasks
  • Execute Task(s)
  • Analyze Results and Report
  • Revisit Value and Criteria
  • Reprioritize Tasks
  • Additional Considerations
  • Additional Resources

This first article in the series will cover the first four items (Introduction to Integrated Performance Testing on an Agile Team through Identify the Value Performance Testing Adds to the Project).

The keys to fully integrating performance testing into an agile process are team-wide collaboration, effective communication, a commitment to adding value to the project with every task, and the flexibility to change focus. This article aims to provide the new performance specialist with the concepts and methods necessary to enable the team to reap the benefits of integrating performance testing into the agile process without facing unacceptable risks.

Introduction to Integrated Performance Testing on an Agile Team

Because implementing an agile philosophy implies different things to different teams, there is no single formula for integrating performance testing into an agile process. Add to this the reality that effectively integrating performance testing into any development philosophy is difficult at best, and most teams decide that integrating performance testing into their agile process is too hard or too risky to even attempt. Fortunately, performance testing is naturally iterative, making its integration into an agile process highly effective when it works.

At a high level, performance testing within an agile team loosely follows the flow depicted by the graphic below.

[Figure: the flow of performance testing within an agile team]

This flow embraces change and variable-length iterations within a project's life cycle. Because the iteration goal is to deliver working code, it encourages planners to plan just far enough in advance to facilitate team coordination, but not so far ahead that the plan is likely to need significant revision to execute.

Additionally, an iteration may go over the same area of code and re-factor it several times. This means that in practice, any activity can happen at any moment in time, in any sequence, one or more at a time. One day the team might work on each activity several times in no discernible order, while the next two days might be spent entirely within a single activity - it is all about doing whatever can be accomplished right now to deliver working code at the end of the iteration, thus providing the greatest value to the project as a whole. The performance specialist's challenge in this type of process is that he or she will most frequently be testing parts of the overall system instead of the overall completed system.

While the perspective of this article focuses on the activities that the performance specialist frequently drives or champions, this is neither an attempt to minimize the concept of team responsibility nor an attempt to segregate roles. The team is best served if the performance specialist is an integrated part of the team who participates in team practices such as pairing. Any sense of segregation is unintentional and a result of trying to simplify explanations.

Understand the Project Vision and Context

Project Vision

Even though the features, implementation, architecture, timeline, and environment(s) are likely to be fluid, the project is being conducted for a reason. Before tackling performance testing, ensure that you understand the current project vision. Revisit the vision document regularly, as it has the potential to change as well. Although everyone on the team should be thinking about performance, it is the performance specialist's responsibility to be proactive in understanding and keeping up to date with the relevant details across the entire team. The following are some examples of high-level vision goals for a project:

  • Evaluate a new architecture for an existing system.
  • Develop a new custom system to solve business problem X.
  • Evaluate the new software-development tools.
  • As a team, become proficient with a new language or technology.
  • Re-engineer an inadequate application before the "holiday rush" to avoid the negative reaction we got last year when the application failed.

Project Context

The project context is nothing more than those circumstances and considerations that are, or may become, relevant to achieving the project vision. Some examples of items that may be relevant in your project context include:

  • Client expectations
  • Budget
  • Timeline
  • Staffing
  • Project environment
  • Management approach

Team members will often gain an initial understanding of these items during a project kickoff meeting, but the project's contextual considerations should be revisited regularly throughout the project as more details become available and as the team learns more about the system they are developing.

Tips for the Performance Specialist

Understand the Project Management Environment

In terms of the project environment, the most important thing to understand is how the team is organized, how it operates, and how it communicates. Agile teams tend to favor rapid communication and management methods over long-lasting documents and briefings, opting instead for daily stand-ups, story cards, and interactive discussions. Failure to identify and agree upon these methods at the outset can put performance testing behind before it begins. Asking questions similar to the following may be helpful:

  • Does the team have any meetings, stand-ups, or scrums scheduled?
  • How are issues raised or results reported?
  • If I need to collaborate with someone, should I send e-mail? Schedule a meeting? Use Instant Messenger? Walk over to their office?
  • Does this team employ a "do not disturb" protocol when an individual or sub-team desires "quiet time" to complete a particularly challenging task?
  • Who is authorized to update the project plan or project board?
  • How are tasks assigned and tracked? A software system? Story cards? Sign-ups?
  • How do I determine which builds I should focus on for performance testing? Daily builds? Friday builds? Builds with a special tag?
  • How do performance testing builds get promoted to the performance test environment?
  • Will the developers be writing performance unit tests? Can I pair with them periodically so we can share information?
  • How do you envision coordination for performance-testing tasks taking place?

Understand the Timeline and Build Schedule

Understanding the project build schedule is critical for a performance specialist. If you do not have a firm grasp of how and when builds are made, you will not only be perpetually behind schedule but will also waste time conducting tests against builds that are not appropriate for the test being conducted. It is important that some person or artifact can communicate to you the anticipated sequence of deliveries, features, and/or hardware implementations that relate to the work you are doing, far enough in advance for you to coordinate your tests. Because you will not be creating a formal performance test plan at the onset of the project, you need not concern yourself with dates, resources, or details sufficient for long-range planning. What is important is that you understand the anticipated timeline and the immediate tasks at hand, and that you understand the build process well enough to make good recommendations about which tests are most likely to add the greatest value at any particular point in time.

Understand the System

At this stage, you need to understand the intent of the system to be built, what is currently known or assumed about its hardware and software architecture, and the available information about the customer or user of the completed system. In addition, the performance specialist should be involved in decisions about the system and the architecture, making appropriate suggestions and raising performance-related concerns even before features or components are implemented.

With many agile projects, the architecture and functionality of the system changes during the course of the project. This is to be expected. In fact, the performance testing you do is frequently the driver behind at least some of those changes. By keeping this in mind, you will neither over plan nor under plan performance-testing tasks in advance of starting them.

Identify Reasons for Testing Performance

Every project team has different reasons for deciding to include, or not include, performance testing as part of its process. Failure to identify and understand these reasons virtually guarantees that the performance-testing aspect of the project will not be as successful as it could have been. Examples of possible reasons for integrating performance testing as part of the project might include the following:

  • Improve performance unit testing by pairing with developers.
  • Assess and configure new hardware by pairing with administrators.
  • Evaluate algorithm efficiency.
  • Monitor resource usage trends.
  • Measure response times.
  • Collect data for scalability and capacity planning.

It is generally useful to identify the reasons for conducting performance testing very early in the project. These reasons are bound to change and/or shift priority as the project progresses, so you should revisit them regularly as you and your team learn more about the application, its performance, and the customer or user.

Tips for the Performance Specialist

The reasons for conducting performance testing equate to those considerations that will ultimately be used to judge the success of the performance-testing effort. A successful performance test involves not only the performance requirements, goals, and targets for the application, but also the reasons for conducting performance testing at all, including those that are financial or educational in nature. For example, criteria for evaluating the success of a performance-testing effort might include:

  • Identifying significant performance issues in the hardware and third-party software early in the project.
  • Performance team, developers, and administrators working together with minimal supervision to tune and determine the capacity of the architecture.
  • Conducting performance testing effectively without extending the duration or cost of the project.
  • Determining the most likely failure modes for the application under higher-than-expected load conditions.
  • Determining the number of users a particular configuration can support.
  • Determining the end-user response time under various conditions.
  • Validating that performance tests predict production performance within +/- 10%.

It is important to record, and keep up to date, the criteria that define success for performance testing on your project, in a manner appropriate to your project's standards and expectations. It is also valuable to keep those criteria somewhere readily accessible to the entire team; whether that is a document, team wiki, task-management system, story cards, or a whiteboard matters only to the degree that it works for your team.

The initial determination of performance-testing success criteria can often be accomplished in a single work session, or possibly during the project kickoff. Remember that at this point you are articulating and recording success criteria for the performance-testing effort, not collecting performance goals and requirements for the application.

Other information to consider when determining performance-testing success criteria includes:

  • Exit criteria (how to know when you are done)
  • Key areas of investigation
  • Key data to be collected
  • Contractually binding performance requirements or Service Level Agreements (SLAs)

Identify the Value Performance Testing Adds to the Project

The value of performance testing is not limited to reporting the volumes and response times of a nearly completed application. Some other value-adds could include:

  • Helping developers create better performance unit and component tests
  • Helping administrators tune hardware and commercial, off-the-shelf software more efficiently
  • Validating the adequacy of network components
  • Collecting data for scalability and capacity planning
  • Providing resource consumption trends from build to build

Tips for the Performance Specialist

Once you have an understanding of the system, the project, and the performance-testing success criteria, the potential value that performance testing can add should start to become clear. You now have what you need to begin to conceptualize an overall strategy for performance testing. Whatever strategy you choose, it will be most effective when communicated with the entire team using a method that encourages feedback and discussion. Strategies should not contain excessive detail or narrative text. The point is that strategies are intended to help focus decisions, be readily available to the entire team, include a method for anyone to make notes or comments, and be easy to modify as the project progresses.

Although there is a wide range of information that could be included in the strategy, the critical components are the envisioned goals or outcomes of the test and the anticipated tasks to achieve that outcome. Other types of information that might be valuable to discuss with the team when preparing a performance test strategy for a performance build include:

  • The reason or intent for performance-testing this delivery
  • Prerequisites for strategy execution
  • Tools and scripts required
  • External resources required
  • Risks to accomplishing the strategy
  • Data of special interest
  • Areas of concern
  • Pass/Fail criteria
  • Completion criteria
  • Planned variants on tests
  • Load range
  • Tasks to accomplish the strategy

Conclusion

The keys to fully integrating performance testing into an agile process are team-wide collaboration, effective communication, a commitment to adding value to the project with every task, and the flexibility to change focus. In this article we discussed:

  • Introduction to Integrated Performance Testing on an Agile Team
  • Understand the Project Vision and Context
  • Identify Reasons for Testing Performance
  • Identify the Value Performance Testing Adds to the Project

The second article in this series will go on to discuss:

  • Configure Test Environment
  • Identify and Coordinate Immediately Valuable Tactical Tasks
  • Execute Task(s)
  • Analyze Results and Report
  • Revisit Value and Criteria
  • Reprioritize Tasks
  • Additional Considerations
  • Additional Resources


NASA's Anomaly: A Lesson for Software Testing

On Thursday, July 14, 2005 I was one of more than 25,000 people who spent all day at the Kennedy Space Center's Visitor Center with the intent of watching the launch of the Space Shuttle Discovery's STS-114, Return-to-Flight mission. If you've never been to a launch, as I hadn't, it is a very long day.

The scheduled launch time was 3:51 p.m. My wife, my stepson and I had to arrive at the Space Center by 9:30 a.m. After hours of standing in lines, going through security checkpoints, waiting in more lines and being transported to the observation area (virtually the entire time spent outside, unshaded from the 90-degree-plus heat and the central Florida summer sun), we finally arrived at the viewing area to wait the final two and a half hours until launch. Needless to say, I was quite excited. Getting to see a shuttle launch was something I'd wanted to do for 20 years. We had our cameras, a camcorder, binoculars, lawn chairs and everything else we could think of to make it a comfortable and memorable experience.

Of course, if you follow the space program at all, you already know what happened next. About 10 minutes after we settled in and got our binoculars focused on the shuttle, the announcement came over the loudspeaker: "Ladies and gentlemen, I'm sorry to have to inform you that we just received word that the launch has been scrubbed for today. Please return to your buses."

Needless to say, we were all rather disappointed. At that point there were no answers to the questions of "What happened?"; "Is there a new launch time/date set?"; or "Will we get a refund for our launch tickets?" By the time we made it back to the Visitor Center, I learned by visiting space.com on my Web-enabled PDA that the reason the launch was scrubbed was a faulty fuel sensor.

Within the hour, the following status message appeared on space.com: "NASA experts acknowledged that the sensor problem, which they described as an intermittent event with no obvious cause, represented a difficult challenge."

The sensors "for some reason did not behave today, and so we're going to have to scrub this launch attempt," launch director Mike Leinbach told the launch team. "So I appreciate all we've been through together, but this one is not going to result in a launch attempt today.

"Launch control said it will take some time to figure out the problem."

By the time I arrived at home, additional information was available: "The fuel tank contains four sensors that show how much hydrogen remains in the tank. One sensor indicated that the tank was almost empty, even though it had been fully loaded with 535,000 gallons of liquid hydrogen and oxygen.

"A faulty reading could cause the shuttle's main engines to cut off prematurely or to burn for too long, either of which could be potentially disastrous for the craft and crew."

And by the time we'd finished eating dinner: "Similar fuel-gauge problems cropped up intermittently during a test of Discovery back in April. The external fuel tank, along with cables and electronics equipment aboard Discovery itself that are associated with the fuel gauges, were replaced, and even though NASA could not explain the failure, it thought the problem was resolved and pressed ahead with launch.

Hale defended that decision.

"We became comfortable as a group, as a management team, that this was an acceptable posture to go fly in," he said, "and we also knew that if something were to happen during a launch countdown, we would do this test and we would find it. And guess what? We did the test, we found something and we stopped. We took no risk. We're not flying with this."

Shuttle program manager Bill Parsons stressed that it was not clear whether the problem was with the fuel gauge itself, or with other electronics aboard the spacecraft. NASA is looking closely at the possibility that flawed transistors in an electronic black box aboard Discovery might be to blame. The box used in the April test also had bad transistors, and when it was removed from the shuttle, the problem disappeared. Managers now suspect a manufacturing defect with these transistors.

Parsons nixed a fueling test of Discovery's replacement tank in June, over the protests of some engineers. Such a test would have pushed the flight later into July, and Parsons and others maintained that the ultimate test would come on launch day. Moreover, Hale said there was no guarantee that the malfunction would have turned up during a tanking test.

The issue came up again at launch readiness reviews earlier in the week, and to everyone's satisfaction, it was deemed an "unexplained anomaly," according to Hale.

The launch scrub cost NASA an estimated $616,000 in fuel and labor costs.

Similar Decisions during Software Testing

Fascinating, isn't it? Reading that, my feelings suddenly shifted from disappointment to empathy. How many times have I made similar decisions during testing? Think about it. How often have you seen the results of a performance test without a single anomaly? If your experience is anything like mine, the answer is probably "rarely." In fact, when I think back to the times when I did not find any anomalies in the results, my instincts told me to question the validity of the tests.

It immediately occurred to me that I'd have recommended exactly the same approach to handle a performance testing anomaly that the NASA engineers took for the fuel gauge. Try to reproduce it. Explore the results in more detail. Possibly swap out, rebuild or instrument the offending code or machine. Then eventually decide that the project would be better served by proceeding with our testing and "keeping an eye out" for recurrences than by continuing to search for something that may well never happen again.

How Much Effort to Understand a Single Anomaly?

So the question is, "How much effort should be put into trying to understand a single anomalous test result?" Obviously, it shouldn't just be blindly discounted, but what if it really was an unrepeatable quirk, a testing error or even someone in the server room mistakenly sitting down and trying to access the wrong test server? Ultimately, there needs to be some heuristic for deciding to accept the possibility of recurrence and just move on, since, as Bill Parsons went on to say in his statement to the press, "It's difficult to find a glitch that won't stay glitched."

I started thinking about what my heuristic was for performance test results. My first thought was of an article I wrote several years back that borrowed statistical models from several industries to come up with a point of reference for determining outliers in response-time data. The summary is that it appears to be statistically valid to say that data points that represent less than 1 percent of the entire data set and are at least three standard deviations off the mean are candidates for omission in results analysis if (and only if) identical data points are not found in previous or subsequent tests. In layman's terms: "really weird results that you can't immediately explain, accounting for a very small subset of the results, which are not identical to any results from other tests." I'm still pretty comfortable with that, so let's agree to use that as a working definition of an anomalous result.
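As a rough sketch of that working definition (my own illustration with made-up sample data; the 1 percent and three-standard-deviation thresholds come straight from the rule above, while the check against previous and subsequent tests is left as a manual step):

# Illustrative check for "anomalous" response-time results, following the
# working definition above: a point is a candidate outlier if it lies at
# least three standard deviations from the mean AND such points make up
# less than 1 percent of the data set. Whether an identical point appears
# in previous or subsequent test runs must still be checked separately.

from statistics import mean, stdev

def candidate_anomalies(response_times_ms):
    """Return points at least 3 standard deviations off the mean,
    but only if they account for less than 1% of all samples."""
    avg = mean(response_times_ms)
    sd = stdev(response_times_ms)
    outliers = [t for t in response_times_ms if abs(t - avg) >= 3 * sd]
    if len(outliers) / len(response_times_ms) < 0.01:
        return outliers
    return []  # too many to treat as isolated anomalies

# Example with invented numbers: mostly ~200 ms responses and one 4-second spike.
samples = [200 + (i % 7) for i in range(300)] + [4000]
print(candidate_anomalies(samples))  # -> [4000]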

Once I have detected a results anomaly, I realized, I always ask myself the same questions to guide my next steps:

* If this happened one out of every hundred possible times in production (the worst case, based on our definition), what would that mean to the company/product/client/user?
* Would stakeholders consider delaying the project, going over budget, etc., over this worst case?
* Is this worst case more or less severe than other issues I am likely to uncover by continuing testing in other areas versus spending more time on this?

No matter what the answers are, I always document the anomaly somewhere so that it can be found easily, and then I modify my tests to highlight that anomaly if it should crop up again.

Thinking about these questions led me to the realization that all I am really doing is a very simplistic risk analysis that could be restated as "Is the potential cost of a particular failure greater or less than the potential cost of trying to eliminate the possibility of that failure occurring?" I guess that doesn't really surprise me, but the realization that this thought process occurs with virtually every performance test I execute, and that it is so easy to explain, makes me wonder why I didn't think of it sooner. I already set the expectation that the planned performance tests need to be reviewed, and testing priorities potentially revised, every time an issue is detected. Should we not review priorities based on anomalies as well? NASA did.

They chose to move on toward launch, keeping an eye on the fuel sensors, and they ultimately caught the problem before disaster could strike. Realistically, that's a pretty positive outcome. From what I can tell, the launch would have been delayed if they had chosen to chase the anomaly when it was first detected, and there was no guarantee that it would have been found before Discovery showed up on the launch pad anyway! At least now they have a known issue and two data points to analyze.

Does that make it worth the $600,000-plus price tag? I can't answer that, but I can say that it's hard to stay disappointed about not getting to see the launch when I step back and realize that in the same situation, I most likely would have done exactly what NASA did.

I guess it's time to review my approach and my reference material to explicitly address how to handle test anomalies.


High-Performance Testing

Introduction

As an activity, performance testing is widely misunderstood, particularly by executives and managers. This misunderstanding can cause a variety of difficulties—including outright project failure. This article details the topics that I find myself teaching executives and managers time and time again. Learning, understanding, and applying this knowledge on your performance testing projects will put you on the fast track to success.

Insist on Experience

Experienced performance testers will speak your language and guide you through the process of meeting your goals, even if you can't yet verbalize those goals. Experienced performance testers not only know how to relate to executives in terms of business risks, short- and long-term costs, and implications of schedule adjustment but they also know how to explain their trade without all the jargon and techno-babble. Experienced performance testers are used to explaining the relevance of the latest "performance buzz" to your system. They have spent years learning how to extract performance goals from such words as "fast," "maximum throughput," and "xxxx concurrent users" - none of which has meaning in isolation. Consider this example: An executive dictates that, "Each page will display in under x seconds, 100 percent of the time." While this is both quantifiable and testable, it has no meaning on its own. It is the job of the performance tester to define the conditions under which the goal applies, in other words, to determine the goal's context. To have meaning, this goal must address such things as client connection speed, number of people accessing the site, and types of activities those people are performing. These are all variables that affect page response time. A more complete goal would take this form:

Using the "peak order" workload model, under a 500-hourly user load, static pages will display in under x seconds, dynamic pages in under y seconds, and search/order pages in under z seconds, 95 percent of the time with no errors when accessed via corporate LAN.

Experienced performance testers also know how to collect and present data in ways that show whether the system is missing or meeting goals, in what areas, and by how much, without requiring the viewer of this data to have any special technical knowledge of the system under test.

Notice that I use the term "goal" instead of "requirement" when speaking about performance. I do this because I have never been involved in a performance testing project that delayed or canceled a release due to performance test results not meeting the stated criteria. I also choose the term "goal" because virtually no one expects the performance to be as good as he wants prior to Release One. What people hope for is "good enough for now." There is an assumption that performance will be improved during testing, that the production environment will resolve the performance issues detected in the test environment, and/or that adoption will be gradual enough to deal with performance problems as they arise in production. An experienced performance tester will be able to help you convert your feelings about performance into goals and project plans. Above all, performance testers want you, the executive, to understand performance testing so you can make sound, informed decisions. As an executive, you have several important decisions to make about an application during the development lifecycle. Most of the decisions center around three fundamental questions:

* Does it meet the need/specification for which it was developed?
* Will the application function adequately for the users of the system?
* Will the user of the system be frustrated by using it?

The experienced performance tester knows the importance of these questions - and their answers - and will work with you (literally by your side at times) to help you answer the questions in terms of performance.

First and foremost, you must make it known that you expect experienced performance testers on your projects, not "fools with tools" as some folks refer to them. Set the expectation early that the performance tester is expected to interact with you, and that his job is to provide you with the information you need to make sound business decisions about performance issues and risks. Always make a point to personally review performance goals to make sure they contain enough context to make them meaningful for executive-level decision making.

Review the performance test plan and deliverables and ask yourself the following questions:

* Will this assist with "go-live" decisions?
* Is it likely that the results from this plan could lead to a better experience for the end-user?
* Is this likely to be representative of the actual production environment?
* Is this likely to be useful to developers if tuning is necessary?
* Will it provide an answer to each specific requirement, goal, or service level agreement?
* Is taking action based on the results part of the plan?

Finally, invite the performance tester to educate you along the way. In helping you to expand your knowledge about performance testing, the tester will gain a wealth of knowledge about what is most important to you in terms of performance. This mutual understanding and open communication are the best things that can happen to the overall performance of a system.

Begin Performance Testing Before the Application Is Fully Functional

There is a common perception that performance testing can't effectively start until the application is stable and mostly functional - meaning that performance test data won't be available until significantly into a beta or qualification release. This leaves virtually no time to react if, or more realistically when, the results show that the application isn't performing up to expectations.

In actuality, an experienced performance tester can accomplish a large number of tasks and generate a significant amount of useful data even before the first release to the functional testing team. He can create and gain approval of User Community Models and test data and can gather these kinds of statistics:

* Network and/or Web server throughput limits
* Individual server resource utilization under various loads
* Search speeds, query optimization, table/row locking, and speed versus capacity measurements for databases
* Load balancing overhead/capacity/ effectiveness
* Speed and resource cost of security measures
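For instance, a rough sketch of gathering the second kind of statistic above, sampling server resource utilization while a load test runs elsewhere, might look like this (it uses the third-party psutil package; the sampling interval and duration are arbitrary choices for illustration):

# Rough sketch: sample CPU and memory utilization at intervals while a
# load test runs elsewhere, to build the kind of resource-utilization data
# mentioned above. Uses the third-party psutil package; the 5-second
# interval and 60-second duration are arbitrary choices for illustration.

import time
import psutil

def sample_utilization(duration_s=60, interval_s=5):
    """Collect (timestamp, cpu_percent, memory_percent) tuples for duration_s."""
    samples = []
    end = time.time() + duration_s
    while time.time() < end:
        cpu = psutil.cpu_percent(interval=interval_s)  # averaged over the interval
        mem = psutil.virtual_memory().percent
        samples.append((time.strftime("%H:%M:%S"), cpu, mem))
    return samples

if __name__ == "__main__":
    for ts, cpu, mem in sample_utilization():
        print(f"{ts}  cpu={cpu:5.1f}%  mem={mem:5.1f}%")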

Some developers and system architects argue that the majority of these tasks belong to them, but developers rarely have the ability to generate the types of load needed to complete the tasks effectively. Adding the performance tester to these tasks early on will minimize the number of surprises and provide foundational data that will greatly speed up the process of finding root causes and fixing performance issues detected late in the project lifecycle.

This one is pretty obvious. Plan to have a performance tester assigned to the project from kickoff through roll out. Encourage the development team to use the tester's skills and resources as a development tool, not just as a post-development validation tool. It is worth noting that, depending on the project, the performance tester is used for performance-related activities between 50 and 100 percent of the time. The upside is that, because of the skill set noted in the "Skills and Experience" sidebar, this individual can be fully utilized as a supplemental member of virtually every project team. There is one caveat: Make it clear that performance testing is this person's primary responsibility - not an additional duty. This distinction is critical because "crunch time" for performance testing typically coincides with "crunch time" for most of the other teams with which the performance tester may be working.

Don't Confuse Delivery with Done

"Delivery" is an informed decision based on risks and should not be confused with "done." Anyone who has been around testing for a while knows that the system will be deployed when management thinks holding up the release is riskier than releasing it, even if that means releasing it with unresolved or untested performance issues. However, releasing the software is no reason to stop performance testing.

Most applications have a rollout plan or an adoption rate that ensures the peak supported load won't occur until a significant period of time after the go-live day. That is prime time to continue performance testing: there are fewer distractions, actual live usage data is available rather than predictions or estimates, performance can be observed on actual production hardware, and more resources are often available. If there isn't a maintenance release scheduled soon enough to get post-release fixes into production before usage reaches the performance limit, surely it's more cost effective to schedule one than to contend with a performance issue when it presents itself in production.

Plan to continue performance testing after the initial release. Plan to push maintenance releases with performance enhancements prior to the first expected load peak. Incorporating these plans into the project plan from the beginning allows you to release software when performance is deemed acceptable for early adopters rather than holding up releases until the performance is tuned for a larger future load.


Saturday, September 27, 2008

Testing Is Everything !

It’s amazing how lousy software is. That we as a society have come to accept buggy software as an inevitability is either a testament to our collective tolerance, or—much more likely—the near ubiquity of crappy software. So we are guilty of accepting low standards for software, but the smaller "we" of software writers is guilty of setting those low expectations. And I mean we: all of us. Every programmer has at some time written buggy software (or has never written any software of any real complexity), and while we’re absolutely at fault, it’s not from lack of exertion. From time immemorial PhD candidates have scratched their whiteboard markers dry in attempts to eliminate bugs with new languages, analysis, programming techniques, and styles. The simplest method for finding bugs before they’re released into the wild remains the most generally effective: testing.

Of course, programmers perform at least nominal checks before integrating new code, but there’s only so much a person can test by hand. So we’ve invented test suites—collections of tests that require no interaction. Testing rigor is regarded by university computer science departments a bit like ditch-digging is by civil engineering departments: a bit pedestrian. So people tend to sort it out for themselves.
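For readers who haven't seen one, here is a minimal sketch of such a no-interaction test suite using Python's built-in unittest module; the little "cart" functions are invented purely so the tests have something to exercise:

# A tiny, self-contained example of an automated test suite: a couple of
# checks that run with no human interaction. The cart logic is invented
# here only to give the tests something to verify.

import unittest

def add_item(cart, item, price):
    """Add an item's price to the cart dict, accumulating repeat purchases."""
    cart[item] = cart.get(item, 0) + price
    return cart

def total(cart):
    """Return the sum of all prices in the cart."""
    return sum(cart.values())

class CartTests(unittest.TestCase):
    def test_add_item_accumulates_price(self):
        cart = add_item({}, "book", 12)
        cart = add_item(cart, "book", 12)
        self.assertEqual(cart["book"], 24)

    def test_total_of_empty_cart_is_zero(self):
        self.assertEqual(total({}), 0)

if __name__ == "__main__":
    unittest.main()  # running this file executes the whole suite unattended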
