Thursday, August 31, 2017

How Many Test Cases do You Have?

A recent discussion with a senior figure in the software QA business brought up this very question, one I hadn't really bothered about during my past six years as a QA professional. After the discussion was over I managed to find some spare time to reflect on it: does it matter how many test cases there are to test a particular application?

First, a bit of theoretical background

Well, this is something we all know. According to the ISTQB Foundation Level syllabus (which, by the way, can be considered a comprehensive handbook for a test engineer) and the IEEE 829 Test Documentation Standard, test cases emerge as part of the test design and documentation procedures.
Given a software system to test, the test engineers first analyze the system specification (the 'test basis') to find test conditions, which, in simple terms, are things that could be tested. Once the engineers are satisfied that they have found a set of conditions that covers the system's functional and non-functional aspects, they select which of these will actually be used for testing and prioritize them based on the importance and risk levels assigned to each one.
Then come the test cases: the detailed specifications of how one or more test conditions are actually tested. A predetermined set of preconditions, inputs, steps to follow and expected outputs is documented as a test case, which resides in a formal test specification.
On top of these, test procedures are also documented to specify how the test cases should be executed. If test automation comes into the picture, the procedures may extend to automation scripts and the like.
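To make that mapping concrete, here is a minimal sketch of how a documented test case can translate into an automated script. The test case ID, the login behaviour and the FakeApp stand-in are all hypothetical placeholders, not from any particular system.

```python
import pytest

@pytest.fixture
def app():
    """Precondition: a running application with a registered user."""
    # In a real suite this would start or connect to the system under test;
    # FakeApp is just a stand-in so the sketch runs on its own.
    class FakeApp:
        def login(self, user, password):
            return user == "alice" and password == "secret"
    return FakeApp()

def test_tc001_valid_login(app):
    """TC-001: a registered user can log in with valid credentials.

    Inputs: username 'alice', password 'secret'.
    Steps: submit the credentials to the login function.
    Expected output: login succeeds.
    """
    assert app.login("alice", "secret") is True
```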
But the most important point is this: no matter how these standards and 'conventional ways of doing it' are laid out, when you find yourself actually doing the work, it all depends on several different factors.

[To read the complete article visit blog.zone24x7.com/testcases]

Wednesday, October 12, 2016

Get a QA Internship - How hard can it be?

We recently conducted interviews with some QA internship candidates. Having gone through the details stated in their CVs and then conducted face-to-face interviews to assess their capabilities, I feel that those candidates could use some guidance.

Apparently, such guidance is in abundance if you care to search for it. The point is, most of it is based on experience in European or US contexts, whereas in Sri Lanka the situation can differ significantly.

So, how hard can it be? Not much! IF - of course - you invest some time to improve yourself on the following key points:

1. Your CV is your persona

The CV is the first agent that tells potential employers about you. It should be capable of landing you an interview, where you can show your true colors. Some research says it takes reviewers about 5 seconds of glancing at a CV to decide whether to read it in detail.

So do your research on how to prepare a good CV, one that makes the reviewers want to interview you.

2. Don't lie on your CV

You could be tempted to list every theoretical subject you have studied in your CV. The same goes for the latest technical concepts. But the critical question is: what is your actual level of understanding of those skills, technologies and theories you mention in the document? More often than not, you are selected for an interview based on those items (given that you do not have prior working experience).

The interviewers you meet at the first interview typically have many years of hands-on experience with those theories and technical aspects, and they possess a similar or more advanced set of skills, time-tested in practical work. If you have lied about such things, you are just one or two questions away from getting caught red-handed. Then it is simply a waste of time for both you and the interviewers.

3. Know the basics

If you want to take up an internship in the QA field, you are expected to possess at least basic knowledge of the common concepts: defects, the defect life-cycle, test cases, test types, testing methods, SDLC models and so on. Knowledge of basic database management concepts and programming would be an added advantage.

Of course, the interviewers will not look for detailed theoretical knowledge or hands-on practical expertise. But you cannot get selected without a basic understanding of what you are going to do, either.

4. Do not lie at the interview

If you do not know the answer to something the interviewers ask, it is OK to say that you don't know. As I mentioned earlier, your interviewers will know right away if and when you lie. They may still pretend to accept the answer and move on to other questions, but you have already lost points.

5. Smile and be confident

I mean, come on - you want to be a knowledge-worker, don't you? You have learned things from your coursework. You may also have practical know-how in some areas. That's your selling point. So be confident about what you know and what you can do. Those are the points that could land you the job.

As a candidate for an internship, the most important things expected of you are confidence in your knowledge and skills and, more importantly, a positive attitude towards learning and improving yourself.

Sunday, December 13, 2015

Are your test cases efficient & effective?


Now that I have put myself in the camp of those who agree that the number of test cases does not matter as long as all the test conditions are covered efficiently and effectively, it seems a good time to look into the technical background of that claim.

What is being 'efficient & effective'?

As I mentioned in my previous post, you can ignore the number of test cases you write and execute if you are efficient and effective at designing them. So, what is being efficient and what is being effective? As Peter Drucker puts it in simple terms, "Efficiency is doing things right; effectiveness is doing the right things". In our context of test case writing, to be efficient you have to write just an adequate number of test cases, so that no extra time is spent on repetitive or invalid tests. To be effective you have to cover the right set of test conditions.

So how hard can it be? As it turns out, 'not much'! The best way is to utilize some of the many test design techniques.

Test case design techniques

There are many techniques in common practice when it comes to the matter of designing and implementing an efficient and effective set of test cases.

Equivalence partitioning and boundary value analysis (EP & BVA) are a combination of the two most-used black-box test design techniques. As Kanif Fattepurkar correctly states here, "This technique is used to reduce an infinite number of test cases to a finite number, while ensuring that the selected test cases are still effective test cases which will cover all possible scenarios."
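As a minimal sketch of EP & BVA in practice, consider a hypothetical validate_age function that accepts ages from 18 to 60; the function and the range are invented here purely for illustration.

```python
import pytest

def validate_age(age: int) -> bool:
    """Stand-in implementation: valid if 18 <= age <= 60."""
    return 18 <= age <= 60

# Equivalence partitions: below the range, inside it, above it.
# One representative value is picked per partition.
@pytest.mark.parametrize("age,expected", [
    (10, False),   # partition: below the valid range
    (35, True),    # partition: inside the valid range
    (75, False),   # partition: above the valid range
])
def test_equivalence_partitions(age, expected):
    assert validate_age(age) == expected

# Boundary values: the edges of each partition, where defects cluster.
@pytest.mark.parametrize("age,expected", [
    (17, False), (18, True),   # lower boundary and its neighbour
    (60, True),  (61, False),  # upper boundary and its neighbour
])
def test_boundary_values(age, expected):
    assert validate_age(age) == expected
```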

Decision tables (also known as cause-effect tables) additionally help to test combinations of inputs and other conditions. Given the number of combinations among all available test conditions, this technique helps to systematically determine which of those combinations should be tested.
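Here is a small sketch of a decision table driving tests. The discount rule (member status and coupon possession deciding the discount) is invented for the example; replace it with the business rules of your own system under test.

```python
import pytest

def discount(is_member: bool, has_coupon: bool) -> int:
    """Stand-in implementation of the rules tabulated below."""
    if is_member and has_coupon:
        return 20
    if is_member or has_coupon:
        return 10
    return 0

# Each row is one column of the decision table:
# (is_member, has_coupon) -> expected discount %
DECISION_TABLE = [
    (True,  True,  20),
    (True,  False, 10),
    (False, True,  10),
    (False, False,  0),
]

@pytest.mark.parametrize("is_member,has_coupon,expected", DECISION_TABLE)
def test_discount_rules(is_member, has_coupon, expected):
    assert discount(is_member, has_coupon) == expected
```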

When the system under test gives different outputs for the same inputs depending on changes made by previous inputs (which makes it a 'finite state system'), a selection of its states and transitions can also be considered for testing. Identifying critical paths among the possible set of transitions helps to determine the level of testing that should be incorporated.
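A minimal sketch of state transition testing follows, using a hypothetical ATM card session as the finite state system; the states, events and transitions are all assumptions made for the example.

```python
import pytest

# Valid transitions: (current_state, event) -> next_state
TRANSITIONS = {
    ("idle", "insert_card"): "card_inserted",
    ("card_inserted", "enter_pin_ok"): "authenticated",
    ("card_inserted", "enter_pin_bad"): "idle",
    ("authenticated", "eject_card"): "idle",
}

def next_state(state: str, event: str) -> str:
    """Stand-in implementation: invalid events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

# One critical path through the machine, checked transition by transition.
@pytest.mark.parametrize("state,event,expected", [
    ("idle", "insert_card", "card_inserted"),
    ("card_inserted", "enter_pin_ok", "authenticated"),
    ("authenticated", "eject_card", "idle"),
    # A negative test: an invalid event must not change the state.
    ("idle", "eject_card", "idle"),
])
def test_transitions(state, event, expected):
    assert next_state(state, event) == expected
```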

The latter two techniques also have the by-product of helping both the development and test teams gain a better understanding of the system.

How many of those techniques are out there?

There are many other techniques, such as use-case-based testing, coverage-based testing, error guessing and exploratory testing, to name the most commonly used. Besides, most of these are considered 'black-box' techniques, whereas many white-box and other techniques are also available.

The most important point would be that it is always up to you - the one who does the designing - to select which techniques to use.

Assumptions, Assumptions everywhere!

The most important point is that almost all of these techniques rely on assumptions for their success. The following are a couple of them:
  • In EP, it is assumed that all the 'members' of a given partition are processed the same way, which in turn means that if one member passes the test, so would the others - and vice versa
  • Also, both the EP and BVA techniques assume that a given failure is caused by a single defect, not a possible combination of multiple defects (the 'single fault assumption')
So, as you can see, the applicability of these assumptions to the test conditions at hand, along with the ability of the test designers to correctly identify the required inputs for the techniques (such as identifying the boundaries and partitions correctly), has a considerable impact on the end results.

'WIIFY'

What's in it for you? Well, two major things:
One - the number of test cases you have to write and execute becomes smaller. Yet, rest assured, you have not missed anything important, because:
Two - the coverage obtained over the identified test conditions will be above average, in the sense that you have covered all the important and 'could-go-wrong' conditions.

So at the end of the day, if you properly use the techniques (probably a combination of multiple techniques, rather than just a 'one-size-fits-all' choice) you will get to write and execute a lower number of test cases without missing (ideally) anything important.
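To put a rough number on that, here is a back-of-the-envelope count using the hypothetical age field from the EP & BVA sketch above (valid range 18-60, possible inputs 0-120); the figures are illustrative, not from a real project.

```python
# Illustrative count only: the field and its ranges are hypothetical.
exhaustive = 121                    # one test per possible value, 0..120
partitions = 3                      # below / inside / above the valid range
boundaries = 4                      # 17, 18, 60 and 61
designed = partitions + boundaries  # 7 designed tests instead of 121
print(f"{exhaustive} exhaustive tests reduced to {designed}")
```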

Saturday, November 28, 2015

Why Smoke Test?

According to the common textbook definition, a smoke test in the software context is the preliminary testing done to check whether a given release contains severe and straightforward defects that are sufficient to reject the release without any further tests.

Though commonly used in the software industry nowadays, the term is believed to have originated in the electronics industry. When an electronic device comes off the production line for quality checking, the very first test done on it is usually to apply power and check whether any actual smoke comes out. Of course, the device under test is rejected if it smokes at this stage.

More or less similarly, in the software industry the very first line of testing done on a particular build or release is to check whether it 'smokes' - that is, to check whether there are any straightforward issues that could stop any further testing.
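In automated form, a smoke suite can be as small as a couple of checks that the release is even alive. The base URL and endpoints below are hypothetical placeholders; point them at whatever your build actually exposes (this sketch uses the Python requests library).

```python
import requests

BASE_URL = "http://localhost:8080"  # assumed deployment address

def test_application_is_up():
    """The release 'smokes' immediately if the app won't even respond."""
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    assert response.status_code == 200

def test_login_page_loads():
    """A critical page failing to load is grounds to reject the build."""
    response = requests.get(f"{BASE_URL}/login", timeout=5)
    assert response.status_code == 200
```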

I gave this blog the name of this specific test method, though the intention spans beyond the common objectives of a typical smoke test. I intend to write here about the experiences I have collected over my past years as a software quality assurance professional. So let's see how it goes.