Via LinkedIn:
https://www.testingalgorithms.com/survey.html
Friday, July 8, 2016
Friday, December 18, 2015
Troubleshooting checklist
Anybody facing the necessity to troubleshoot hopes for a miracle.
For something like closing your eyes and, when you open them, seeing the problem just solving itself without much fuss or explanation. We all know this feeling. We also know it almost never works this way, because there is no such thing as a miracle.
But there are causes and effects instead. And some brain energy is required to make sense of what you see and to get out of the entanglement. And that brain energy is something that is taken for granted but is not that readily available.
Those little gray cells
Brainwork is something we think is always there and always will be. You probably no longer believe that storks bring children[1], but for some reason you are sure that your brain is a kind of perpetuum mobile that functions under any conditions and knows nothing about sabotage.
That's a lie. It does. And a good (really boring but really good) book[2] by Daniel Kahneman reminds you a good number of times that the rational, non-automatic, weighing part of your mind is lazy (which is where Kahneman ends and general psychology begins), and being lazy means its resource is far from enough.
Monday, November 30, 2015
What if test certification systems put us in a box to think inside it?
With vertical thinking one uses the negative in order to block off certain pathways. With lateral thinking there is no negative.
Edward de Bono, Lateral Thinking
Sometimes I get those anarchistic ideas that may, if voiced at an inappropriate moment, seriously damage your reputation, unless you put them in a nicely colored gift package of justification and proof. And though some may see this as opening Pandora's box, at the bottom there is always the hope of finding an important clue or even of leading yourself out of the dead end.
Today's idea is the following. What if we limited access to any exams or certification systems for any tester with under 2 or 3 years of relevant hands-on experience? The same way they do for MBA programs or for certain levels of training for system administrators? Mind you, not to information, or books, or training, but to certification. Because I strongly believe there is a huge difference between critically reading a book and drilling something in order to check certain boxes in a test.
Even if you are fully aware that your own opinion is different. Or (which is worse) if you have no opinion and take things on faith. Testing is an activity of the scientific type, not a religious practice, and we need to be extremely careful with faith and with our ability to say what we think. Test practice proves that a good tester needs to be honest and courageous, because sometimes this is exactly what makes the difference between true and false.
Thursday, January 8, 2015
SoapUI, Groovy and the meaning of life
SoapUI is generally known as a tool for testing web services. Opinion divides on whether working with it is a pleasure or a torture, due to certain differences in the professional background of those who give that opinion. Personally, I believe it is a really good tool, but, unlike a washing machine, it needs its fantastic manual to be looked through at the very least: http://www.soapui.org/
Like many other IT tools, SoapUI comes in two variants: a paid edition and a free-of-charge community edition. And on top of these two there are two ways of using it (which one you choose depends on your level of expertise). They are:
-- using it as any other UI-based tool
-- using it as an extensible multi-purpose multi-tool with some UI for backward compatibility with the brains of normal users
The latter will be discussed as a remedy against professional midlife crisis and the like.
Why the meaning of life, though? The thing is, I see SoapUI and the testing of web services as a great opportunity for those people who crave technical tasks but cannot get them, either due to limited skill with Java or because all the tasty vacancies are already filled. Practice proves that dissatisfaction with your job does not always result from shortcomings of your profession but from your lack of understanding of what would make up for them. In other words, from a failure to identify the root cause of your dissatisfaction. Please note that the root cause is assumed to be purely professional, not psychological (psychological issues are out of scope for this post).
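To give a taste of the second way of using it, here is a minimal sketch of a Groovy script test step. It assumes a request step named "GetAccount" exists in the same test case (the name is invented for illustration); the log, context and testRunner variables are injected by SoapUI itself.

// Minimal Groovy script step sketch; "GetAccount" is a hypothetical
// request step in the same test case. log and context come from SoapUI.
def response = context.expand('${GetAccount#Response}')  // raw XML of the previous step
def xml = new XmlSlurper().parseText(response)           // plain Groovy XML parsing
assert xml.name() == 'Envelope'                          // a SOAP response is rooted in an Envelope
log.info "GetAccount returned ${response.length()} characters"

A handful of lines like these already turn a point-and-click tool into a programmable one, which is exactly the cure for the crisis mentioned above.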
Testers' biggest grudges
For some reason testers are believed to be less important than developers. And the less important you are, the less you get paid. The most popular justification is that testers are less skilled than developers. I think there is a kind of bias at work here, as there are more skilled and less skilled people in either profession. And highly skilled testers usually have experience with different areas of the software development domain, including technical ones. And still testing is considered the second-best profession compared with development.
Different factors play into it, such as country specifics, company and product specifics, cultural specifics and so on, but for the limited scope of this post I will concentrate on the following:
-- the history of the issue
-- the specifics of the testing profession
-- the specifics of the testing mindset
-- the professional evolution of a tester
Testing scientifically - Differential diagnosis in testing (troubleshooting scientifically)
Everybody lies
you know perfectly well where this comes from
Sometimes your testing goes smoothly and dully, but sometimes you just can't make head or tail of what you observe. It feels like there is some system behind it, but a quite complex one, as if more than one factor were influencing the result. Sometimes it is extremely useful to realize there may be more than one problem behind it. Luckily, testing was not invented together with the software industry, so we may go to some older sciences for help.
In the medical world there is a tool known as differential diagnosis, or DDx[5]. It employs a few basic steps that help take our big chaos and sort it into several meaningful lumps. Translated from wiki-medical into software testing language, these basic steps[6] look like this (a toy sketch of the elimination loop follows the notes below):
1. Find out all the info about your piece of software (i.e. requirements, configuration etc.)
2. Make up a list of observed symptoms (not just where it contradicts the requirements, but a full description of the actual behavior)
3. Make up a list of possible causes (aka "candidate conditions")
4. Prioritize these causes by severity (i.e. impact, or how they affect the interested parties)
5. Eliminate the causes one by one until you get to the bottom of it
Note: remember that there may be more than one cause. If some cause was found but did not explain all the symptoms, this may be because
-- there is another cause responsible for the rest of the symptoms
-- your guess is wrong
Note: sometimes you may get no result; it might mean that something was not taken into consideration or that we do not possess all the required information
Note: Occam's razor may be of some help as well: "When you hear hoofbeats, look for horses, not zebras" [7]
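Here is the promised toy Groovy sketch of steps 3-5. The symptoms, the candidate causes and the symptoms each cause would explain are all invented for the example.

// Toy sketch of the DDx elimination loop (steps 3-5); everything below is invented.
def observed = ["save button stays disabled", "no error in the UI log"] as Set

// candidate causes mapped to the symptoms each one would explain,
// already ordered by severity (step 4)
def candidates = [
    "validation fails on a hidden field": (["save button stays disabled", "no error in the UI log"] as Set),
    "session expired silently"          : (["save button stays disabled"] as Set)
]

// step 5: rule out causes whose predicted symptoms we do not actually observe
def plausible = candidates.findAll { cause, predicted -> observed.containsAll(predicted) }
plausible.each { cause, predicted -> println "still plausible: ${cause}" }

Note that in this example both causes survive the elimination, which is exactly the "there may be more than one cause" situation from the first note above.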
--------------
Footnotes:
[5] Differential diagnosis on Wikipedia: http://en.wikipedia.org/wiki/Differential_diagnosis
[6] Original description of steps from wiki (same as [5]):
Differential diagnosis has four steps.
First, the physician should gather all information about the patient and create a symptoms list.
Second, the physician should make a list of all possible causes (also termed "candidate conditions") of the symptoms.
Third, the physician should prioritize the list by placing the most urgently dangerous possible cause of the symptoms at the top of the list.
Fourth, the physician should rule out or treat the possible causes beginning with the most urgently dangerous condition and working his or her way down the list.
"Rule out" practically means to use tests and other scientific methods to render a condition of clinically negligible probability of being the cause. In some cases, there will remain no diagnosis; this suggests the physician has made an error, or that the true diagnosis is unknown to medicine. Removing diagnoses from the list is done by making observations and using tests that should have different results, depending on which diagnosis is correct.
[7] Original quote from wiki (same as [5]):
As a reminder, medical students are taught the adage, "When you hear hoofbeats, look for horses, not zebras," which means look for the simplest, most common explanation first. Only after the simplest diagnosis has been ruled out should the clinician consider more complex or exotic diagnoses.
Thursday, August 22, 2013
Testing scientifically - Scientific method in testing
Scientific method in testing
The series begins here.
Aristotle is believed to have thought that women had fewer teeth than men. At this point we can't say for sure whether he could not count or whether women in the Greece of his time had problems with nutrition.
A fact.
To tell the truth, the best definition of the scientific method ever is already provided by Wikipedia, and according to it:
-- the method should be empirical and rely on measurable evidence [2]
-- the method should prove or disprove a theory/assumption [3]
-- the approach and analysis should be as unbiased as possible [4]
-- knowledge should be documented and sharable [4]
-- results should be reproducible [4]
All of these map perfectly onto what we know about testing. In fact, any good testing professional has been doing this for years without always realising that there was a solid scientific background behind it all the time. Let's look at each of the above points closely.
Well, being empirical. Unless you are in an SQA department, your job is 100% empirical anyway.
Measurability. In this case measuring is not about centimeters but should be taken in a somewhat wider sense: say, you do not expect your test to return some vaguely OK result, you expect it to return either a specific number of lines, or no error message, or whatnot. But you need to be specific.
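A minimal Groovy sketch of the difference; the response value is hard-coded here, while in a real test it would come from the system under test.

// Sketch: "specific" expectations expressed as asserts, not a vague "looks fine".
def response = ["line 1", "line 2", "line 3"]   // stand-in for a real test output
assert response.size() == 3                     // a specific count, not "some ok result"
assert !response.any { it.contains("ERROR") }   // and no error message anywhere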
Being an applied technique, testing is quite remote from most theories and deals with lower-level things like requirements and assumptions (derived directly from said requirements). As such, your test is supposed not just to do something abstract but to either prove or disprove an assumption. Suppose that under certain circumstances some button is expected to turn blue. The assumptions are (see the sketch after this list):
-- the button must look blue, given the preconditions are met
-- the button should not look non-blue
-- etc.
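A toy sketch of those assumptions as executable checks; meetPreconditions() and buttonColor() are hypothetical stubs standing in for whatever UI automation you use.

// Toy sketch: the blue-button assumptions as checks; both helpers are hypothetical.
def meetPreconditions() { /* bring the application into the required state */ }
def buttonColor(String name) { "blue" }  // stub; real code would query the UI

meetPreconditions()
def colour = buttonColor("submit")
assert colour == "blue"   // assumption 1: the button looks blue
assert colour != "grey"   // assumption 2, one instance of "non-blue": not greyed out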
It is possible to talk endlessly about human biases and how testers, being human, suffer from them severely. Here are some of them:
-- fear of looking incompetent (and so not reporting a potential issue)
-- fear of aggressive reactions (and so not reporting a potential issue)
-- being too lazy to deal with the consequences (and signing it off in the hope that by the moment it fires you'll be far away)
-- etc.
Sharing knowledge. Ahh. Knowledge means power, and unless you are lucky enough to work for a company with the right corporate culture, you will have to beg for it until you build up your own informational capital and become able to trade. There is a lot of literature on business and functional analysis with a lot of explanations of why documenting is important, so I'll be brief here:
-- if information is documented and shared, you do not have to waste anybody's time and effort to get it
-- if information is documented and shared, it is less likely to be lost if key persons leave
-- if information is documented and shared, it is much easier to make sure your tests are well aligned with the requirements
-- etc.
Reproducible results. A tiny but crucial difference between a bug and a glitch. To prove there is a bug, it is important to be able to show that a very specific cause leads to a very specific result. Being inaccurate in this usually leads to non-reproducibles and I-do-not-wanna-know-it-works-on-my-machine stuff. Golden rule: make sure to reproduce it at least twice before you claim that it works or that it is a bug. As I mentioned earlier, there are both false positive and false negative pitfalls.
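A sketch of the golden rule in Groovy; runScenario() is a hypothetical stub that executes one attempt and returns its outcome.

// Sketch: repeat the scenario before claiming "pass" or "bug".
def runScenario() { "button turned blue" }  // stub; real code would drive the system

def outcomes = (1..3).collect { runScenario() }
if (outcomes.unique(false).size() == 1) {
    println "reproduced ${outcomes.size()} times: ${outcomes[0]}"
} else {
    println "outcomes differ, keep digging before filing: ${outcomes}"
}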
-------------
Footnotes:
[2] "To be termed scientific, a method of inquiry must be based on empirical and measurable evidence subject to specific principles of reasoning." Source: http://en.wikipedia.org/wiki/Scientific_method
[3] "The chief characteristic which distinguishes the scientific method from other methods of acquiring knowledge is that scientists seek to let reality speak for itself, supporting a theory when a theory's predictions are confirmed and challenging a theory when its predictions prove false." Source: http://en.wikipedia.org/wiki/Scientific_method
[4] "Scientific inquiry is generally intended to be as objective as possible in order to reduce biased interpretations of results. Another basic expectation is to document, archive and share all data and methodology so they are available for careful scrutiny by other scientists, giving them the opportunity to verify results by attempting to reproduce them." Source: http://en.wikipedia.org/wiki/Scientific_method
-------------
To be continued in part 3
Testing scientifically - Testing and quality
Testing and quality
Quality is conformance to requirements, not 'goodness' or 'elegance'
usually attributed to Phil Crosby
Testing is a measure taken in order to make sure you get exactly what you expect. Or something very much like it. Or something that at least has some of the expected features. Testing deals with how you put your expectations into words, and as such is closely connected with the message-sent-is-not-the-same-as-message-received thing. This is where requirements come in and the analyst plays his or her part.
OK. Suppose we agreed on the meaning of words and share our goals; how do we prove we got what we wanted? And this is where testing starts and the test specialist appears on the scene. In an everyday context "test" stands for checking. In a narrower scientific context it implies a number of actions that prove some statement. Do you remember formal logic? If something in a statement contradicts actual reality or common sense, the statement is considered false. Otherwise it is said to be true. It works pretty similarly with testing, except that reality, and quite often common sense, are replaced with requirements. If the test results contradict the requirements, the test is failed. If the test results are in perfect harmony with those requirements, the test is passed or, in other words, we got what we wanted (or agreed to think that we wanted this).
NB: Sometimes results turn out to be false positives or false negatives. This may happen for a number of reasons, like a poor understanding of what is going on, an ambiguous requirement or something being wrong with the test design. We will discuss this a bit later.
Roughly, all tests may be split into two categories: tests that try to confirm that everything works all right and tests that try to prove there is something wrong. Scientifically these two approaches are called verification and falsification, and their history goes as far back as Karl Popper's writings and even further than that. It is important to remember that this in no way contradicts or replaces specific methods of building test coverage such as equivalence partitioning, boundary value analysis, cause and effect, error guessing or exhaustive testing.[1]
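A minimal Groovy sketch of the two attitudes; submitOrder(), validOrder() and emptyOrder() are hypothetical stubs invented for the example.

// Sketch: verification vs falsification for an order-submission requirement.
def validOrder() { [items: ["book"]] }
def emptyOrder() { [items: []] }
def submitOrder(Map order) { order.items ? "ACCEPTED" : "REJECTED" }  // stub logic

// verification: confirm the documented happy path works
assert submitOrder(validOrder()) == "ACCEPTED"
// falsification: actively try to break the requirement with a hostile input
assert submitOrder(emptyOrder()) == "REJECTED"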
Why mess with science when the only thing we want is to make sure it works during the sales demo? In fact, this question is crucial and the answer is simple. If your goal is just to sell something that somewhat works, then testing is not necessary. Testing is only required if you expect to get something specific, and getting what you expect constitutes quality. In case you need quality, messing with science is unavoidable.
-------------
Footnotes:
[1] Based on this page (Russian, with automatic translation): About testing
-------------
To be continued in part 2