{"id":15414,"date":"2021-05-24T22:03:27","date_gmt":"2021-05-24T16:33:27","guid":{"rendered":"https:\/\/cigniti.com\/blog\/?p=15414"},"modified":"2022-05-17T18:30:09","modified_gmt":"2022-05-17T13:00:09","slug":"demonstrations-experiments-software-testing","status":"publish","type":"post","link":"https:\/\/www.cigniti.com\/blog\/demonstrations-experiments-software-testing\/","title":{"rendered":"Demonstrations, Experiments, and Software Testing"},"content":{"rendered":"

Did you know there's a difference between an experiment and a demonstration? And have you ever considered why this difference is critical to a team of coders, testers, and managers?

In the software development life cycle, both experiments and demonstrations go by the same name: "tests".

While demonstrations show us something we already knew, experiments help us learn things we need or want to know.

The difference is critical because the purpose of testing must not be simply to show that the product can work.

We test to learn about the product so that we can understand it well and address problems before it is too late.

According to Michael Bolton, Lead Consultant at DevelopSense, "The more similar a test is to a previous instance of it, the less likely it is to find a bug. That's why it's essential to include plenty of variation in your testing."
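As a rough sketch of that idea, the Python test below varies its input on every run instead of repeating one hard-coded check. The function under test, parse_amount, is hypothetical and stands in for whatever code you are actually examining; a fixed assertion such as parse_amount("10.00") == 1000 is closer to a demonstration, while the randomized loop behaves more like a small experiment.

```python
import random

def parse_amount(text):
    """Hypothetical function under test: parse a currency string into integer cents."""
    return int(round(float(text.replace(",", "")) * 100))

def test_parse_amount_with_variation():
    # Repeating one fixed input would only demonstrate what we already know.
    # Randomizing the input keeps each run probing cases we have not tried yet.
    for _ in range(100):
        dollars = random.randint(0, 10_000)
        cents = random.randint(0, 99)
        text = f"{dollars:,}.{cents:02d}"   # e.g. "1,234.05"
        expected = dollars * 100 + cents
        assert parse_amount(text) == expected, f"unexpected result for {text!r}"
```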

In his article "Alternatives to Manual Testing," Michael Bolton explains how experiential and exploratory testing are not the same:

\u201cOf course, there’s overlap between those two kinds of encounters. A key difference is that the tester, upon encountering a problem, will investigate and report it. A user is much less likely to do so. (Notice this phenomenon, while trying to enter a link from LinkedIn’s Articles editor; the “apply” button isn’t visible and hides off the right-hand side of the popup. I found this while interacting with Linked experientially. I’d like to hope that I would have find that problem when testing intentionally, in an exploratory way, too.)\u201d<\/em><\/p>\n

While the scope of testing is vast, a context-driven approach to automation in testing brings additional value.

A context-driven approach to Automation in Testing

Test automation can do much more than simply simulate a user pressing buttons.
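As one hedged illustration of that claim, the Python sketch below uses a tool not to replay a scripted click-through but to sweep a service with varied, deliberately awkward inputs and to log anything surprising for a human tester to investigate. The endpoint URL and parameter names are hypothetical, purely for illustration.

```python
import json
import random
import urllib.request
from urllib.parse import urlencode

# Hypothetical endpoint and parameter names, used purely for illustration.
BASE_URL = "https://example.test/api/convert"

def probe(amount, currency):
    """Send one request and report what happened, without judging pass or fail."""
    url = f"{BASE_URL}?{urlencode({'amount': amount, 'currency': currency})}"
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            return resp.status, resp.read().decode("utf-8", errors="replace")
    except Exception as exc:  # record every outcome rather than stopping the sweep
        return "error", repr(exc)

def sweep(runs=200):
    """Generate varied inputs and log anything surprising for a human to investigate."""
    currencies = ["USD", "EUR", "INR", "", "??", "usd "]
    edge_amounts = [0, -1, 0.005, 10**12, "NaN", "1e309"]
    for _ in range(runs):
        amount = random.choice(edge_amounts + [round(random.uniform(0, 1000), 2)])
        currency = random.choice(currencies)
        status, body = probe(amount, currency)
        if status != 200 or "error" in body.lower():
            print(json.dumps({"amount": str(amount), "currency": currency,
                              "status": str(status), "body": body[:200]}))

if __name__ == "__main__":
    sweep()
```

The tool here only generates variation and records observations; deciding which of those observations are actual problems remains a human judgment.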

Context-driven testers choose their testing objectives, techniques, and deliverables by looking first at the details of the specific situation, including the needs of the stakeholders who commissioned the testing.

James Bach, creator of the Rapid Software Testing methodology, and Michael Bolton write in a joint paper: "There are many wonderful ways tools can be used to help software testing. Yet, all across industry, tools are poorly applied, which adds terrible waste, confusion, and pain to what is already a hard problem. Why is this so? What can be done? We think the basic problem is a shallow, narrow, and ritualistic approach to tool use. This is encouraged by the pandemic, rarely examined, and absolutely false belief that testing is a mechanical, repetitive process. Good testing, like programming, is instead a challenging intellectual process. Tool use in testing must therefore be mediated by people who understand the complexities of tools and of tests. This is as true for testing as for development, or indeed as it is for any skilled occupation from carpentry to medicine."

The essence of context-driven testing is the project-appropriate application of skill and judgment. The context-driven school places this approach to testing within a humanistic social and ethical framework.

Ultimately, context-driven testing is about doing the best we can with what we get. Rather than trying to apply "best practices," we accept that very different practices will work best under different circumstances.

Seven Basic Principles of Context-Driven Testing

As laid down by Cem Kaner, J.D., Ph.D., Michael Bolton, and James Bach, the seven basic principles of context-driven testing are:

  1. Good software testing is a challenging intellectual process.
  2. People, working together, are the most important part of any project's context.
  3. Projects unfold over time in ways that are often not predictable.
  4. The value of any practice depends on its context.
  5. The product is a solution. If the problem isn't solved, the product doesn't work.
  6. There are good practices in context, but there are no best practices.
  7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

Illustrations of these principles in action include: