{"id":16563,"date":"2021-11-15T19:17:33","date_gmt":"2021-11-15T13:47:33","guid":{"rendered":"https:\/\/cigniti.com\/blog\/?p=16563"},"modified":"2022-12-01T16:11:37","modified_gmt":"2022-12-01T10:41:37","slug":"shifting-left-quality-cost","status":"publish","type":"post","link":"https:\/\/www.cigniti.com\/blog\/shifting-left-quality-cost\/","title":{"rendered":"Shifting Left: Using quality cost as a force to overcome resistance"},"content":{"rendered":"

A petrochemical company is collaborating on the requirements for a new order transaction type that must meet the needs of customers requesting that orders be split across multiple shipments. The representative from the transportation group is absent. The team hands the split-shipment requirement to the business analyst team for design.

The vital piece of information that never made it into the requirements was the compartment capacity of individual tanker truck types. The lifecycle continues with the requirements defect now nested in the solution: technical specifications are drafted, the new order type is coded, interfaces are modified, data objects are defined and mapped, and test cases are written and executed.
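To make the gap concrete, here is a minimal sketch of the kind of capacity check the missing requirement implied. The names (TankerType, SplitShipment, validate_split) and figures are purely illustrative assumptions, not the company's actual system; the point is that, because compartment capacity was never captured, no rule like this ever reached design, code, or test.

```python
# Hypothetical sketch of the validation the missing requirement implied.
# All names and numbers are illustrative, not from any real system.

from dataclasses import dataclass
from typing import List

@dataclass
class TankerType:
    name: str
    compartment_capacities: List[float]  # capacity of each compartment, in litres

@dataclass
class SplitShipment:
    tanker: TankerType
    quantities: List[float]  # product volume assigned to each compartment

def validate_split(shipment: SplitShipment) -> List[str]:
    """Return a list of problems; an empty list means the split is feasible."""
    problems = []
    caps = shipment.tanker.compartment_capacities
    if len(shipment.quantities) > len(caps):
        problems.append("more splits than compartments on this tanker type")
    for i, (qty, cap) in enumerate(zip(shipment.quantities, caps)):
        if qty > cap:
            problems.append(f"compartment {i + 1}: {qty} exceeds capacity {cap}")
    return problems

# Example: a three-compartment tanker asked to carry a split that does not fit.
tanker = TankerType("Type A", [10000, 8000, 8000])
print(validate_split(SplitShipment(tanker, [9000, 9000, 5000])))
# -> ['compartment 2: 9000 exceeds capacity 8000']
```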

Not until the User Acceptance Test is the design flaw identified. At that point, it is estimated that the effort to resolve the defect, including modifying impacted technical and data objects, changing supporting designs, and rewriting test cases, will run into thousands of man-hours. In fact, several studies indicate that fixing a defect during UAT costs about 50 times more than resolving it during the design phase.
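The arithmetic behind that multiplier is simple but sobering. The sketch below uses the 50x UAT figure cited above; the 200-hour design-phase effort and the 160-hour month are assumed values for illustration only, not measurements from this case.

```python
# Illustrative cost-of-defect arithmetic. The 50x multiplier is the figure
# cited above for UAT vs. design; the baseline effort is an assumed example.

design_phase_fix_hours = 200   # assumed effort to fix the flaw during design
uat_multiplier = 50            # cost ratio cited for fixing the same flaw in UAT

uat_fix_hours = design_phase_fix_hours * uat_multiplier
print(f"Design-phase fix: {design_phase_fix_hours} hours")
print(f"UAT-phase fix:    {uat_fix_hours} hours "
      f"({uat_fix_hours / 160:.0f} person-months at 160 hours/month)")
# -> 10000 hours, roughly 62 person-months of rework
```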

Causes for Test Squeeze

Some of the causes of test squeeze include underestimated development timelines, scope creep, and complexity of requirements. The habit of recreating what we do now, combined with a lack of investment in process optimisation, also leads to unnecessary modifications.

A few other causes include underestimation of downstream impacts, especially when legacy systems are involved, and the perception that "test" is not a value-add; the language has to change to quality management / assurance.

Waterfall delivery habitually leaves all testing to the end rather than dropping modules into test as they are completed. The result is a high defect rate, with defects built in so deeply that they are much harder to change.

Data quality is often poor, leading to invalid test cases. For example, GML reporting: a 100% success rate on data integrity testing was reported.

Digging further showed that, out of a data set of tens of thousands of rows, the test data set had been reduced to 19 rows, all of which succeeded; every other row had failed. "Happy path" testing does not allow us to anticipate issues in production. There is a need to model "what could possibly go wrong?"
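A minimal sketch of the difference, under assumptions: the integrity rule, field names, and generated rows below are made up for illustration. Running the check over the full extract, rather than a hand-picked subset, surfaces the failure rate that the 19-row sample hid.

```python
# Illustrative contrast between happy-path sampling and full-data-set checking.
# The integrity rule and the data are fabricated for the example only.

import random

def row_is_valid(row: dict) -> bool:
    # Stand-in integrity rule: quantity must be positive and a unit price present.
    return row["qty"] > 0 and row["unit_price"] is not None

# Fake extract: tens of thousands of rows, most of them broken (as in the GML case).
random.seed(1)
full_extract = [
    {"qty": random.choice([0, -1, 5]), "unit_price": random.choice([None, 9.99])}
    for _ in range(50000)
]

# Happy-path approach: keep only rows already known to pass, then test those.
happy_path_sample = [r for r in full_extract if row_is_valid(r)][:19]
sample_rate = sum(map(row_is_valid, happy_path_sample)) / len(happy_path_sample)
print(f"Sample pass rate: {sample_rate:.0%}")          # reports 100%

# Full-extract approach: every row is checked, and the real picture emerges.
full_failures = sum(1 for r in full_extract if not row_is_valid(r))
print(f"Full extract: {full_failures} of {len(full_extract)} rows fail the check")
```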

The test squeeze is also driven by underestimation of the scale of the test effort, exacerbated by unrealistic estimates. For instance, the quote to test vanilla Oracle RMS core functionality came to more than 11 man-years.

What drives this behavior: cognitive dissonance?

Retailers are very cost-sensitive: the focus is always on reducing costs and maximizing profit, while IT projects are high-cost up front, often with long paybacks. Reluctance to invest at the beginning leads to a panicked "whatever it takes" effort at the end.