Software Testing and Analysis Focused on Results, Not Just Metrics
Clarity on Functional Risk Depends on More than Simply Executing Tests

Excellence in engineering initiatives depends on an appropriate balance between execution and discovery. For high ROI in software testing and analysis, it pays to embrace an approach built around the organizational needs that Software QA, Analysis, and Test Automation satisfy. At the same time, high ROI also depends on accounting for the downstream needs these activities create -- the more proactively, the better.
When software testing and analysis are approached strictly as a set of tasks, though, and focus shifts from results to metrics, organizations often encounter setbacks that hamper ROI and can become a bigger-picture liability for the organization.
I offer two decades of experience helping software development organizations strike this balance between execution and discovery, make testing high ROI, and deliver excellence in software engineering. All of this on top of an early start doing QC in a print shop.
The services outlined here (provided through Upstream Consulting LLC) help software development organizations improve ROI in Test Engineering by focusing on clearly defined engineering problems related to gathering data and feedback from work products.
Testing Strategy
If you know what you want to test, and you have an idea how you'd like to test it, you should be off to a good start. Sometimes it's getting to that point, though, that's the challenge.
For me, testing strategy incorporates one or more of the services outlined below into an overall plan of attack for testing a piece of software (or component functionality).
Test Framework / Library Development
I've built custom test automation frameworks in a couple of different languages and for different types of testing. I've also evaluated and troubleshot custom frameworks, so I'm generally familiar with what works and what doesn't. Communication is key, though.
I approach framework and library development like custom application development. Once we have a clear understanding of the testing problem and which tools might best solve it, I can get to work on a potential solution.
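As a simplified illustration, here is the kind of small library component such an effort might produce: a thin Selenium wrapper that centralizes wait logic so individual tests don't each reinvent synchronization. The class and method names are hypothetical, not taken from any client project.

```java
import java.time.Duration;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

// Illustrative library component: the wrapper owns the synchronization
// policy so tests read as intent rather than mechanics.
public class ElementActions {

    private final WebDriverWait wait;

    public ElementActions(WebDriver driver) {
        // One shared timeout policy, owned by the library rather than by each test.
        this.wait = new WebDriverWait(driver, Duration.ofSeconds(10));
    }

    // Click only once the element is confirmed clickable, which tends to
    // reduce flaky failures in UI suites.
    public void click(By locator) {
        wait.until(ExpectedConditions.elementToBeClickable(locator)).click();
    }

    // Clear and type in one call, so calling tests stay short and readable.
    public void type(By locator, String text) {
        WebElement field = wait.until(ExpectedConditions.visibilityOfElementLocated(locator));
        field.clear();
        field.sendKeys(text);
    }
}
```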
Relevant Insights
- How I Engineered a Solution to Improve UI Testing Stability and Reduce Test Runtime by 90%
- Code Walkthrough: Simple Framework Running UI Tests with Cucumber-JVM, SpringBootTest, and Selenium
- Understanding Test Automation Frameworks: What is a Test Automation Framework?
Test Development
Automated tests don't just help evaluate a work product; they help make sense of the system under test and document both what makes it valuable and how that value gets tested.
When I write automated tests, these are the sorts of things I focus on, along with making sure tests are generally stable and performant.
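As a minimal sketch of what that looks like in practice: the test name states the expected behavior, the body arranges only what's needed, and a single assertion serves as the focal point. The DiscountCalculator here is hypothetical, inlined just to keep the example self-contained.

```java
import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class DiscountCalculatorTest {

    // Hypothetical system under test, inlined so the example stands on its own.
    static class DiscountCalculator {
        long applyPercentOff(long priceInCents, int percentOff) {
            return priceInCents * (100 - percentOff) / 100;
        }
    }

    @Test
    void aQuarterOffReducesTwoHundredDollarsToOneFifty() {
        DiscountCalculator calculator = new DiscountCalculator();

        long discountedCents = calculator.applyPercentOff(20_000L, 25);

        // The assertion is the focal point: one line that documents, in
        // checkable terms, the behavior that makes this code valuable.
        assertEquals(15_000L, discountedCents);
    }
}
```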
Relevant Insights
- More than a Hot Take: An Assertion Statement Should Serve as the Focal Point for Any Well-Written Automated Test Specification
- Making the Most of Throwing Errors: Exploring Why "Fail" Could be One of the Most Valuable Things Automated Test Code Can Do
- Collating Test Methods to Limit Trips to External Systems in Automated Tests
Test Planning
I have written test plans for everything from a simple read-only Web page to a complete rewrite of saved search functionality in a managed information system. I'm pretty methodical, and time has shown that the types of tests I plan don't just help testers.
The test plans I write are effectively outlines describing which parts of a system (or its component functionality) should be inspected. Within the outline, the plan poses questions framed around stakeholder concerns (along with variations on those questions). These serve as lines of inquiry for addressing the concerns while leaving testers and developers flexibility when the functionality is still in active development.
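As a hypothetical fragment, an outline for something like the saved-search rewrite mentioned above might read:

- Saved searches: persistence
  - Does a saved search survive sign-out and sign-in? (Stakeholder concern: users depend on saved searches across sessions.)
  - What happens to a saved search that references a field the rewrite renames or removes?
- Saved searches: sharing and permissions
  - Can one user see or run another user's saved searches? Should they be able to?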
Risk Assessments
It's easy to think about what you anticipate happening when a work product behaves as expected. But what about when it doesn't, or about what might go wrong in the first place? I have experience evaluating software both under development and post-shipment, describing the ways those systems might fail and what the implications of anticipated failures might be.
The risk assessments I write are effectively reports on what I find when I evaluate a system or component functionality. The more of that system I can understand and access, the more I can generally find.