All our testers are coders (i.e. devs), because the tests we use are pretty much all "automated tests" driven by software which has to be coded.
Manual testing is useful for testing a user interface to see how it "looks and feels" (sometimes called UX testing, for "user experience"), but comprehensive testing is best done by software. Doing it in software also means you have tests you can re-use over and over to detect if something gets broken. Human testers won't remember to test everything, and they ultimately become too expensive if you want to run the tests every time the software changes.
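For instance, something like this toy Python sketch (the function and the recorded values are made up for illustration, not taken from our real test suite):

```python
# Toy sketch: once the expected results are recorded, this same check
# can re-run on every build and catch anything that breaks, without a
# human having to remember to test it. All names here are hypothetical.

def format_balance(amount: int) -> str:
    """Stand-in for some code under test."""
    return f"{amount / 1000:.3f} HIVE"

# Recorded input -> expected output pairs.
EXPECTED = {
    0: "0.000 HIVE",
    1: "0.001 HIVE",
    123456: "123.456 HIVE",
}

def test_format_balance():
    for amount, expected in EXPECTED.items():
        assert format_balance(amount) == expected

if __name__ == "__main__":
    test_format_balance()
    print("no regressions detected")
```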
Yes indeed, I am very familiar with all types of test. I suspected it was largely automated as that would make most sense for repeatable tests on iterative builds.
I think it sounds good, relative to old forks on the bygone chain.
The pedant in me would of course like to counter the claim that comprehensive testing is best done by software. In my experience, comprehensive testing is best done using automated testing backed up by manual testing for edge cases and the like :0)
It could be we're using two different definitions of manual test. When I referred to manual tests, I meant tests done by directly entering inputs to the software, either through a GUI or a command line, then observing the results visually. This is very "old school" testing, which I did a lot of early in my software career. Now we only use this kind of testing for UX testing (well, except for one project where the manager is still very old school and more of an engineer than a computer science guy).
But another view of automated versus manual is one where tests whose inputs are generated by testing software (with that software exploring the input space in some way) are considered automated tests, whereas tests whose inputs are specified by humans are considered manual tests, with the latter used to cover edge cases that weren't explored by the less intelligent but much larger set of inputs generated automatically.
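To illustrate that distinction, here's a hypothetical Python sketch (all names invented): the first test machine-generates thousands of inputs to explore the space, while the second uses a short list a human picked:

```python
import random

def transfer_fee(amount: int) -> int:
    """Hypothetical stand-in for some function under test."""
    return max(1, amount // 100)

def test_fee_generated_inputs():
    """'Automated' in this sense: the inputs are machine-generated."""
    rng = random.Random(42)  # fixed seed keeps the run repeatable
    for _ in range(10_000):
        amount = rng.randrange(0, 10**9)
        fee = transfer_fee(amount)
        # Check a simple invariant rather than exact values.
        assert fee >= 1
        assert fee <= amount or amount < 100

def test_fee_human_picked_inputs():
    """'Manual' in this sense: a human chose these boundary values."""
    cases = {0: 1, 1: 1, 99: 1, 100: 1, 101: 1, 10**9: 10**7}
    for amount, expected in cases.items():
        assert transfer_fee(amount) == expected

if __name__ == "__main__":
    test_fee_generated_inputs()
    test_fee_human_picked_inputs()
    print("both styles passed")
```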
If that's what you meant by manual (which is what I was led to believe by your use of the phrase "edge case"), then I think most of Hive's current tests are manual in this sense, but we code those sets of inputs into automated tests that can be re-run over and over again.
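So in that sense the edge-case inputs are human-chosen, but they get coded into something re-runnable, roughly like this hypothetical pytest sketch (again, not our actual tests):

```python
import pytest

def parse_amount(s: str) -> int:
    """Hypothetical stand-in for the code under test."""
    value = int(s)
    if value < 0:
        raise ValueError("negative amount")
    return value

# Human-chosen edge-case inputs, coded once so they re-run on every build.
@pytest.mark.parametrize("text,expected", [
    ("0", 0),                   # boundary: smallest legal amount
    ("1", 1),
    ("999999999", 999999999),   # large value
])
def test_parse_amount_edge_cases(text, expected):
    assert parse_amount(text) == expected

@pytest.mark.parametrize("bad", ["-1", "abc", ""])
def test_parse_amount_rejects_bad_input(bad):
    with pytest.raises(ValueError):
        parse_amount(bad)
```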
For the most substantial tests, the automated testing system then applies the same inputs to a golden reference model (which generally is the old version of the code) and to the current code under development, and reports differences in the results as potential problems.
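Roughly like this toy Python sketch of differential testing (both functions are hypothetical stand-ins, not the real code):

```python
import random

# Sketch of golden-reference testing: run the same inputs through the
# old, trusted implementation and the new one, and flag any differences
# as potential problems.

def compute_reward_old(shares: int, pool: int) -> int:
    """Golden reference: the previous, trusted version."""
    return (shares * pool) // 1000

def compute_reward_new(shares: int, pool: int) -> int:
    """Code under development; should match the reference."""
    return shares * pool // 1000

def test_new_matches_golden_reference():
    rng = random.Random(7)  # fixed seed keeps the test repeatable
    mismatches = []
    for _ in range(100_000):
        shares = rng.randrange(0, 10**6)
        pool = rng.randrange(0, 10**6)
        old = compute_reward_old(shares, pool)
        new = compute_reward_new(shares, pool)
        if old != new:
            mismatches.append((shares, pool, old, new))
    # Report differences in the results as potential problems.
    assert not mismatches, f"first mismatch: {mismatches[0]}"

if __name__ == "__main__":
    test_new_matches_golden_reference()
    print("new code matches the golden reference")
```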
I did a lot of that old school manual testing myself back in the day. It is surprising how prevalent it still is in the industry. I am currently working with a global organisation who pride themselves on being cutting edge when it comes to software delivery, and yet when you scratch the surface, everything they do is old school and waterfall-esque in nature. Advocating automation beyond the old school kind (e.g. tests run with a tool on a local install), in something like a CI/CD framework, is simply baffling to them.
But yes, I digress. The latter part of your reply is what I meant by manual. I often don't express myself best when I type on my phone!
You're doing pretty well. I hate composing on a cell phone...