If you want long-form content, it's easy enough to build a bot that filters out anything too short.
Often, though, a skilled writer can convey more in fewer words than an unskilled writer. And I've seen long posts on Steemit that offer nothing valuable, so I hope length doesn't become a determining factor. And this comment probably only serves to prove your point that, ultimately, human decision-making is needed.
There's also the "frequency" of how writers use words. Bots can only estimate, to an extent, what should be said; it takes an intellect with an actual human brain to precisely lay out what needs to be said and leave out what's irrelevant.
I wrote a long piece about writing and structuring concepts. It was fun, and it was brutal putting it together, but I believe it gives writers value and concept ideas they may be able to apply to their writing style.
It basically comes down to the reader and what he or she decides to do with the acquired info.
Either way... (it's powerful s--t!!) LOL.
https://steemit.com/steem/@jaye-irons/steemit-project-3rd-4th-and-5th-degree-fact-level-search-and-apply-concepts-jaye-s-way
That was the article I created; it talked about the various degree levels involved in searching for info one wants to apply to an eBook creation, story-telling analysis, or a simple how-to write-up.
And guess what, it NEVER garnered a dime. But the information there is powerful, and very intriguing.
False negatives are always going to be a problem when we use metrics that indicate but do not directly measure something. But in the case of AI assistants that control visibility for a human, a false negative will not be seen while a false positive will be. It's a tricky balance to get right: if the parameters are too coarse, there'll be too many false positives; too tight, and false negatives increase.
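The tradeoff above can be sketched with a toy length filter. This is a hypothetical illustration, not anyone's actual bot: `posts` is made-up labeled data, and "positive" here means the filter lets a post through (it gets shown). Loosening the word-count threshold lets more low-value posts through (false positives), while tightening it hides more genuinely good short posts (false negatives):

```python
# Toy data: (word_count, is_actually_valuable) -- entirely made up for illustration.
posts = [
    (40, True),     # short but dense and valuable
    (30, False),    # short and low-value
    (60, False),    # short and low-value
    (500, True),    # long and valuable
    (800, False),   # long but offers nothing valuable
    (1200, True),   # long and valuable
]

def evaluate(threshold):
    """Hide posts under `threshold` words; count the two kinds of mistakes."""
    # False positive: a low-value post passes the filter and is seen.
    fp = sum(1 for words, good in posts if words >= threshold and not good)
    # False negative: a valuable post is hidden and never seen.
    fn = sum(1 for words, good in posts if words < threshold and good)
    return fp, fn

for t in (50, 100, 1000):
    fp, fn = evaluate(t)
    print(f"threshold={t:4d}: bad posts shown (FP)={fp}, good posts hidden (FN)={fn}")
```

With the toy data, a coarse threshold of 50 shows two low-value posts, while a tight threshold of 1000 hides two valuable ones; no threshold gets both counts to zero, because length only indicates quality rather than measuring it.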