It's the automation threat that I fear most. It's not impossible to imagine that some computer genius, or perhaps a state-sponsored entity, could create a completely autonomous ransomware threat. Even if all the money gathered were given away to random people in a lottery, it would be just as threatening to businesses.
So the focus shouldn't be only on "cybercriminals" in the traditional sense, because the criminal could be the AI itself, and the writers of the software could be anonymous or, worse, an intelligence agency deliberately creating an unstoppable weapon.
Governments are going to continue funding these unstoppable weapons, and if there is a leak, those techniques can be used for any purpose by any entity. While decentralization is great in some ways, it also introduces risk in others: it makes attribution nearly impossible, and automation removes the human element. The humans might exist only at the edges, and the profit from ransomware networks might be funneled into legitimate areas in an entirely automated fashion as well. How would we stop that?
Yes, AI is coming to the world of cybersecurity. We are already seeing it being used on both sides to a limited extent. It is what we will all be talking about in the next few years. It holds unimaginable risks and opportunities!
Attribution has always been, and will always be, a problem unless you are meeting someone you know face-to-face. It is especially difficult when trading bits and bytes over networks never designed for security (security has been bolted onto the Internet after the fact). But that does not mean all is lost. This is a chess game: there are limitations, and each side can maneuver. Don't discount what will be possible in the future. I don't ever expect a total 'win' by either side, but the game will get more intense and there are so many surprises ahead. It will be a bumpy and exciting ride!