I don't work with them, personally, but it's my understanding that if you want to use your own training set with Deep Learning, you basically need thousands of samples, and your own computing cluster.
It depends on the task, but basically you are right. The training stage requires a lot of computational power. If someone implements Deep Learning on Amazon servers, they could easily spend a few million dollars on the server infrastructure alone.
It does require "a lot" of computing power. But considering the leaps and bounds we've made in the last few years, you would be amazed at what you can do with a normal desktop PC or even a smartphone these days.
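For a sense of scale, here is a minimal sketch (assuming PyTorch, and using a purely synthetic dataset I made up for illustration) of training a small classifier end to end; a workload like this finishes in seconds on an ordinary desktop CPU:

```python
# Minimal sketch: train a tiny classifier on synthetic data.
# Assumes PyTorch is installed; dataset and sizes are hypothetical.
import torch
import torch.nn as nn

# 1,000 random 20-dimensional samples with a simple binary label.
X = torch.randn(1000, 20)
y = (X.sum(dim=1) > 0).long()

model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(20):
    optimizer.zero_grad()
    logits = model(X)
    loss = loss_fn(logits, y)
    loss.backward()
    optimizer.step()

print(f"final training loss: {loss.item():.4f}")
```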
Yeah. Computers and processors are developing rapidly. Moreover, there are some amazing parallelization technologies that can substantially reduce the computational burden (see the sketch below).
I think in a few years, training a neural network will hardly be a problem at all.
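To make the parallelization point concrete, here is a minimal sketch, assuming PyTorch and a machine with more than one CUDA GPU (an assumption on my part), of batch-level data parallelism with nn.DataParallel, just one of many such approaches:

```python
# Minimal sketch: spread each batch across all visible GPUs.
# Assumes PyTorch; the model and batch below are hypothetical.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))

device = "cuda" if torch.cuda.is_available() else "cpu"
if torch.cuda.device_count() > 1:
    # Replicates the model on every visible GPU and splits each
    # input batch across them, gathering the outputs afterwards.
    model = nn.DataParallel(model)
model = model.to(device)

batch = torch.randn(256, 20).to(device)
logits = model(batch)  # forward pass is scattered across the GPUs transparently
```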