The next phase of AI, though, is going to be around inference, which I think will be pushed out as close to users as possible, with more than fifty percent of it happening on end devices. Whether that's an iPhone or a driverless car, you don't want to have to worry about network latency.