How big do the new AI clouds become? As large as Heroku, DigitalOcean, Snowflake, or AWS? What size of outcome, and what scale of utilization, should we expect for this class of company?
How does the AI stack evolve with very-long-context-window models? How do we think about the interplay between context windows and prompt engineering, fine-tuning, RAG, and inference costs?