Researchers have long referenced the Turing test (human imitation at its core) as the field's goalpost for meaningful machine intelligence.
Economists and think tanks forecast AI's disruption by reasoning about the labor force and what percentage of jobs becomes automatable at increasing capability thresholds.
Founders make a similar argument for the value of their AI businesses using labor-automation logic: “okay, X many people currently have this job... they cost Y per year... if we can automate that work, then our market size is on the order of X times Y annually.”
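That pitch logic can be sketched in a few lines. This is a toy illustration, not a real valuation model, and all the numbers below are hypothetical placeholders:

```python
# Minimal sketch of the labor-automation market-sizing logic:
# market size ~= (X workers in the role) * (Y annual cost per worker).

def addressable_market(num_workers: int, annual_cost_per_worker: float) -> float:
    """Return the 'X times Y' market-size estimate from the founder pitch."""
    return num_workers * annual_cost_per_worker

# Hypothetical role: 100,000 workers at $60,000/year each.
print(addressable_market(100_000, 60_000))  # -> 6000000000, i.e. a "$6B market"
```

Note how much the conclusion leans on the premise: the formula assumes full automation of the role and full capture of the wage bill, which is exactly the metaphor being questioned here.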
Investors assign speculative, sky-high valuations (the short-term scorecard for company success) to companies that fit this metaphor, creating a feedback loop that tells early seed investors, accelerators, and founders that these are good companies.
Silicon Valley thinkpieces abound with analogies to AI/agentic workers that can do work just like a human co-worker; take the scene's recent favorite AGI bull case, Leopold Aschenbrenner's Situational Awareness.