How has the AI computing landscape changed since November, and what does the growing focus on inference (including effects on deployment, platform requirements, and market winners) imply for different customers and use cases?
What challenges arise from the wide variety of inference workload requirements (are workloads all "snowflakes"), and how should those differences be handled?
How has Anthropic's rapid ARR growth affected inference compute requirements, and what challenges and optimizations will the AI industry face moving forward?
What is your investment focus in AI, and why are you investing in companies like Positron and Akash (particularly regarding AI-native infrastructure and inference power efficiency)?
How has NVIDIA's dominance of the GPU/HBM ecosystem influenced your hardware design choices—particularly your decisions about packaging (CoWoS) and memory types (HBM, SRAM, custom DRAM, LPDDR)?
How do you expect intelligence (AI) to diffuse from the core/data centers to the edge, and how is your company's platform designed to support that transition?
What aspects of model development (for example reasoning, continual learning, extending context length without degradation, or world simulation and alternative transformer architectures) are you most excited about, or see as most important for the future?