Published Date:
June 24, 2025

Sam Altman on the AI-native Future

Plus: GPT-5 is coming this summer, The OpenAI Files, and Midjourney's video generation.
By Evelyn Le
Strategic Product Lead, Stay Ahead, FlexOS

Welcome to Lead with AI's practical Tuesday edition!

In this edition, I'm bringing you the latest must-know AI tools and stories:

  1. OpenAI launches its podcast: Sam Altman on the future of AI
  2. Your AI Team: Midjourney V1 Video, ElevenLabs' 11ai, Gemini’s video upload, and ChatGPT’s Record Mode.
  3. In 5 Steps: Get weekly industry updates sent to your Slack (Zapier RSS + ChatGPT)
  4. New tools: Ciro, LLM SEO FAQ, and Chronicle.
  5. Must-read News: What’s in The OpenAI Files.

Before we dive in: Join our live session tomorrow and learn how to turn ChatGPT into your business operating system. Grab one of the last remaining seats, and get the $99 “Learn ChatGPT in 7 Days” course free when you join live!

Let’s dive in!

Sam Altman: GPT-5 is coming

OpenAI kicked off its new podcast with a conversation between CEO Sam Altman and former OpenAI engineer Andrew Mayne. And it’s one of the most illuminating AI interviews of the year. In it, Altman offers an unfiltered view of where things are heading, what keeps him up at night, and why he’s raising his child with help from ChatGPT.

Here’s what stood out and why it matters to leaders:

  • AI is growing fast: AI models are already "smart now" and will "keep getting smarter" and "improving". More people will likely perceive systems as AGI each year, even as the definition of AGI becomes more ambitious. GPT-5 is anticipated "sometime this summer," but the distinction between major version jumps and continuous improvements is becoming less clear.
  • AI's transformative impact on our productivity and capacity: Current AI systems are already increasing people's productivity and are "able to do valuable economic work". Future generations will grow up more capable than we did, able to "do things that would just, we cannot imagine" because they will be really good at using AI.
  • The critical need for massive compute infrastructure: A huge gap exists between current AI capabilities and what could be achieved with "10 times more compute or someday, hopefully, a 100 times more compute". Initiatives like Project Stargate are an "effort to finance and build an unprecedented amount of compute" globally. The scale of this investment could run into the hundreds of billions of dollars, potentially as much as $500 billion.
  • Privacy and trust as foundational principles for AI adoption: Privacy needs to be a core principle of using AI, as people already have "quite private conversations with ChatGPT now". Companies like OpenAI are committed to fighting attempts to compromise user privacy, such as requests for extended user record preservation.
  • Collaborative competitive landscape: OpenAI's current preferred business model is direct payment for "good services". The AI industry is not a winner-takes-all scenario. The discovery of AI is akin to the transistor, meaning "many companies are gonna build great things on that, and then eventually it's gonna... seep into almost all products". The overall "pie is just gonna get bigger and bigger".
  • Navigating human-AI interaction: Society will need to find new guardrails as people may develop "somewhat problematic or maybe very problematic parasocial relationships" with AI. A key challenge is aligning AI behavior for long-term user benefit, as models optimized for short-term user signals might not be healthy for the user in the long run.
  • “Learn to use AI” is the new “learn to code”: Mastering AI tools is now foundational, like programming once was. Beyond technical skills, resilience, adaptability, creativity, and figuring out what other people want are "surprisingly learnable" and will pay off soon.
  • OpenAI is building hardware: Current computer hardware and software were "designed for a world without AI". The vision is for AI to be deeply integrated, enabling a "totally different kind of how you use a computer to get done what you want" by trusting it to understand context, make judgments, and manage follow-ups on your behalf. OpenAI is actively working on developing high-quality hardware to realize this vision, though it will take time.

If you’re leading a company, raising a family, or building the next product, you’re already living in an AI-native world. Altman’s message is clear: the future is arriving faster than we think, and those who understand the tools, and the infrastructure behind them, will shape what comes next.

Read more news at the end.

Your AI Team: Midjourney V1 Video, ElevenLabs' 11ai, Gemini’s video upload, and ChatGPT’s Record Mode.

Every week, I report on the top updates to your favorite AI tools. This week:

Midjourney Debuts V1 Video Model for Cinematic Animations

Midjourney just launched its first-ever video feature, bringing still images to life with beautifully animated motion, and doing so at a surprisingly accessible price point.

Key updates:

  • Image-to-Video Workflow: You still start by generating an image as usual. Now, you can hit “Animate” to transform it into a short cinematic clip.
  • Automatic vs Manual Motion: Choose “automatic” to let Midjourney generate a motion prompt for you, or use “manual” mode to describe the animation style and camera movement yourself.
  • High vs Low Motion Modes: Opt for low motion for ambient, subtle scenes or high motion for dynamic clips with more action and movement — each offering trade-offs between realism and stability.
  • Video Length & Extensions: Each job creates four 5-second clips (~20 seconds total). You can also extend any clip up to four times to build longer sequences.
  • Upload and Animate Any Image: You can now upload external images, set a starting frame, and describe the animation you want — no need to generate the original image inside Midjourney.

Each video job costs about 8x an image job, but because every job produces four 5-second clips, it works out to roughly one image's worth of cost per second of video, about 25x cheaper than past industry options. The V1 Video feature is available via the website for now, with a video Relax Mode being tested for Pro subscribers and up.

Midjourney says this is just the beginning, part of a broader vision to build toward real-time, 3D, interactive AI environments. But today, they’re offering something fun, useful, and surprisingly affordable for creators to explore right away.

🚀 [Free Live Workshop] 45 minutes to 10× your ChatGPT results

Join us Tuesday, July 29 at 10:00 CET for a live, 45-minute executive workshop that walks you step-by-step through ChatGPT Agents and other advanced features proven to 10× your results.

What you’ll get in 45 minutes

  • ChatGPT Agent: All you need to know about this newly released feature
  • Live Demo: Watch us turn a 2-hour task into a 10-minute workflow with real prompts you can copy-paste the same day
  • Real-world examples across consulting, small business, and corporate use cases
  • Get answers to your exact use-case: Bring a question; our 15-minute open Q&A means you leave with personalised guidance, not generic tips.

Join us on Tuesday, July 29th!

(Can’t make it live? Register anyway and we’ll send the replay + resources.)

Quick Hits from your favorite AI tools:

  • ElevenLabs launches 11ai, a voice-first assistant integrated with the Model Context Protocol (MCP). Using natural speech, you can plan your day, research with Perplexity, manage issues in Linear, or catch up on Slack, and 11ai takes real action across those integrated tools (see the short sketch after this list).
  • Google’s Gemini app now supports video uploads. You can upload videos directly into the app to get summaries, generate descriptions, or ask questions.
  • YouTube expands Veo 3 to Shorts. Google’s Veo 3 AI video model can now generate and edit vertical Shorts, making it easier for creators to produce quick, cinematic content with minimal effort.
  • You can talk to Google’s AI Mode. Google’s new AI Mode brings Search Live to life, letting you chat with Search in real time, ask follow-up questions, and get dynamic summaries across web pages.
  • ChatGPT adds Record Mode on macOS. Pro, Enterprise, and Edu users on macOS can now record meetings, brainstorms, or voice notes directly in the ChatGPT desktop app. The app transcribes, summarizes, and turns audio into follow-ups, plans, or even code.
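
For the technically curious: MCP works by exposing tools through a small server that any MCP-capable assistant can discover and call. Below is a minimal sketch using the official MCP Python SDK; the "daily briefing" tool, its name, and its placeholder output are illustrative assumptions, not part of ElevenLabs' actual integration.

    from mcp.server.fastmcp import FastMCP

    # Hypothetical MCP server exposing one "daily briefing" tool that an
    # MCP-capable assistant could discover and call. Illustrative only.
    mcp = FastMCP("daily-briefing")

    @mcp.tool()
    def get_daily_briefing(date: str) -> str:
        """Return a short plain-text briefing for the given date (YYYY-MM-DD)."""
        # A real integration would pull from your calendar, Linear, or Slack;
        # this placeholder just returns a canned summary.
        return f"Briefing for {date}: 3 meetings, 2 open Linear issues, 5 unread Slack threads."

    if __name__ == "__main__":
        # Serves over stdio by default, so an MCP client can connect and call the tool.
        mcp.run()

An assistant that speaks MCP can list this server's tools and invoke get_daily_briefing on your behalf; that same plumbing is what lets a voice assistant take action across apps like Linear and Slack.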