Weekly Journal: Oakley Meta HSTN Glasses Announced
[6 min read] Your weekend guide to getting ahead on the digital frontier. The glasses feature an ultra-HD 3K video camera, open-ear speakers, and integrated Meta AI, a hands-free voice assistant.
Welcome to this week’s Weekly Journal 📔, your guide to the latest news & innovation in emerging technology, digital assets, and our exciting path to the Metaverse. This is week 133 of the 520 weeks of newsletters I have committed to: a decade of documenting how our physical and digital lives converge. New subscribers are encouraged to check out the history & purpose of this newsletter as well as the archive.
- Ryan
🌐 Digital Assets Market Update
To me, the Metaverse is the convergence of physical & virtual lives. As we work, play and socialise in virtual worlds, we need virtual currencies & assets. These have now reached mainstream finance as a defined asset class:
🔥🗺️ The heat map shows the 7-day change in price (red down, green up); block size is market cap. BTC has dropped but is still holding above US$100k.
🎭 The Crypto Fear and Greed Index offers insight into the underlying psychological forces that drive the market’s volatility. Sentiment reveals itself across various channels—from social media activity to Google search trends—and when analysed alongside market data, these signals provide meaningful insight into the prevailing investment climate. The Fear & Greed Index aggregates these inputs, assigning a weighted value to each, and distils them into a single, unified score.
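As a rough illustration of how an index like this might combine weighted inputs into one score, here is a minimal Python sketch. The signal names, values, and weights below are purely illustrative assumptions, not the index's actual methodology.

```python
# Illustrative sketch: combine several sentiment signals (each already
# normalised to a 0-100 scale) into a single weighted composite score.
# Signal names and weights are hypothetical, not the real index formula.

def composite_score(signals: dict, weights: dict) -> float:
    """Weighted average of normalised (0-100) sentiment signals."""
    total_weight = sum(weights[name] for name in signals)
    weighted_sum = sum(signals[name] * weights[name] for name in signals)
    return weighted_sum / total_weight

# Example inputs: three made-up channels, scaled 0 (extreme fear)
# to 100 (extreme greed).
signals = {"volatility": 30.0, "social_media": 70.0, "search_trends": 55.0}
weights = {"volatility": 0.40, "social_media": 0.35, "search_trends": 0.25}

score = composite_score(signals, weights)  # higher = greed, lower = fear
```

The key idea is simply that no single channel decides the score; each contributes in proportion to its assigned weight.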
🗞️ Metaverse news from this week:
Oakley and Meta Unveil Next-Gen AI Glasses: Performance, Design, and the Future of Wearable Tech
In a bold step that fuses sports heritage with cutting-edge spatial computing, Oakley and Meta have announced the launch of Oakley Meta Glasses — a new category of AI-powered performance eyewear designed to amplify human potential on and off the field. Marking Oakley’s 50th anniversary, this launch signals a deepening of Meta’s push to make smart glasses the next mainstream hardware wave of the AI era. Following the runaway success of Ray-Ban Meta smart glasses, the Oakley partnership brings performance-focused design to athletes and everyday adventurers alike.
The flagship model, Oakley Meta HSTN, pairs Oakley’s iconic style with Meta’s latest AI and spatial computing innovations. The glasses feature an ultra-HD 3K video camera, open-ear speakers, and integrated Meta AI — a hands-free voice assistant capable of giving real-time answers, weather updates, and sport-specific insights. Athletes can check wind speeds mid-golf swing or record highlight reels just by saying, “Hey Meta, take a video.”
This push comes as smart eyewear emerges as a promising, less cumbersome gateway to the broader metaverse vision: a seamless bridge between physical and digital worlds, experienced not through bulky VR headsets but through lightweight, socially acceptable glasses. “Together with Meta, we are setting new bounds of reality,” said Caio Amato, Oakley’s Global President. “It’s about pushing performance and unlocking human potential in ways never seen before.”
Rocco Basilico, Chief Wearables Officer at EssilorLuxottica, which owns Oakley, said the glasses reflect an ambitious multi-brand strategy to create a connected eyewear ecosystem spanning sports, lifestyle, and entertainment. Available in six lens and frame combinations — all Rx-ready — the Oakley Meta HSTN will debut with a limited edition in July for $499 USD, with broader models starting at $399 later this summer across North America, Europe, and parts of Asia-Pacific.
The glasses will be showcased by Team Oakley stars Kylian Mbappé and Patrick Mahomes in a global campaign, and will appear at major sporting events starting this month, underscoring how wearable AI and spatial computing are reshaping what the metaverse can mean in daily life. In this next chapter for wearable tech, Oakley and Meta aim to move the metaverse off the couch and onto the playing field — no headset required.
Virtual Reality and the Metaverse: A Future Still Under Construction
Listen to the interview 👆🏻. Virtual reality was once promised to be the future not only of video games, but of social media. Is that future still possible? NPR's Ailsa Chang talks to Vishal Shah, Meta's Vice President of the Metaverse, to find out.
A decade ago, virtual reality (VR) and the metaverse were hyped as the next frontiers of human connection and digital life. Facebook’s $2 billion acquisition of Oculus in 2014, and its bold rebranding to Meta in 2021, signalled an audacious bet that people would soon live, work, and play inside immersive virtual worlds.
But in 2025, the reality is more sobering than science fiction. As Meta’s own Vishal Shah, Vice President of the Metaverse, candidly admits, the grand vision remains unrealised: "The hype around the metaverse is dead — and that’s good."
Today, despite Meta having sold around 20 million VR headsets, most are still used for gaming — a market limited by practical hurdles like cost and the notorious motion sickness that first-time users (like NPR’s Ailsa Chang) often experience. Outside gaming, the promise of millions gathering for work meetings, concerts, or everyday socialising in VR remains niche.
Yet, Shah argues the vision isn’t abandoned — just on a longer, more grounded timeline. Meta still believes VR is the only tech that truly makes people feel they’re sharing a room with someone far away. Social use cases — watching movies together, attending virtual events — are slowly growing, even if they’re not mainstream yet.
Meanwhile, a shift is underway: Meta’s focus is increasingly on artificial intelligence. Massive capital investments ($60–65 billion planned for 2025) are going into AI research and infrastructure, subtly reframing Meta’s futuristic ambitions. Critics, like Oculus founder Palmer Luckey, have noted that the pivot to AI reflects how the narrative around the metaverse is being repackaged to align with investor confidence in tangible tech.
So where does that leave VR and the metaverse? Not dead, but evolving. As Shah puts it, the true goal is timeless: to shrink the distance between people. For billions with limited access to quality schools, safe communities, or global connections, an immersive, boundary-free internet could still be transformative, even if it’s not yet the everyday reality that Zuckerberg once promised.
👓 Read of the Week: "A.I. Sludge Has Entered the Job Search"
Published June 21, 2025 | The New York Times
In today’s race for jobs, humans and bots are flooding recruiters with an unprecedented torrent of applications — and the culprit is a new form of digital pollution: A.I.-generated sludge.
Key insights:
💼 An Application Tsunami
Fully remote roles at tech companies now attract over 1,200 applications in a day, many of them generated by ChatGPT and automated job-hunting agents. LinkedIn alone processes 11,000 applications per minute — a 45% jump in a year.
🤖 A.I. vs. A.I. in Hiring
Recruiters use bots to screen candidates. Candidates use bots to auto-apply and pass video interviews. The result? A bizarre arms race where automated résumés face automated interviewers — and both sides suspect each other of gaming the system.
🔍 Fake Identities Rising
Beyond bland résumés, there’s an uptick in applicants using fabricated identities. Gartner predicts that by 2028, 1 in 4 job applicants could be fake, pushing firms to adopt stricter identity verification.
📉 Candidates & Recruiters Both Frustrated
Job seekers spend hours crafting personalised applications — only to compete with bot-blasted spam. Recruiters are so overwhelmed that some are taking down listings early, or skipping public postings altogether.
🛡️ Regulatory & Ethical Headaches
A.I. in hiring remains a legal grey zone in the U.S., but is tightly regulated in the EU’s A.I. Act as a high-risk use case. Lawsuits about bias are starting to pile up as well.
🔑 The Inevitable Reset?
Experts like Jeremy Schifeling predict this A.I. arms race can’t go on forever: “Eventually, authenticity will win. But until then, expect wasted time, money, and computing power on both sides.”
🎥 Watch of the week:
Gemini Robotics brings Gemini 2.0 to the physical world through its most advanced vision-language-action model to date. This breakthrough enables robots that are highly interactive, dexterous, and truly general-purpose. Learn more about how Gemini is powering the next generation of robotic AI agents at deepmind.google/robotics.
AI Showcase🎨🤖🎵✍🏼: Midjourney Enters the AI Video Race with V1
In the Metaverse, AI will be critical for creating intelligent virtual environments and avatars that can understand and respond to users with human-like cognition and natural interactions.
This week, a big splash in the creative AI world: Midjourney, known for its wildly popular image generation bots, has launched its first AI video model, called V1.
What is it?
Midjourney’s V1 is an image-to-video model. Users upload an image — whether it’s an original photo or a Midjourney-generated piece — and V1 transforms it into four short, five-second videos, each with its own twist. True to Midjourney’s style, the results so far look surreal, dreamy, and more artistic than photo-realistic.
How does it work?
Available only through Discord, like all Midjourney tools.
Runs in the browser for now — no standalone app yet.
Offers custom controls: you can let the AI animate automatically, or give it a written prompt describing how you want things to move.
Adjust motion: pick “low motion” or “high motion” for subtle or dramatic animations.
Extend video length: each clip starts at five seconds, but you can add up to four more 4-second extensions, maxing out at 21 seconds per clip.
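The clip-length rule above is easy to sanity-check: 5 seconds to start, plus up to four 4-second extensions. A tiny Python sketch (the function name and structure are illustrative, not part of Midjourney's API):

```python
# Quick check of the V1 clip-length rule described above: each clip
# starts at 5 seconds and allows up to four 4-second extensions.

BASE_SECONDS = 5
EXTENSION_SECONDS = 4
MAX_EXTENSIONS = 4

def clip_length(extensions: int) -> int:
    """Total clip length in seconds after a given number of extensions."""
    if not 0 <= extensions <= MAX_EXTENSIONS:
        raise ValueError("V1 allows between 0 and 4 extensions per clip")
    return BASE_SECONDS + extensions * EXTENSION_SECONDS

# 5 + 4 * 4 = 21, matching the 21-second maximum quoted above.
```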
How much does it cost?
V1 is pricier than generating still images — about eight times more credits per video. The cheapest option is Midjourney’s $10/month Basic plan, but heavy users might prefer the $60 Pro or $120 Mega plans for unlimited video generations (in slower “Relax” mode).
Why does this matter?
Midjourney’s move puts it head-to-head with big names like OpenAI’s Sora, Runway’s Gen-4, Adobe’s Firefly, and Google’s Veo 3 — all racing to define what AI video can do. But unlike some rivals, Midjourney’s ambition is bigger than making ad clips: CEO David Holz says video generation is just a step towards building real-time open-world simulations — think AI-powered, living virtual worlds.
One twist:
Midjourney just launched V1 days after being sued by Disney and Universal for AI-generated images of copyrighted characters like Darth Vader and Homer Simpson. This lawsuit highlights a growing battle between Hollywood and generative AI companies over copyright, fair use, and artistic boundaries.
The takeaway:
With V1, Midjourney is inviting its huge community to experiment with short, trippy AI video clips — and hinting at an even more immersive creative future. It’s another sign that the lines between artist, animator, and machine are blurring fast.
That’s all for this week! If you have any organisations in mind that could benefit from keynotes about emerging technology, be sure to reach out. Public speaking is one of many services I offer.