šŸ‘¾ Prompt Hacks, AI App Coding & More

Everything you need to know about AI this week

/// System Override

A tool, shift, or insight that challenges the default and demands a deeper look. Our main story of the week.

Welcome to another edition of The AI Upload!

For this edition, I'm going to explore a prompt hack I found for Midjourney, as well as cover a few new breakthroughs and tools in the AI space. As usual, things are moving fast, so there's a lot to explore here.

Let’s dive in.

To The Future,
Cory

Tricking Midjourney Into Producing Authentic Photos

I recently came across this article on PetaPixel that breaks down how some Midjourney users are using a specific prompt hack to get photos that look genuinely authentic, as if they were taken in a real setting, by a real person, with a digital camera, rather than staged for a professional photoshoot.

To explain this in some detail, Levi Forster said this on X:

"AI is trained on photos. Many real photos have file names like IMG_02202021.HEIC — so if you feed it just a typical file name in the prompt and that's it, it will generate a realistic looking picture because the data that corresponds is always real."

Let’s put this to the test.

For the first example, here’s the prompt I used as the control:

"Create an image of a mountain landscape from the perspective of a hiker on a trail in a digital camera style"

Here’s what I received as the first output:

Sure, it's a nice image, but it doesn't really check the boxes I was looking for. First, it's not 'from the perspective of a hiker', but rather a photo that includes a hiker. Second, this is certainly not a digital camera style; it looks more like some form of painting.

It serves as a strong control for this use case. Now, let's try the prompt hack from above. Here's how I adjusted the prompt:

"IMG_190282009.HEIC mountain landscape from the perspective of a hiker on a trail"

And now, the output:

A huge improvement. This actually checks the boxes: it looks like a digital camera output, and it's shot from the hiker's perspective. My assumption here is that since Midjourney knows to replicate an image that would carry that filename, it infers the image should be the output of a camera that someone is physically holding. Pretty great line of logic there.

Let’s do one more to confirm the results:

Before vs After, using these two prompts:

"Create an image of 3 friends standing on a street corner outside of a convenience store in New York City in a digital camera style"

"IMG_93282007.HEIC 3 friends standing on a street corner outside of a convenience store in New York City"

Again, definitely an improvement in results.
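If you want to batch-test this hack yourself, the prompt is easy to build programmatically. Here's a minimal sketch; the helper name, the IMG_MMDDYYYY.HEIC pattern, and the date range are my own choices, modeled on the examples above:

```python
import random

def filename_style_prompt(scene: str) -> str:
    """Build a Midjourney prompt that mimics a real camera file name.

    Older dates and the .HEIC extension suggest casual phone/camera
    snapshots rather than polished photoshoots.
    """
    month = random.randint(1, 12)
    day = random.randint(1, 28)        # stays valid for every month
    year = random.randint(2005, 2021)  # plausible snapshot era
    return f"IMG_{month:02d}{day:02d}{year}.HEIC {scene}"

prompt = filename_style_prompt(
    "mountain landscape from the perspective of a hiker on a trail"
)
print(prompt)
```

Paste the resulting string straight into Midjourney as the entire prompt; per the hack, the file name alone does the stylistic heavy lifting.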

A quick and important side note: I used v6.1 for these images, but the article shows examples from v7, which produces more realistic-looking humans. If I were using v6.1 in practice, I'd use it to produce scenes and sets rather than models. If you have the patience to get through the personalization required to unlock v7, this may not be an issue for you.

Overall, though, I think this works really well in execution! If you use this prompt hack for yourself, share your results! I’d love to see them.

/// Patch Notes

A log of new tools, major announcements, and news worth knowing, compressed for fast parsing.

ChatGPT Gets Memory

ChatGPT just got a memory boost—and it's all about you. Now, it remembers not just what you ask it to remember, but also patterns from your chat history to tailor responses more precisely. You’re still fully in control: toggle memory off, delete specifics, or use temporary chats that don’t affect memory. Memory works across sessions, learning things like your tone preferences or recurring topics. It's already live for Plus and Pro users, with broader rollout coming soon to Teams and Enterprise. Transparency, user control, and privacy remain front and center in how memory is managed.

Did Meta Lie About Their AI Benchmarks?

Meta's latest AI, the "vanilla" Llama 4 Maverick model, did not pass the vibe check. After being caught using a benchmark-optimized experimental version to secure a top spot on LM Arena, the unaltered version was finally tested—and landed in a lowly 32nd place, far behind rivals like GPT-4o, Claude 3.5, and Gemini 1.5 Pro. Meta claimed the prior version was "optimized for conversationality," but critics argue gaming benchmarks muddies real-world performance expectations. Meta has now open-sourced the release version, inviting developers to tweak it themselves—hopefully with less drama next time.

ByteDance’s Seed-Thinking-v1.5 AI

TikTok’s parent ByteDance is diving headfirst into the AI arena with Seed-Thinking-v1.5, a new reasoning-focused large language model built on Mixture-of-Experts architecture. Designed to excel in STEM and general domains, it activates just 20 billion of its 200 billion parameters at a time for efficiency. The model impresses on benchmarks like AIME 2024 and ARC-AGI, rivaling OpenAI and Google’s latest. ByteDance also introduced harder math tests like BeyondAIME to push model limits. With custom RL techniques and a dual-verifier system, Seed-Thinking aims for precision and resilience—poised to compete with the AI elite.
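ByteDance hasn't published Seed-Thinking's internals; the toy sketch below just illustrates the generic top-k gating idea behind Mixture-of-Experts, where only a few experts fire per token (here 2 of 10, mirroring the ~20B-of-200B ratio). All names and numbers are my own illustration:

```python
import math
import random

def top_k_route(gate_scores, k=2):
    """Toy MoE routing: pick the k highest-scoring experts and
    softmax-normalize their scores into mixing weights."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    chosen = ranked[:k]
    exps = [math.exp(gate_scores[i]) for i in chosen]
    total = sum(exps)
    # Weights over the chosen experts sum to 1; all other experts
    # stay inactive, so most parameters are skipped for this token.
    return {i: exps[j] / total for j, i in enumerate(chosen)}

scores = [random.gauss(0, 1) for _ in range(10)]  # one gate score per expert
print(top_k_route(scores, k=2))
```

The efficiency win is exactly this sparsity: per token, compute scales with the k active experts rather than with the full parameter count.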

Vercel’s v0 Breakdown

Vercel's CEO Guillermo Rauch recently sat down on Lenny's Podcast to talk about v0, their AI app-building tool, and explore how he uses it. Timestamped below, he actually screen-shares how he uses the tool to create forms, build websites from screenshots, and more.

Thanks for reading this week’s edition of The AI Upload. Is there anything you want to see added? Something we missed? We want to hear your feedback! Simply reply to this email and we will read it. Thank you for your support!