Prompt Hacks, AI App Coding & More
Everything you need to know about AI this week

/// System Override
A tool, shift, or insight that challenges the default and demands a deeper look. Our main story of the week.
Welcome to another edition of The AI Upload!
For this edition, I am going to explore a prompt hack I found for Midjourney, as well as cover a few new breakthroughs and tools in the AI space. As usual, things are moving fast, so there's a lot to explore here.
Let's dive in.
To The Future,
Cory
Tricking Midjourney Into Producing Authentic Photos
I recently came across this article on PetaPixel that breaks down how some Midjourney users are using a specific prompt hack to get photos that look genuinely authentic - as if they were taken in a real setting, by a real person, using a digital camera, rather than a professional photoshoot.
Levi Forster explained the logic behind this on X:
"AI is trained on photos. Many real photos have file names like IMG_02202021.HEIC - so if you feed it just a typical file name in the prompt and that's it, it will generate a realistic looking picture because the data that corresponds is always real."
Let's put this to the test.
For the first example, hereās the prompt I used as the control:
Create an image of a mountain landscape from the perspective of a hiker on a trail in a digital camera style
Hereās what I received as the first output:

Sure, it's a nice image, but it doesn't really check the boxes I was looking for. First, it's not "from the perspective of a hiker", but rather a photo including a hiker. Second, this is certainly not a digital camera style, but rather some form of painting.
Seems like a strong control for this use case. Now, let's try the prompt hack from above. Here's how I adjusted the prompt to accommodate this:
IMG_190282009.HEIC mountain landscape from the perspective of a hiker on a trail
And now, the output:

A huge improvement. This actually checks the boxes of looking like a digital camera output as well as being shot from the hiker's perspective. My assumption here is that because Midjourney associates that filename pattern with real photo files, it reproduces the look of a shot someone physically took with a handheld camera. Pretty sound line of logic.
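If you want to try variations of this trick without typing the digits by hand, a tiny helper can stamp any scene description with a plausible camera filename. This is just a convenience sketch following the IMG_/date pattern from the examples above; it is not part of any Midjourney API, and the function name is my own:

```python
import random

def camera_style_prompt(description: str, ext: str = "HEIC") -> str:
    """Prefix a scene description with a camera-style filename.

    Real photo filenames often look like IMG_02202021.HEIC
    (month, day, year), so we generate random date-like digits
    to mimic that pattern and nudge the model toward candid,
    photorealistic output.
    """
    month = random.randint(1, 12)
    day = random.randint(1, 28)
    year = random.randint(2005, 2021)
    filename = f"IMG_{month:02d}{day:02d}{year}.{ext}"
    return f"{filename} {description}"

# Example: produces something like
# "IMG_07142013.HEIC mountain landscape from the perspective of a hiker on a trail"
print(camera_style_prompt("mountain landscape from the perspective of a hiker on a trail"))
```

Paste the resulting string directly into Midjourney as the entire prompt; per the hack, adding extra stylistic language tends to dilute the effect.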
Let's do one more to confirm the results:
Before vs After, using these two prompts:

| Before | After |
| --- | --- |
| Create an image of 3 friends standing on a street corner outside of a convenience store in New York City in a digital camera style | IMG_93282007.HEIC 3 friends standing on a street corner outside of a convenience store in New York City |
Again, definitely an improvement in results.
A quick and important side note: I am using v6.1 for these images, but the article shows examples from v7, which produces more realistic-looking humans. If I were using v6.1 in practice, I'd use it to produce scenes and sets rather than models. If you have the patience to get through the personalization required to unlock v7, this might not be an issue for you.
Overall, though, I think this works really well in execution! If you use this prompt hack for yourself, share your results! I'd love to see them.

/// Patch Notes
A log of new tools, major announcements, and news worth knowing, compressed for fast parsing.
ChatGPT Gets Memory
ChatGPT just got a memory boost, and it's all about you. Now it remembers not just what you ask it to remember, but also patterns from your chat history, to tailor responses more precisely. You're still fully in control: toggle memory off, delete specifics, or use temporary chats that don't affect memory. Memory works across sessions, learning things like your tone preferences or recurring topics. It's already live for Plus and Pro users, with broader rollout coming soon to Teams and Enterprise. Transparency, user control, and privacy remain front and center in how memory is managed.
Did Meta Lie About Their AI Benchmarks?
Meta's latest AI, the "vanilla" Llama 4 Maverick model, did not pass the vibe check. After Meta was caught using a benchmark-optimized experimental version to secure a top spot on LM Arena, the unaltered version was finally tested and landed in a lowly 32nd place, far behind rivals like GPT-4o, Claude 3.5, and Gemini 1.5 Pro. Meta claimed the prior version was "optimized for conversationality," but critics argue that gaming benchmarks muddies real-world performance expectations. Meta has now open-sourced the release version, inviting developers to tweak it themselves, hopefully with less drama next time.
ByteDanceās Seed-Thinking-v1.5 AI
TikTok's parent ByteDance is diving headfirst into the AI arena with Seed-Thinking-v1.5, a new reasoning-focused large language model built on a Mixture-of-Experts architecture. Designed to excel in STEM and general domains, it activates just 20 billion of its 200 billion parameters at a time for efficiency. The model impresses on benchmarks like AIME 2024 and ARC-AGI, rivaling OpenAI's and Google's latest. ByteDance also introduced harder math tests like BeyondAIME to push model limits. With custom RL techniques and a dual-verifier system, Seed-Thinking aims for precision and resilience, poised to compete with the AI elite.
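The "20 billion of 200 billion parameters" figure comes from how Mixture-of-Experts routing works: a gating network scores every expert, but only the top few actually run for each input. Here's a toy sketch of that idea in plain Python; the experts, scores, and function names are illustrative stand-ins, since ByteDance's actual gating design isn't public in this level of detail:

```python
def moe_forward(x, experts, gate_scores, k=2):
    """Route an input through only the top-k experts (toy sketch).

    In a real MoE model, each 'expert' is a large neural sub-network
    and the gate scores come from a learned router. Because only k of
    the experts execute per token, most parameters stay idle, which
    is how a 200B-parameter model can run with ~20B active.
    """
    # Indices of the k highest-scoring experts
    top = sorted(range(len(experts)), key=lambda i: gate_scores[i], reverse=True)[:k]
    total = sum(gate_scores[i] for i in top)
    # Weighted combination of just the selected experts' outputs;
    # the other experts are never evaluated at all.
    return sum((gate_scores[i] / total) * experts[i](x) for i in top)

# Four toy "experts" (simple functions standing in for sub-networks)
experts = [lambda x: x + 1, lambda x: 2 * x, lambda x: x * x, lambda x: -x]
scores = [0.1, 0.6, 0.3, 0.05]

print(moe_forward(5.0, experts, scores))  # → 15.0 (only experts 1 and 2 run)
```

With k=2 here, only the two highest-scoring experts evaluate the input, and their outputs are blended in proportion to their (renormalized) gate scores.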
Vercelās v0 Breakdown
Vercel's CEO Guillermo Rauch recently sat down on Lenny's Podcast to talk about v0, their AI app-building tool, and explore how he uses it. In the timestamped episode below, he actually screen-shares how he uses the tool to create forms, build websites from screenshots, and more.

Thanks for reading this week's edition of The AI Upload. Is there anything you want to see added? Something we missed? We want to hear your feedback! Simply reply to this email and we will read it. Thank you for your support!