
What we learned from shooting 10,000 professional headshots

After more than 10,000 professional headshots taken in a controlled studio environment, the patterns are no longer opinions. They're rules.

Joseph West · 14 min read

This is a study without a control group. We didn't run a double-blind experiment on 10,000 faces in a lab. What we ran was a photography studio — specifically, Studio Pod, the automated headshot studio we built in Houston, Texas, where we've now photographed more than ten thousand real professionals in the same controlled conditions.

That volume, combined with the consistency of the setup, turned out to be a kind of research instrument. When you photograph enough people using the same lights, same camera position, same software-controlled process, the patterns separate themselves from the noise. What makes a headshot work and what makes it fail stop being opinions and start being observations you can't un-see.

This is the framework that emerged. We published it because it's also the framework our AI was trained on — and understanding it is the best way to explain why AI Headshots produces the results it produces, and why the rest of the AI headshot category often doesn't.

Why this study exists

When we started Studio Pod, we didn't set out to collect data. We set out to make professional headshots fast. The goal was operational: walk in, stand on the mark, walk out with a polished photo in twenty minutes.

But running the same photography process across thousands of people — doctors at the Texas Medical Center, attorneys Downtown, realtors in the Heights, oil and gas executives in the Energy Corridor, founders in EaDo — exposes something you can't see when you're shooting 20 weddings a year. You see the distribution. You see what the median face does wrong. You see the difference between the photos clients love and the photos they quietly replace three months later. You see the failures that cluster and the successes that don't quite repeat.

Over time, the patterns became impossible to ignore. We started filing them. When we moved into AI, they became the scaffolding for what our model learned.

How we collected the observations

A few ground rules for what follows:

  • Qualitative, not quantitative. We're not reporting percentages. When we say "most professionals" or "the majority of the time," we mean it directionally based on shooting thousands of sessions — not as a measured statistic.
  • Controlled conditions. Every observation below is from photos shot with the same lighting rig, same camera position, same backdrop variety, same direction given to every subject. That consistency is what makes the patterns legible in the first place.
  • Professional headshots only. This isn't a study of family photography, weddings, or editorial portraits. The register is professional: LinkedIn, team pages, hospital directories, law firm bios, realtor yard signs, press materials. The findings generalize within that register.
  • Houston-centric sample. Our subjects skew Houston professional, which means a meaningful mix of industries (energy, medical, legal, tech, real estate, corporate) and strong demographic diversity, thanks to the city itself.

None of this is peer-reviewed science. It's field observation at scale. Which is often how durable rules get made.

Pattern 1: Everybody underestimates lighting

The most consistent single variable in whether a headshot works or fails is lighting. Not the subject's appearance, not the clothing, not the pose — lighting. And professionals almost uniformly underestimate how much it matters.

The best illustration of this is what happens when someone who hates their existing headshot comes in and gets one taken in our studio. They sometimes don't even recognize themselves. "I look so much better here," they say. They've been blaming their face for what was actually their lighting — the overhead fluorescent of their office, the harsh mid-afternoon window in a conference room, the phone flash at arm's length.

The pattern we see over and over: a person's actual face is photogenic. Their selfies aren't. The delta between the two is almost entirely lighting.

What this means for AI-generated headshots: most AI tools take bad input and produce a variation on the same bad input. Ours doesn't, because it was trained on thousands of faces captured in professional lighting. The model learned what faces look like under correct light, and it can apply that knowledge to new selfie inputs. This is the single biggest technical advantage of training on studio photography versus scraped internet data.

Pattern 2: The expression spectrum

The second most consistent variable is expression — specifically, where on the "smile spectrum" the subject sits. We identified five clear positions:

  1. Stone-faced. No expression. Reads as severe, unfriendly, guarded. Almost never chosen by clients for professional use.
  2. Neutral. Controlled, no visible expression, but no tension either. Reads as corporate but cold. Works for certain legal and financial contexts, fails almost everywhere else.
  3. Quarter-smile. The corners of the mouth lifted 10%, slight tension around the eyes. This is where almost every successful professional headshot lives.
  4. Genuine smile. Teeth visible, eyes fully engaged, clearly amused or warm. Works for certain industries (realtors, hospitality, teaching) where warmth is part of the product.
  5. Forced grin. The subject performing "happy" on command. Reads as inauthentic and instantly dates the photo.

The quarter-smile is the register most professionals should target. Most don't, because most of us don't know it's the register; we either go too flat or too performative. When we direct Studio Pod subjects explicitly toward the quarter-smile, the rate at which they pick those frames as their final photo jumps dramatically.
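
If it helps to see the spectrum as data, here's a minimal sketch that treats the five positions as bands on a single smile-intensity score. The score, the band boundaries, and the names are assumptions made for illustration, not measurements from the study.

```python
# Illustrative sketch only: the five-position spectrum as bands on a
# smile-intensity score in [0, 1]. Boundaries are assumed, not measured.
# (Treating the spectrum as purely ordinal is itself a simplification:
# a forced grin differs from a genuine smile in authenticity, not just
# intensity.)

BANDS = [
    (0.05, "stone-faced"),    # no expression at all
    (0.20, "neutral"),        # controlled, but cold
    (0.45, "quarter-smile"),  # corners lifted slightly, eyes engaged: the target
    (0.80, "genuine smile"),  # teeth visible, clearly warm
    (1.00, "forced grin"),    # "happy" performed on command
]

def classify(smile_intensity: float) -> str:
    """Map a smile-intensity score in [0, 1] to a spectrum band."""
    for upper_bound, band in BANDS:
        if smile_intensity <= upper_bound:
            return band
    return BANDS[-1][1]

assert classify(0.30) == "quarter-smile"
```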

What this means for AI-generated headshots: AI tools trained on scraped internet photos pull a lot from LinkedIn, which means a lot of forced grins and a lot of neutral corporate looks. Our training data is weighted toward the quarter-smile because that's what Studio Pod directs for — and that's why the expressions in AI Headshots outputs read as more natural than competitors'.

Pattern 3: Clothing matters less than you think

This one surprised us. Everybody fusses about what to wear. Should I wear a suit? A blouse? What color? What neckline? In truth, clothing is one of the smaller variables in whether a professional headshot succeeds.

The hierarchy, in our experience, looks something like this:

  1. Lighting (the dominant variable)
  2. Expression
  3. Head/shoulder angle
  4. Background treatment
  5. Crop/framing
  6. Wardrobe (less important than most of the above)
  7. Hair
  8. Makeup

Clothing matters less because the headshot's job is to get the viewer to look at the face. Anything that doesn't compete with the face is fine. A solid color. A simple collar. A textured but non-patterned top. Once those conditions are met, the specific garment barely moves the quality of the headshot.

Where clothing DOES matter: when it actively competes with the face. Logos on the chest, busy patterns, bright neons, high-contrast stripes. These pull attention away from the face. Everything else is noise.

What this means for AI-generated headshots: our AI offers wardrobe variety because clients ask for it — corporate, casual, creative, executive — but we let clients know that the wardrobe choice is the least impactful variable. The lighting, expression, and crop are what make the photo work. Clothing is the decoration on top.

Pattern 4: Industry register matters more than clothing

Here's the flip side of the clothing finding: clothing doesn't matter much, but register matters enormously. Register is the overall visual gestalt that signals what industry someone works in: lighting mood, background tone, posture, expression formality, crop style.

The differences are subtle but legible at a glance:

  • Lawyers and executives need a register of contained authority: darker backgrounds, dark or navy attire, formal posture, quarter-smile at the tempered end.
  • Doctors and medical professionals need trustworthy approachability: slightly warmer tones, soft light, visible warmth in the eyes, approachable expression.
  • Realtors and sales professionals need reliable warmth: genuine-smile territory, brighter backgrounds, less formal posture, a look that says "I'm pleasant to work with."
  • Tech founders and creatives need casual confidence: more editorial lighting, less polished posture, register closer to editorial photography than corporate headshot.
  • Academics and teachers need thoughtful approachability: natural lighting, books or neutral backgrounds, quieter register, warm but professional.
  • Actors and models need expressive range: multiple looks, multiple emotions, natural lighting, intimate crop.
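
One way to make register concrete is as a per-industry configuration. Here's a hedged sketch in Python; the field names and values are illustrative assumptions, not the actual schema behind AI Headshots.

```python
# Hedged sketch: industry registers as configuration. Field names and
# values are illustrative assumptions, not the schema behind AI Headshots.
from dataclasses import dataclass

@dataclass(frozen=True)
class Register:
    background: str  # background tone
    attire: str      # default wardrobe direction
    expression: str  # position on the smile spectrum
    lighting: str    # lighting mood
    posture: str     # formality of pose

REGISTERS = {
    "legal_executive": Register("dark", "dark suit or navy", "tempered quarter-smile", "low-key", "formal"),
    "medical":         Register("warm neutral", "soft tones", "warm quarter-smile", "soft", "open"),
    "realtor_sales":   Register("bright", "business casual", "genuine smile", "bright and even", "relaxed"),
    "tech_creative":   Register("editorial", "casual", "quarter-smile", "editorial", "loose"),
    "academic":        Register("neutral or books", "smart casual", "warm quarter-smile", "natural", "quiet"),
    "actor_model":     Register("varied", "multiple looks", "full range", "natural", "expressive"),
}

def register_for(industry: str) -> Register:
    """Look up an industry's register; fail loudly rather than default to the wrong one."""
    if industry not in REGISTERS:
        raise ValueError(f"no register defined for {industry!r}")
    return REGISTERS[industry]
```

The deliberate failure in register_for is the point: defaulting someone into another industry's register is exactly the mistake described next.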

The failure mode we see constantly: someone in one industry getting a headshot shot in the register of another. A doctor who looks like a BigLaw partner (cold). A tech founder who looks like an insurance agent (overly formal). A realtor who looks like an executive (too distant).

What this means for AI-generated headshots: this is where our style pages come from. We trained our AI with explicit register awareness because we'd seen the failure mode play out thousands of times. Most AI tools offer "business casual" or "professional" as their categories. We offer LinkedIn headshots, executive headshots, actor headshots, doctor headshots, lawyer headshots, realtor headshots — because those aren't interchangeable. The register is different and the output needs to reflect it.

Pattern 5: The dominant failure mode

The most consistent failure we see in professional headshots — the one that probably explains the majority of "I need a new headshot" moments in somebody's career — is a combination of four specific issues, almost always appearing together:

  1. Dead eyes. The subject looked at the camera without thinking about anything. The eyes are open, the focus is technically correct, but there's no life behind them.
  2. Shadow problems. Either under-eye circles exaggerated by overhead light, or hard-edged shadows from a single harsh source.
  3. Head tilt. Subject's head angled in a way that reads as uncertain or unserious. Professionals should generally keep their head level or tilted very slightly back, not down.
  4. Wrong cropping. Either too wide (face is a small part of the frame) or too tight (cutting off the top of the head or crowding the shoulders).

These four failure modes cluster together because they all share a common cause: the photo was taken by someone not trained in professional portrait photography. A friend with an iPhone. An HR person with a DSLR. An automated camera in a bad conference room.

The most interesting thing about this pattern isn't that it's the dominant failure mode — it's that all four issues can be fixed. Not by finding a better photographer. By applying basic professional portrait conventions: positioning the subject so the light is at 45° to their face, giving them a beat to think about something specific before the photo, correcting head tilt through direction, and cropping to the professional headshot standard.
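
To make those corrections concrete, here's the same material as a pre-shot checklist in Python. The structure and wording are ours, a sketch for illustration rather than Studio Pod's actual process code.

```python
# Sketch: the four corrections for the dominant failure mode, written as
# a pre-shot checklist. Illustrative only; not Studio Pod's process code.

CORRECTIONS = {
    "dead eyes":       "give the subject something specific to think about for a beat before the shot",
    "shadow problems": "position the key light at roughly 45 degrees to the face, with fill to soften",
    "head tilt":       "direct the head level, or tilted very slightly back, never down",
    "wrong cropping":  "crop to the professional standard: face dominant, headroom intact, shoulders uncrowded",
}

def pre_shot_checklist() -> None:
    """Walk the four corrections before pressing the shutter."""
    for failure, fix in CORRECTIONS.items():
        print(f"{failure}: {fix}")

if __name__ == "__main__":
    pre_shot_checklist()
```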

What this means for AI-generated headshots: this is the core argument for AI-generated headshots specifically built on studio training data. A properly-trained AI has internalized all four corrections. It won't generate dead eyes because its training set didn't have dead eyes. It applies professional lighting by default. It crops to the standard. The subject never has to know any of this — they just upload selfies and receive headshots that don't suffer from the common failure mode.

What AI can replicate, and what it can't

One last pattern — and the most important one for anyone deciding between a real photographer and an AI tool.

AI trained on real studio photography can reliably replicate:

  • Professional lighting treatment
  • Consistent expression register (the quarter-smile family)
  • Correct head and shoulder angles
  • Industry-appropriate wardrobe and background
  • Proper cropping and framing
  • Professional color correction and skin rendering
  • Multiple varied outputs of the same person

What AI cannot yet reliably replicate:

  • The specific editorial intent a great photographer brings to a specific subject
  • Unplanned moments — the laugh the photographer waited for, the tension the photographer broke
  • Hyper-specific editorial decisions (extremely shallow depth of field, unusual locations, wardrobe styling collaboration)
  • The "iconic image" use case — a book jacket, a magazine cover, an About page designed around a photograph

This matters because it reveals when AI is and isn't the right tool. For the 90%+ of professional headshot needs — LinkedIn, team pages, realtor yard signs, corporate directories, hospital profiles — AI trained on studio photography now meets or exceeds the quality of mid-tier photographer sessions. For the 10% of editorial or iconic image use cases, a skilled human photographer remains the right choice.

We built AI Headshots for the 90%. We still recommend Studio Pod — our physical studio — for clients who want the assisted experience, and we still recommend a specialist photographer for the 10% editorial cases. The tools complement each other.

What this means for you

If you're choosing a professional headshot tool, don't evaluate on price or speed alone. Those are table stakes. Evaluate on what the tool was trained on.

An AI tool trained on scraped internet photos will produce outputs that have all the failure modes of the internet's photography — dead eyes, bad lighting, forced expressions, wrong register. An AI tool trained on real studio photography — ours, specifically — will produce outputs that reflect what trained photographers have internalized about professional portraiture. The difference is legible in the output.

You can test it yourself. Upload ten selfies, generate a pack, and compare the results to the category average. If the lighting looks right, the expressions feel natural, and the register matches your industry, you'll know what you're looking at.

That's the 10,000 headshot study, distilled. Everything we've learned, now available to anyone with a phone.


Want to read more? See the 5 rules behind every great professional headshot, or read how to prepare selfies for the best AI output. Or just try AI Headshots — $29 gets you 40 headshots in 30 minutes.

About the data: This piece reports qualitative patterns observed across Studio Pod's operational history. It is not a controlled statistical study. The patterns are directional and based on running the same photography process across thousands of professionals. If you want to cite specific findings, we're happy to discuss — contact us at contact@thestudiopod.com.

About the author

Joseph West

Founder of AI Headshots and Studio Pod — the automated headshot studio in Houston, Texas. Photographer first, AI engineer second.