Handle folded bibs, finish line chaos, and 10,000-50,000 participants with multi-bib AI detection that beats manual tagging by 25x
Marathon photography is scale at its extreme: 10,000-50,000 participants, all wanting their photos, paper bibs folding under sweat and motion, finish line chaos with 8-15 runners in a single frame. Photographers need to identify every bib even when partially obscured, then deliver same-day galleries for top finishers.
Typical Event
1 day, 6-8 hours of shooting
Photo Volume
20,000-100,000+ photos per event
Delivery
Same-day for top 1,000 finishers, 24-48 hours for full gallery
Key Challenge
Multiple runners per frame with folded, sweaty, rain-damaged bibs at varying distances and angles
Download the official starting list CSV from the race organizer (usually includes bib number, name, age category). This is your lookup table — RaceTagger will match detected numbers to runner names. Clean it: remove duplicates, ensure bib numbers are in the first column, save as UTF-8.
Pro tip
Include ALL participants, not just elites. Recreational runners (bibs 5000+) account for 80% of photo sales. Missing their names means losing 80% of revenue.
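The cleanup steps above (dedupe, bib number first, UTF-8) can be scripted. Here is a minimal sketch using Python's standard `csv` module — the column names (`bib`, `name`, `category`) are illustrative and should be adjusted to whatever headers your organizer's export actually uses:

```python
import csv

def clean_start_list(raw_rows, bib_col="bib"):
    """Deduplicate a starting list and put bib numbers in the first column.

    `raw_rows` is a list of dicts as produced by csv.DictReader.
    Column names ("bib", "name", "category") are example assumptions,
    not a RaceTagger requirement beyond bib-number-first.
    """
    seen = set()
    cleaned = []
    for row in raw_rows:
        bib = row[bib_col].strip()
        if not bib or bib in seen:
            continue  # skip blank bibs and late-entry duplicates
        seen.add(bib)
        # Bib number first, so the lookup keys on column 1
        cleaned.append([bib, row.get("name", ""), row.get("category", "")])
    return cleaned

# Example rows, including one duplicate entry
rows = [
    {"bib": "101", "name": "Jane Doe", "category": "F35"},
    {"bib": "101", "name": "Jane Doe", "category": "F35"},  # duplicate
    {"bib": "102", "name": "John Smith", "category": "M40"},
]

# Write the cleaned list as UTF-8
with open("start_list_clean.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["bib", "name", "category"])
    writer.writerows(clean_start_list(rows))
```

Ten lines of script replaces the manual spreadsheet pass, and you can rerun it in seconds when the organizer sends a last-minute updated list.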
Avoid high angles (scaffold photography foreshortens bibs). Position at chest height or slightly below for the flattest bib view. Stay ready for sustained action at the finish line — 8-15 runners cross per minute at peak. Shoot RAW for maximum recovery in post-processing. Plan 2-3 shooting positions across the course (start, halfway, finish) — each gives different bib angles.
Pro tip
At the finish line, shoot one frame per runner crossing, not bursts. Multiple frames of the same runner waste processing tokens. One clear angle is better than 5 blurry ones.
Create an event folder in RaceTagger. Drag in your RAW or JPEG files. Import the CSV starting list you prepared in Step 1. Hit 'Process Event' — the AI detects bibs and matches them to runner names, working through photos in parallel batches. For 30,000 photos, budget ~2 hours of processing time. Check your token balance: ~500 tokens cover 3,000 photos at the free tier.
Pro tip
Don't wait for all photos to process. Tag and review in batches. Sort by confidence score — manually review only the flagged low-confidence photos (typically 8-12% of shots).
RaceTagger flags photos where it's less than 90% confident in the bib read. For rain-damaged or heavily occluded bibs, this is normal — rain races flag 15-20% of shots. Open flagged photos in the review panel. Either confirm the detection (hit 'Good'), manually input the correct number, or mark as 'Can't determine' for photos where the bib is completely hidden.
Pro tip
Use the runner's race pace and position to contextually verify ambiguous numbers. If you see bib #2847 in the finish line photo and bib #2741 10 feet behind, the AI might have read one as the other. Check the starting list — did they have similar paces?
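The confidence-sorted review pass described above can be sketched as a short script. The record format here (`file`, `bib`, `confidence` fields) is a hypothetical export shape for illustration, not RaceTagger's documented schema:

```python
# Hypothetical per-photo detection records -- field names are
# illustrative, not RaceTagger's actual export format.
detections = [
    {"file": "IMG_0001.jpg", "bib": "2847", "confidence": 0.98},
    {"file": "IMG_0002.jpg", "bib": "2741", "confidence": 0.71},
    {"file": "IMG_0003.jpg", "bib": "5003", "confidence": 0.88},
]

THRESHOLD = 0.90  # mirrors the 90%-confidence flag described above

# Build the review queue: lowest confidence first, so the hardest
# reads get fresh eyes before reviewer fatigue sets in.
review_queue = sorted(
    (d for d in detections if d["confidence"] < THRESHOLD),
    key=lambda d: d["confidence"],
)

for d in review_queue:
    print(f"{d['file']}: bib {d['bib']} read at {d['confidence']:.0%}")
```

Sorting ascending by confidence is a deliberate choice: it front-loads the genuinely ambiguous reads (the folded and rain-damaged bibs) while the near-threshold ones at the end of the queue are usually quick confirmations.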
Export the tagged metadata as XMP sidecar files (RaceTagger's native format) or IPTC/EXIF keywords. These include the runner's name, bib number, and race position. Open your RAW files in Lightroom — the XMP metadata will appear automatically in the Keywords panel. Add your photographer credit and copyright. You now have fully tagged, deliverable images.
Pro tip
If delivering through an event platform (most marathons have a custom gallery site), check their metadata requirements first. Some platforms auto-populate runner name from bib number if you include the number — saves you uploading a separate CSV match file.
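To see what a sidecar carries, here is a minimal sketch that writes an XMP file with keywords in the Dublin Core `dc:subject` bag — the field Lightroom reads into its Keywords panel. The keyword formats (`bib:2847`, `position:14`) are illustrative assumptions, and RaceTagger's actual sidecars may include additional namespaces:

```python
# Minimal XMP sidecar sketch. Lightroom picks up keywords from the
# dc:subject bag; the exact fields RaceTagger writes may differ.
XMP_TEMPLATE = """<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description xmlns:dc="http://purl.org/dc/elements/1.1/">
   <dc:subject>
    <rdf:Bag>
{items}
    </rdf:Bag>
   </dc:subject>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>
"""

def write_sidecar(image_path, keywords):
    """Write IMG_0001.xmp next to IMG_0001.jpg, one rdf:li per keyword."""
    items = "\n".join(f"     <rdf:li>{k}</rdf:li>" for k in keywords)
    sidecar = image_path.rsplit(".", 1)[0] + ".xmp"
    with open(sidecar, "w", encoding="utf-8") as f:
        f.write(XMP_TEMPLATE.format(items=items))
    return sidecar

# Hypothetical tag set for one finish-line photo
path = write_sidecar("IMG_0001.jpg", ["John Smith", "bib:2847", "position:14"])
```

Because the sidecar sits next to the RAW file with the same base name, Lightroom reads it automatically on import — no extra matching step needed.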
Use Lightroom's publish feature or your event platform's uploader to push tagged images. Your photos are now searchable by runner name. Runners can find their photos by searching 'John Smith' or their bib number. Set up a direct link for top 100 finishers (same-day delivery, builds goodwill) and a public gallery for the full field (next morning or 24 hours later).
Pro tip
Deliver to the event organizer ASAP — they'll promote it on their social media and website, driving traffic and photo sales. First photographer to deliver wins the contract for next year's event.
Why it's hard: Runners bend at the torso during the race. Paper bibs crease through the middle, distorting digits. A '4' looks like a '1', an '8' becomes a '3'. The fold happens unpredictably.
How AI helps: AI vision understands that partially visible digits are still part of the same bib. It infers missing digits from context and position on the body. Confidence scores flag the toughest reads for human review.
Why it's hard: Finish line chaos means multiple runners at different distances in the same frame. Bibs are at different angles, sizes, and contrast levels. Traditional OCR can only read one region at a time.
How AI helps: RaceTagger detects ALL visible bibs in a single photo and tags that photo to every identified runner. A finish line shot with 8 runners gets 8 separate bib reads and 8 tags in the gallery.
Why it's hard: Runners layer up or wear gear. A hydration vest covers the top half of the bib, leaving only 2-3 digits visible. A running belt obscures the bottom. The visible portion is ambiguous without context.
How AI helps: The AI model is trained on thousands of partially obscured bibs. It uses the visible digits plus bib position on the body to infer the full number. Low-confidence reads are flagged for human review.
Why it's hard: In rain races, bibs get soaked. Ink runs, paper loses its flat surface, numbers blur into the background. Contrast drops to nearly zero. Photographers love rain photos (dramatic, emotional) but they're the hardest to tag.
How AI helps: AI handles low-contrast scenarios better than traditional OCR because it reads the entire bib region, not isolated characters. Still, rain bibs flag 15-20% of shots for manual review. Budget extra time for wet races.
Why it's hard: Some marathons use two bibs: a timing chip on the wrist or ankle, and a printed number on the chest. Only the chest bib is visible in photos. Import the wrong starting list and you'll get wrong name matches.
How AI helps: RaceTagger always reads the visible printed bib. Make sure your starting list maps bib numbers to runner names (not chip IDs). The AI will match detected numbers to the names in your CSV.
Manual Tagging
8-16 hours for 30,000 photos (team of 3-4 taggers)
90-95% on clean bibs, 70-80% on folded/obscured (fatigue drops accuracy after 6+ hours)
With RaceTagger AI
~2 hours for 30,000 photos (fully automatic batch processing)
95-97% clean bibs, 88-93% challenging (folded/obscured), 75-82% worst-case with confidence flags
Real-world scenario
You arrive at 6 AM for a 50,000-runner marathon starting at 8 AM. You position yourself at the start (chest-high bibs, 20,000 photos), halfway point (15,000 photos), and finish line (35,000 photos). Your team shoots 70,000 photos total across all positions. By 3 PM, your cards are full and the race is done. You import the starting list CSV (5 minutes), drag your photos into RaceTagger (2 minutes), hit 'Process Event', and take a break. By 6 PM, processing is done. You spend 90 minutes reviewing flagged low-confidence photos (the folded bibs and rain shots). By 8 PM, you export metadata and upload to the event's gallery platform. Galleries go live. Within 30 minutes, 2,000+ runners have found their photos. Within 24 hours, you've sold prints from 15,000+ photos — revenue that competitors won't see until tomorrow morning, if at all.
Same-day delivery = you get the next year's contract. Processing took 2 hours of AI work + 90 minutes of review. Manual would have taken your team 12 hours overnight. You made €2,000 profit instead of breaking even on labor costs.
500 free tokens. Upload photos from your last event and see the tags RaceTagger generates — no credit card needed.
Start tagging for free →
How does RaceTagger handle multiple runners crossing the finish line at the exact same moment?
It detects all visible bibs in the frame, regardless of how close together they are. A photo with 6 finishers simultaneously gets 6 separate bib detections. You get 6 tags in the gallery, one per runner. That's the entire point — mass participation events demand multi-bib detection.
What if a runner's bib is completely hidden (under a jacket) and we can't see any part of it?
RaceTagger will flag it as 'low confidence' or 'undetectable'. You can either skip that photo (mark as 'can't determine'), or manually input the number if you can identify the runner by other means (timing data, position). Be honest with yourself — if you can't see the bib, don't guess.
Do we need to clean the starting list before importing, or can we just dump the raw CSV from the race organizer?
Check it first. Ensure bib numbers are in the first column, remove duplicates (some races have late-entry duplicates), and verify it's UTF-8 encoded. Spend 10 minutes cleaning, save yourself 2 hours of matching errors.
What happens with our free 500-token tier? How many photos can we process?
~500 tokens process ~3,000 photos at standard quality settings. For larger events, you'll need a paid plan. Most event photographers run 5-10 marathons per year, so a €50/month plan pays for itself with 2-3 events.
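The token math above scales linearly, so you can budget an event in one line. A quick sketch, treating the 500-tokens-per-3,000-photos free-tier figure as the rate (an estimate — actual consumption depends on quality settings):

```python
import math

FREE_TIER_TOKENS = 500
PHOTOS_PER_FREE_TIER = 3000  # free-tier figure quoted above; treat as an estimate

def tokens_needed(photo_count):
    """Rough token budget: scale the 500-tokens-per-3,000-photos rate."""
    return math.ceil(photo_count * FREE_TIER_TOKENS / PHOTOS_PER_FREE_TIER)

print(tokens_needed(30_000))  # 5000 tokens -- ten times the free tier
print(tokens_needed(70_000))  # 11667 tokens -- the full three-position scenario
```

Running the numbers before race day tells you whether your current balance covers the event or whether you need to top up before the cards come back full.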
Can RaceTagger handle rain-damaged bibs where the ink has run and colors bleed into the numbers?
95% accuracy on clean bibs, but rain drops it to 75-82% with increased flagging (15-20% of the set). Plan for extra manual review time. The good news: rain photos are the most popular with runners and sell at premium prices, so the extra review effort is worth it.
Related
Multi-Bib Detection in Marathon Photography — How AI Handles Finish Line Chaos
Deep dive into the specific challenge of detecting 5-15 bibs in a single finish line photo — the defining problem of marathon photography
Related
RaceTagger + Lightroom: The Complete Marathon Photo Workflow
Step-by-step integration between RaceTagger and Lightroom, from import to published gallery — practical for photographers already using LR
Related
How to Process 100,000 Marathon Photos in 3 Events and Triple Your Revenue
Business-focused guide on how AI tagging improves turnaround, delivery speed, and profit margins for event photographers