A short, honest tour of how lokal turns one screenshot into many.
1. Upload
The browser sends the image to the lokal backend. The file is stored under your workspace. Nothing is sent to AI providers yet.
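A minimal sketch of what that storage step could look like, assuming a Node backend and a `workspaces/<id>/originals/` layout (the real paths may differ):

```ts
// Hypothetical sketch of the upload step: persist the file under the
// user's workspace before any AI provider is involved.
import { mkdirSync, writeFileSync } from "node:fs";
import { join } from "node:path";

// The directory layout here is an assumption, not lokal's actual paths.
function storeUpload(workspaceId: string, filename: string, bytes: Buffer): string {
  const dir = join("workspaces", workspaceId, "originals");
  mkdirSync(dir, { recursive: true });
  const path = join(dir, filename);
  writeFileSync(path, bytes);
  return path; // nothing has left the backend at this point
}
```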
2. Job split
When you click Translate with five screenshots and three target languages selected, lokal creates 15 jobs (5 × 3), one per screenshot and language pair. Each job gets a unique ID and goes into a queue.
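The fan-out is a plain cross product. A sketch, assuming an in-memory queue and illustrative job fields rather than lokal's actual types:

```ts
import { randomUUID } from "node:crypto";

interface TranslationJob {
  id: string;
  screenshotPath: string;
  targetLocale: string;
}

function splitJobs(screenshots: string[], locales: string[]): TranslationJob[] {
  // One job per screenshot-locale pair: 5 screenshots x 3 locales = 15 jobs.
  return screenshots.flatMap((screenshotPath) =>
    locales.map((targetLocale) => ({ id: randomUUID(), screenshotPath, targetLocale }))
  );
}

const queue = splitJobs(
  ["a.png", "b.png", "c.png", "d.png", "e.png"],
  ["de", "fr", "ja"]
);
console.log(queue.length); // 15
```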
3. Per job: extract
A worker picks up the job. It reads the original screenshot and extracts the visible text: title, subtitle, body, button labels. The step also records where each piece of text sits, so the layout can be rebuilt.
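A sketch of what extraction might hand to the next stages, assuming pixel bounding boxes; the field names are illustrative, not lokal's schema:

```ts
type TextRole = "title" | "subtitle" | "body" | "button";

interface ExtractedText {
  role: TextRole;
  text: string;
  // Position on the original screenshot, so the layout can be rebuilt.
  box: { x: number; y: number; width: number; height: number };
}

// Example of what a worker might extract from one screenshot:
const extracted: ExtractedText[] = [
  { role: "title", text: "Track every workout", box: { x: 80, y: 120, width: 640, height: 96 } },
  { role: "button", text: "Get started", box: { x: 240, y: 1900, width: 320, height: 72 } },
];
```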
4. Translate
Extracted text is sent to a language model along with the source and target locale, plus context (this is App Store marketing copy, keep it short, follow App Store rules). The result is concise, marketing-quality copy in the target language.
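A sketch of how that request could be assembled, assuming a generic chat-style model behind a placeholder `callModel` function:

```ts
interface TranslateRequest {
  sourceLocale: string;
  targetLocale: string;
  texts: string[];
}

function buildPrompt(req: TranslateRequest): string {
  // The wording is illustrative; it mirrors the context described above.
  return [
    `Translate this App Store marketing copy from ${req.sourceLocale} to ${req.targetLocale}.`,
    "Keep each line short and follow App Store rules.",
    ...req.texts.map((t, i) => `${i + 1}. ${t}`),
  ].join("\n");
}

async function translate(
  req: TranslateRequest,
  // Placeholder for whatever model client lokal actually uses.
  callModel: (prompt: string) => Promise<string[]>
): Promise<string[]> {
  return callModel(buildPrompt(req)); // one translated string per input line
}
```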
5. Render
The original screenshot, the new text, and the exact target resolution (e.g. 1320×2868 for iPhone 16 Pro Max) are sent to the chosen image model: GPT Image, Nano Banana, or Nano Banana Pro. The model returns a new image.
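A sketch of the render call. The instruction text and the `callImageModel` client are assumptions; only the model names and the resolution example come from above:

```ts
interface RenderRequest {
  originalPath: string;
  translatedTexts: string[];
  width: number;   // e.g. 1320 for iPhone 16 Pro Max
  height: number;  // e.g. 2868
}

type ImageModel = "GPT Image" | "Nano Banana" | "Nano Banana Pro";

function renderInstruction(req: RenderRequest): string {
  // The exact size is spelled out so the model can match it.
  return [
    `Recreate this screenshot with the text replaced by: ${req.translatedTexts.join(" / ")}.`,
    `Output resolution must be exactly ${req.width}x${req.height} pixels.`,
  ].join("\n");
}

async function render(
  model: ImageModel,
  req: RenderRequest,
  // Placeholder for the provider-specific client lokal actually uses.
  callImageModel: (model: ImageModel, instruction: string, imagePath: string) => Promise<Buffer>
): Promise<Buffer> {
  return callImageModel(model, renderInstruction(req), req.originalPath);
}
```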
6. Validate
lokal verifies that the output dimensions match the target. If the model returns the wrong size, the job retries with stricter prompting before giving up.
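A sketch of that check, reading the dimensions straight from the PNG's IHDR chunk; the retry count and the stricter-prompt switch are assumptions:

```ts
function pngSize(png: Buffer): { width: number; height: number } {
  // In a PNG, the IHDR chunk starts right after the 8-byte signature:
  // width is the big-endian u32 at offset 16, height at offset 20.
  return { width: png.readUInt32BE(16), height: png.readUInt32BE(20) };
}

async function renderValidated(
  renderOnce: (strict: boolean) => Promise<Buffer>,
  target: { width: number; height: number },
  maxRetries = 2 // assumed retry budget
): Promise<Buffer> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    // Retries switch to a stricter prompt that repeats the exact size.
    const image = await renderOnce(attempt > 0);
    const { width, height } = pngSize(image);
    if (width === target.width && height === target.height) return image;
  }
  throw new Error(`could not produce a ${target.width}x${target.height} image`);
}
```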
7. Deliver
The result is saved next to the original under your workspace. You can preview, edit copy, regenerate, or download.
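A sketch of where a result could land, assuming the workspace layout from the upload sketch above:

```ts
import { join, parse } from "node:path";

// Naming convention is hypothetical: translations sit next to the original.
function outputPath(originalPath: string, targetLocale: string): string {
  const { dir, name, ext } = parse(originalPath);
  return join(dir, `${name}.${targetLocale}${ext}`);
}

console.log(outputPath("workspaces/acme/originals/a.png", "de"));
// workspaces/acme/originals/a.de.png
```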
What is not in the pipeline
- No data is shared with anyone outside the chosen provider.
- No data is used for training.
- No analytics on screenshot content.