A/B testing means sending two variants of the same campaign to two equal halves of your audience, measuring the results, and then sending the winning variant to the rest of the segment (or using it next time).
Why A/B test?
Note: A dedicated A/B module is planned for Q3 2026. Until then you run it manually using the steps below – it takes about 10 minutes to set up.
Step-by-step manual A/B test:
1. Create a segment to test on. Example: "Regulars not visited in 60 days" (say 400 guests).
2. Split the segment into two equal halves, for example as two segments named "Test-A" and "Test-B" (200 guests each).
3. Create campaign A. Select segment "Test-A", write your first text variant, and note the number of recipients and the send time.
4. Create campaign B. Select segment "Test-B" and write your second text variant. Send at the same time as campaign A – timing must be identical so you compare apples to apples.
5. Wait 7 days. The metrics that count are open rate (email only), click rate, and – most importantly – bookings/visits among recipients.
6. Compare in Campaign Results. Go to Marketing → Campaigns and click each campaign. Vendion automatically counts how many recipients visited or booked within 7 days of the send.
7. Send the winner to the rest. When a variant clearly wins, use it going forward. In the example above both halves of the 400-guest segment were already tested, so there is no remainder; apply the winning variant to the next similar segment or campaign instead.
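If you prefer to prepare the split outside the product, one approach is to export the segment (for instance as a list of guest emails) and split it randomly in Python. This is an illustrative sketch, not a Vendion feature; the guest list and function name are assumptions:

```python
import random

def split_segment(guests, seed=42):
    """Shuffle a guest list and return two equal halves ("Test-A", "Test-B")."""
    shuffled = list(guests)                # copy; leave the original order untouched
    random.Random(seed).shuffle(shuffled)  # seeded so the split is reproducible
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

# 400 hypothetical guests, as in the example segment
guests = [f"guest{i}@example.com" for i in range(400)]
test_a, test_b = split_segment(guests)
print(len(test_a), len(test_b))  # 200 200
```

Shuffling before splitting matters: slicing an unshuffled list (e.g. alphabetically sorted) could put systematically different guests in each half and bias the test.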
What counts as a "clear winner"?
Vendion logs the following per campaign: open rate (email only), click rate, and bookings/visits among recipients within 7 days of the send.
Rule of thumb: test with at least 100 recipients per variant. Fewer than that and the noise is too high to draw conclusions.
Example – a real A/B test:
| Variant | Text | Recipients | Bookings | Conversion rate |
|---|---|---|---|---|
| A (friendly) | "Hi {name}! We miss you. Book: {booking_link}" | 200 | 18 | 9% |
| B (urgent) | "LAST CHANCE {name}: 15% off tonight {booking_link}" | 200 | 31 | 15.5% |
Winner: Variant B. Next time, use it for the full 400-guest segment (or a similar one).
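The winner calculation is simply bookings divided by recipients. A minimal sketch using the numbers from the table above (the variant labels are just for illustration):

```python
# Numbers from the example table: (bookings, recipients) per variant
variants = {
    "A (friendly)": (18, 200),
    "B (urgent)": (31, 200),
}

# Conversion rate = bookings / recipients
rates = {name: bookings / recipients for name, (bookings, recipients) in variants.items()}
winner = max(rates, key=rates.get)

for name, rate in rates.items():
    print(f"{name}: {rate:.1%}")  # A: 9.0%, B: 15.5%
print("Winner:", winner)          # Winner: B (urgent)
```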
What should you NOT vary at the same time? Change only one thing at a time. If you test both tone AND offer, you don't know what caused the difference. Pure tone test = same offer, different tone. Pure offer test = same tone, different discounts.
Common test ideas that tend to deliver results:
Statistical significance – when can you trust the result?
Rule of thumb to avoid "random winners":
A difference of 9% vs 9.5% with 100 recipients per variant is pure noise, and even with 1,000 per variant it is too small to trust (confirming a gap that small would take tens of thousands of recipients). Look for large differences, like the 9% vs 15.5% in the example above, before declaring a winner.
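A quick way to sanity-check a result is a standard two-proportion z-test, which needs nothing beyond the Python standard library. This is a generic statistics sketch, not a Vendion feature, and the function name is illustrative:

```python
from statistics import NormalDist

def two_proportion_p_value(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test: is the gap between two conversion rates real?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)  # shared rate under "no difference"
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_b - p_a) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))  # p < 0.05 → likely a real winner

# The table above: 18/200 vs 31/200
print(round(two_proportion_p_value(18, 200, 31, 200), 3))  # ≈ 0.047, just under 0.05
# A similar gap with only 100 recipients per variant: 9/100 vs 15/100
print(round(two_proportion_p_value(9, 100, 15, 100), 3))   # ≈ 0.192, could easily be chance
```

Note that with only 100 recipients per variant even a large-looking gap often fails the test, which is why the 100-recipients-per-variant rule above is a minimum, not a guarantee.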
Common pitfalls:
Document what you learn
Create an internal page (Notion, Google Doc, paper notebook – whatever you like) with headings "What works for us" and "What doesn't". Example:
Over time you build your own "playbook" that is worth more than any external best-practice guide – because it is specific to your restaurant and your guests.
When the A/B module ships: It will split automatically, preview both variants side-by-side, and flag the winner when statistical significance is reached. Until then – manual works great.
This feature is part of Vendion Marketing.
Curious how it looks in practice? Read more about the product or book a short demo.