    Campaigns – A/B Testing

    A/B testing means sending two variants of the same campaign, one to each half of your audience, measuring the results, and then sending the winning variant to the rest (or using it next time).

    Why A/B test?

    • Learn which tone works – friendly ("Hi {name}, see you soon!") vs urgent ("LAST CHANCE tonight!")
    • Test different offers – 10% discount vs free starter
    • Compare emoji vs no emoji – does it affect open rate?
    • Optimize length – short SMS vs longer descriptive

    Note: A dedicated A/B module is planned for Q3 2026. Until then you run it manually using the steps below – it takes about 10 minutes to set up.

    Step-by-step manual A/B test:

    1. Create a segment to test on. Example: "Regulars not visited in 60 days" (say 400 guests).

    2. Split the segment into two halves. Two easy ways:

      • Create two segments with the same filters + one extra filter, "First visit within 180 days" vs "First visit more than 180 days ago" – note this split is not random, since newer guests may behave differently
      • Or use tags: bulk-tag 200 guests at random as "Test-A" and 200 as "Test-B" (see the sketch after this list)

    3. Create campaign A. Select segment "Test-A". Write your first text variant. Note the number of recipients and send time.

    4. Create campaign B. Select segment "Test-B". Write your second text variant. Send at the same time as campaign A – timing must be identical so you compare apples to apples.

    5. Wait 7 days. The metrics that count are open rate (email only), click rate, and – most importantly – bookings/visits among recipients.

    6. Compare in Campaign Results. Go to Marketing → Campaigns and click each campaign. Vendion automatically counts how many recipients visited or booked within 7 days of the send.

    7. Send the winner to the rest. When a variant clearly wins, send it to any remaining customers in the original segment. If the two halves already covered everyone – as in the 400-guest example – use the winning variant for the next similar segment instead.
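
    If you would rather split by script than by filter, a few lines of Python can assign tags at random. This is a minimal sketch assuming you can export the segment as a CSV and bulk-import tags back in – "guests.csv" and the "guest_id" column are placeholders, not Vendion's actual export format:

      import csv
      import random

      # Read the exported guest list (assumed one guest_id per row).
      with open("guests.csv", newline="") as f:
          guests = [row["guest_id"] for row in csv.DictReader(f)]

      random.shuffle(guests)            # random order avoids skewed halves
      half = len(guests) // 2
      rows = [(g, "Test-A") for g in guests[:half]] + \
             [(g, "Test-B") for g in guests[half:]]

      # Write a tag file ready for bulk import.
      with open("tags.csv", "w", newline="") as f:
          writer = csv.writer(f)
          writer.writerow(["guest_id", "tag"])
          writer.writerows(rows)

    Random assignment is what makes the two halves comparable; any rule-based split (such as first-visit date) risks baking a real behavioral difference into the groups.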

    What counts as a "clear winner"?

    Vendion logs the following per campaign:

    • Delivered (the message reached the recipient's phone or inbox)
    • Opened (email only)
    • Clicked (email only)
    • Visited within 7 days
    • Booked within 7 days
    • Opt-outs (unsubscribed)

    Rule of thumb: test with at least 100 recipients per variant. Fewer than that and the noise is too high to draw conclusions.

    Example – a real A/B test:

    Variant       | Text                                                  | Recipients | Bookings | Conversion rate
    A (friendly)  | "Hi {name}! We miss you. Book: {booking_link}"        | 200        | 18       | 9%
    B (urgent)    | "LAST CHANCE {name}: 15% off tonight {booking_link}"  | 200        | 31       | 15.5%

    Winner: Variant B. Next time, send it to the full 400-guest segment (or the next similar one) without splitting.

    What should you NOT vary at the same time?

    Change only one thing at a time. If you test both tone AND offer, you don't know what caused the difference. A pure tone test = same offer, different tone. A pure offer test = same tone, different discounts.

    Common test ideas that tend to deliver results:

    • Personal greeting ("Hi Anna") vs impersonal ("Hi dear guest")
    • Emoji at the end vs no emoji
    • Question ("Craving sushi this weekend?") vs statement ("We have sushi this weekend")
    • Discount in kr ("50 kr off") vs discount in % ("10% off")
    • Booking link vs phone number as CTA
    • Morning (09:00) vs lunch (11:30) vs afternoon (15:00) as send time
    • Short SMS (under 160 chars) vs longer descriptive (over 160 chars, i.e. two SMS – see the segment-counting sketch below)
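
    On that last point, segment boundaries are easy to get wrong. Here is a rough sketch of how SMS segments are counted for plain GSM-7 text; note that emoji and other special characters force UCS-2 encoding, which cuts the limits to 70 chars for a single SMS and 67 per segment:

      # Rough segment counter for plain GSM-7 text. A single SMS holds
      # 160 characters; concatenated messages reserve space for a stitching
      # header, leaving 153 characters per segment.
      def sms_segments(text: str) -> int:
          if len(text) <= 160:
              return 1
          return -(-len(text) // 153)   # ceiling division

      print(sms_segments("Hi Anna! We miss you."))   # 1
      print(sms_segments("x" * 200))                 # 2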

    Statistical significance – when can you trust the result?

    Rule of thumb to avoid "random winners":

    • Below 100 per variant → treat the result as a hint, not proof
    • 100–500 per variant → reasonable confidence with a clear difference (>30% relative difference)
    • Above 500 per variant → robust result even with small differences (5–10%)

    A difference of 9% vs 9.5% is noise at these volumes – even with 1,000 per variant the gap is too small to trust. A difference of 9% vs 15%, on the other hand, may still be noise with 100 recipients per variant but is real with 1,000.
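
    If you want more than a rule of thumb, a two-proportion z-test tells you how likely the observed gap is to appear by pure chance. A minimal sketch using only Python's standard library, with the counts taken from the example table above:

      import math

      def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
          """Return (z, p_value) for H0: both variants convert equally."""
          p_a, p_b = conv_a / n_a, conv_b / n_b
          p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled conversion rate
          se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
          z = (p_b - p_a) / se
          # two-sided p-value from the standard normal distribution
          p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
          return z, p_value

      # Variant A: 18 bookings out of 200, variant B: 31 out of 200
      z, p = two_proportion_z_test(18, 200, 31, 200)
      print(f"z = {z:.2f}, p = {p:.3f}")   # z = 1.98, p = 0.048

    Here p ≈ 0.048 just clears the conventional p < 0.05 bar, so variant B's win is likely real – consistent with the 100–500 rule above, since the relative difference is large.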

    Common pitfalls:

    • Different send times. If variant A is sent Monday 10:00 and variant B Tuesday 18:00 you are not just comparing text – you are comparing text + timing. Always send simultaneously.
    • Different target groups. If "Test-A" is weekend guests and "Test-B" is lunch guests, you get skewed results. Split as randomly as you can.
    • A competing campaign running at the same time. If another campaign is live during the test, the two compete for the same guests' attention. Run only one test at a time.
    • Premature conclusions. Wait the full 7-day window before deciding. Many bookings happen on days 3–5.
    • Testing too often. If you A/B test the same segment twice a month, guests tire out. Max one test per segment per quarter.

    Document what you learn

    Create an internal page (Notion, Google Doc, paper notebook – whatever you like) with headings "What works for us" and "What doesn't". Example:

    • "Emoji in SMS: +12% clicks" (March 2026)
    • "Discount in kr beat discount in %: +4%" (April 2026)
    • "Sunday evening sends had the worst open rate" (May 2026)

    Over time you build your own "playbook" that is worth more than any external best-practice guide – because it is specific to your restaurant and your guests.

    When the A/B module ships: It will split automatically, preview both variants side-by-side, and flag the winner when statistical significance is reached. Until then – manual works great.

    This feature is part of Vendion Marketing.

    Curious how it looks in practice? Read more about the product or book a short demo.
