Thanks Will! Very good pointers. Is it worth A/B testing both emails, or should we just go with email 2 (which might be stronger) and review results after 50 people have gone through the cadence?
The problem with A/B testing a 50-person list is that the sample is too small for the results to be statistically significant, so it's difficult to pull repeatable insights from the campaign.
As it's the first cadence, I'd suggest getting it out there, seeing which messages get responses, and then A/B testing the worst-performing emails in the future.
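To put a rough number on why 50 recipients is too few: a standard two-proportion sample-size calculation shows how many people per variant you'd need to reliably detect a difference in reply rates. The 10% vs 15% reply rates below are hypothetical, not from this campaign; this is just a sketch of the statistics, not a forecast.

```python
from statistics import NormalDist
from math import sqrt, ceil

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.8):
    """Approximate recipients needed per variant for a two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for significance level
    z_b = NormalDist().inv_cdf(power)          # critical value for desired power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical reply rates of 10% vs 15%:
print(sample_size_two_proportions(0.10, 0.15))  # 686 per variant
```

Detecting even a fairly large lift (10% to 15%) needs hundreds of recipients per variant, so a 50-person list split in two gives essentially no chance of a conclusive result.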