Our A/B Tester Tool Reveals the Truth About Your Campaign

True marketeers know that consistently high engagement rates depend on evaluating every campaign deliverable. Looking at click-throughs, attendee numbers or sign-ups often helps to indicate the success of a digital banner ad, print ad or e-shot. However, to get the clearest picture of which deliverable performs best, a Split Test A/B Calculator can help.

A Split Test A/B calculator takes statistical significance into consideration. The concept dates back to the 1700s, when mathematicians John Arbuthnot and Pierre-Simon Laplace computed p-values for the human sex ratio at birth (interesting fact). It has since been used in many ways, one of which is A/B testing in marketing.

To use Napier’s Split Test A/B calculator, simply follow the below steps.

  1. Input the number of visitors each landing page has received. If it’s an email marketing campaign, input the number of opens for each email.
  2. On the second row, input the number of leads generated from the landing page or from the email.
  3. Click ‘Calculate results’.
  4. The conversion rates and confidence level will appear as percentages (the sketch below shows roughly how they are worked out).
  5. If the difference between the two is statistically significant, the result will read ‘You have a winner! Statistically significant difference’.
  6. If it isn’t, the result will read ‘Sorry, no winner. You can’t say the difference is significant’.
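For anyone curious about what happens behind the ‘Calculate results’ button, here is a minimal sketch in Python of the standard statistics a split-test calculator of this kind typically runs: a two-proportion z-test. It is an illustration rather than Napier’s actual code, and the function name, the 95% threshold and the example numbers are all assumptions.

```python
# A minimal, hypothetical sketch of the maths behind a split-test calculator
# (not Napier's actual implementation): a two-proportion z-test.
from math import sqrt, erf

def split_test(visitors_a, leads_a, visitors_b, leads_b, threshold=0.95):
    """Compare two variants and report conversion rates and confidence."""
    # Steps 1-2 of the tool: visitors (or opens) and leads for each variant
    rate_a = leads_a / visitors_a
    rate_b = leads_b / visitors_b

    # Pooled conversion rate and standard error of the difference
    pooled = (leads_a + leads_b) / (visitors_a + visitors_b)
    se = sqrt(pooled * (1 - pooled) * (1 / visitors_a + 1 / visitors_b))

    # z-score of the observed difference; confidence = 1 - two-sided p-value
    z = (rate_b - rate_a) / se
    confidence = erf(abs(z) / sqrt(2))

    return rate_a, rate_b, confidence, confidence >= threshold

# Example with made-up numbers: 1,000 visitors per landing page, 50 vs 80 leads
rate_a, rate_b, confidence, winner = split_test(1000, 50, 1000, 80)
print(f"A: {rate_a:.1%}  B: {rate_b:.1%}  Confidence: {confidence:.1%}")
print("You have a winner!" if winner else "Sorry, no winner.")
```

With these example numbers, variant B’s 8% conversion rate beats variant A’s 5% with roughly 99% confidence, comfortably clearing the 95% bar that calculators like this commonly use before declaring a winner.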

This tool is especially handy if you are running an email marketing or advertising campaign and need to decide which design to take forward into the next round. If there’s no winner, perhaps it would be best to design a completely new email or advert until there’s a clear victor.

Let the battle of the Split Test commence! 


A Day in the Life of an Account Manager

At Armitage Communications, we’re keen to share with you the different roles we have within the team. From Account Directors through to Marketing Specialists, we have a range of people performing a variety of tasks on a daily basis.

In this blog, we’ll share with you Rose’s typical day. Rose started in PR over five years ago as a Junior Account Executive and is now a Junior Account Manager. 

Morning:

When I arrive in the morning, the first thing I do is check my emails for any urgent items which need to be addressed immediately. I then look online for any news relevant to our accounts, especially around robotics and automation technology, logistics and telecommunications.

It’s difficult not to get sidetracked into reading too much of the news - but also really important to get an overview of what is happening in the industries to provide context for our campaigns and articles.

Next I check in with the members of our Robotics, Telecommunications and Logistics teams to make sure we are all aligned on the high-priority items of the day. If there are any difficulties across any of the projects, I have to think carefully about the best action to take next. A challenge can feel enormous in the moment, but more often than not a small change resolves it; it’s usually a small part of the overall picture, and once addressed, it’s on to the next project. If I’m really stuck on what to do, I can ask an Account Director.

Often the next project will involve writing of some kind. It could be a blog, feature article, opinion piece, case study or script. Depending on the content specifications, it could take up to eight hours to research, plan and draft the piece, or as little as half an hour.

Afternoon:

In the afternoon, I could be pitching features to editors, setting up press release distributions or spending some time on a Skype call with a client to run through briefs for content, events or campaign strategy. 

As an Account Manager, it’s my responsibility to ensure that my clients are well represented in the press and that all content aligns with their key messaging. Usually I will scan the media packs for relevant features to pitch for. Common features I’d go for range from ‘Digitalisation in food manufacturing’ and ‘How to address the skills shortage’ through to ‘Warehouse automation’ and ‘Robotics’, across a range of magazines including Controls, Drives and Automation, Logistics Manager and Food & Drink Network UK. There are also lots of nationals which occasionally feature relevant news that we can reactively pitch against - recently we got a client a piece of coverage in The Times!

Either myself or the Account Executive will draft a synopsis for the article and email it over to the editor. We usually wait a few days before calling to follow up (unless it's a reactive pitch of course, in which case we have to be super quick before the news is no longer relevant). If the editor is interested in the content, then we’ll find out the deadline, word count and any images they’ll need and make sure we note this to remind ourselves to deliver on time.

The brief for the article will be outlined to one of our writers unless we have time to write it ourselves. Then once drafted, the article is sent to the client for approval before submitting to the editor along with any images they need such as headshots or real-world examples of the product in action. 

During the day I often have a Skype call diarised, and I make sure to prepare in good time. I read through any attachments or notes in the invite, and write out beforehand any questions I anticipate needing to ask to glean the relevant information from the client and execute the project effectively. Those 30 minutes to an hour of preparation can make the difference between quality Account Management and last-minute, rushed Account Management, which can lead to lots of revisions and a frustrated client.

The fewer the revisions, the better the value.

Towards the end of the day I review the items I have completed and mark them as done on the work in progress (WIP) sheet. I also consider what projects will need to be completed the next morning. Having deadlines set against each project in the WIP helps to inform my priorities and leads to a much higher client satisfaction rate as this kind of attention to detail and organisation means the work is delivered in good time. 

If I had to sum up what the role of Account Manager requires in a few words, I’d say flexibility, problem-solving abilities, creativity and a passion for nurturing positive client relationships. It helps when you enjoy the accounts which you work on, and have an interest in the subject matter, which I definitely do.

Did I mention I love robots?

If this sounds like a role you’d enjoy and you’re interested in potentially joining us, send your C.V. to [email protected]


AI bias and a new agriculture: ‘AI: More than human at the Barbican’ review part two

Over the last few days we’ve scanned many headlines heralding the future of artificial intelligence, such as the £1bn Series C funding of CMR Surgical, a Cambridge-based company set to launch a surgical robot, and Softbank’s plans to open a cafe run by humanoid robots in Tokyo. These headlines are unsurprising - rapid developments in AI technology mean that what was sci-fi literature fifty years ago is now becoming a reality.

Nowhere is this easier to comprehend than at an exhibition dedicated to the technology. In August we made the most of the longer evenings and made our way to the Barbican for ‘AI: More than human’. Situated within the Barbican Estate in the City of London, the Barbican Centre has a large space fit for hosting thought-provoking events showcasing cinema, theatre, dance and art.

So when we arrived at the venue, our brains were already switched on to learn more about AI and how it’s transforming the world around us. 

Here’s the second part of Account Manager Rose’s review of the exhibition.

By replicating the workings of the human brain in computer programmes, scientists developed artificial ‘neural networks’, and by the early 21st century these had become powerful enough for real-world use. Here we were, three quarters of the way through the exhibition, arriving at the stage where AI began to proliferate into hundreds of applications. What enabled AI to be realised? Partly it was the power of modern computing, but it was also the work of Alex Krizhevsky, who developed AlexNet (a neural network trained to classify images drawn from a database of more than 15 million labelled high-resolution pictures), that got the ball rolling.

The link between this development and other outcomes of AI’s influence was demonstrated by an art piece called ‘Myriad (Tulips)’ by Anna Ridler. The piece on display showed just a fraction of the 10,000 pictures of tulips which she photographed and categorised by hand to highlight the human aspect that sits behind machine learning.

If humans influence AI so much, then can we trust those humans to form a fair representation of the world we live in? Can we rely on humans to use the technology for the betterment of the world? Echoing back to part one, many of us are frightened because at its core AI can be seen to represent a side of humanity that we haven’t quite grasped yet.

The data universe

The human influence on AI was explored in greater detail in the third part of the exhibition, ‘Data Worlds’. Bringing AI’s underbelly to the surface, this section opened with a cartoon depicting AI in China, where the technology not only monitors cities but also keeps track of the population. Later came a smart home experiment by Lauren McCarthy, in which the artist played the role of the home’s intelligent assistant, exploring the relationship between smart devices and the private lives of those who use them. Gender Shades by Joy Buolamwini examined the misrepresentation of race and gender in datasets. All of this conspired to leave me thinking, ‘Is AI a bad move for us?’

It’s reassuring to know that there are some really inspiring people out there conducting research projects that raise these questions. If no questions are asked and we go full steam ahead, we may end up with a world that we don’t really want. In the concluding paragraph of an article published in The Economist last week, a line that rang true for me was ‘If problems can be foreseen they can be more easily prevented.’

But as well as being understandably cautious, we should look at the positives that are coming from AI. The final section of the exhibition ‘Endless evolution’ examined AI’s potential to improve our bodies, eliminate disease and even address famine.

The doctor will see you now

Mental health charity Mind has put the UK’s worry that more and more of us are struggling with our mental health into perspective. Apparently the number of people struggling hasn’t changed; it’s the way we’re coping with it that has taken a more serious turn.

In order to treat mental health problems properly, we either need a lot more counsellors, psychiatrists and medication, or an alternative provided by technology. One section of ‘AI: More than human’ touched on the human need for connection in a progressively digital world, with chatbots programmed to be as human as possible communicating with attendees. Experts are already suggesting that AI could help counsel patients, and online counselling services such as the Big White Wall and Ieso are already in place in some UK regions.

Furthermore, AI can help doctors to detect diseases early and prevent life-threatening outcomes. Just this week, the Director of Google Health, Michael Macdonnel, talked about an early-stage AI-powered system which interprets Optical Coherence Tomography retinal images and identifies the signs of sight-threatening disease.

Other companies are experimenting with 3D printing body parts, such as Axial3D, which is working towards building 3D models of the anatomy from 2D images. The company has already started work on an algorithm which could mean 3D-printed organs become the norm in a hospital near you.

3D printing organs on demand could potentially save thousands of lives.

What’s eating AI?

‘AI: More than human’ also showed a small plant farm nurtured by AI. Small and innocent enough, it echoed plans that are already underway in UK universities for larger farms to begin using smart sensors. These can collect data to provide a greater understanding of crops from a distance so that providing the right fertiliser or amounts of water can be achieved remotely. More judicious use of pesticides can also prevent harm to the soil.

The world’s population is expected to grow from 7.7 billion to nearly 10 billion by 2050. Pitch this against a finite amount of arable land and we need to start thinking about ways to use technology to sustainably produce food, and fast.

Terramera’s Founder Karn Manhas summed it up in an article in Greenbiz earlier this year. He said, ‘Technology such as artificial intelligence (AI), robotics and big data might not be commonly associated with ‘natural’ or ‘health’ movements but actually, these advanced technologies are allowing us to eat cleaner, more locally and more sustainably than ever before.’

Robots picking fruit are helping to close the skills gap as well as reduce food waste. Drone pollinators and self-driving tractors are being developed to help drive efficiency and AI is used to make sense of farm data so that farmers can increase the health of crops, boost yields and ultimately provide better quality, affordable food.

If AI can help us feed the planet, then it’s definitely worth the research.

AI overwhelm

All of this AI in one go was a lot to absorb. It took an AI installation of screens showing butterflies and paintbox colours called ‘What a Loving and Beautiful World’ to round the exhibition off nicely. We could choose to interact directly with the panels, clicking the Chinese calligraphy to influence the space or sit and contemplate the surroundings, in awe of all of the elements combining to create the artwork.

We left asking ourselves the question, “Should we play a passive role in the developments of technology around us or make it our responsibility?”

If AI is to be shaped by human consciousness, then this question should not be asked by attendees of ‘AI: More than human’ alone; it should be asked across the world.


From Golem to governing society: ‘AI: More than human’ review part one

September welcomes the start of another academic year and the media has been busy as usual covering the latest in Science Technology Engineering and Maths (STEM) news. As the skills gap continues to widen, Politics Home reports that primary school teachers are struggling to engage students with STEM subjects. Increasingly, young people have to become responsible for their own development in these areas, dedicating their own time to learn about the latest technologies.

Over the summer we shared with you the list of IET open days taking place across the UK. We hope you and your families got the chance to attend (if you did, please do share your experience with us on Twitter, we’d love to hear from you). To follow our own advice, we also decided to delve a bit deeper into tech over the six-week holiday and attended the critically acclaimed ‘AI: More than human’ exhibition at The Barbican.

Here’s our Junior Account Manager Rose’s account of the show. Broken into two parts, this is part one:

Sometimes you can see it, other times you can’t: Artificial Intelligence has a habit of sneaking up on us when we least expect it. Whether it’s the use of facial recognition in London’s King’s Cross or the Cambridge Analytica scandal, there are many who are wary of the fast-developing technology, and understandably so.

However, are our fears more to do with how the technology is used, rather than the technology itself? If it’s the former, we need to ask some difficult questions about ethics. Do we trust homo sapiens to implement technology for the greater good of mankind, the planet and other species that live here? ‘AI: More than human at the Barbican’ prompted many such questions. It explored how civilisations across the centuries have worked, albeit sometimes unknowingly, towards today’s rapidly-developing world of advanced technologies. But just as any good exhibition should, it also provided some very interesting answers to how and why the AI revolution has happened and what the future may look like if we continue in the same vein.

For how long have we wanted to create robots?

The exhibition opened with ‘The dream of AI’ and showed how humans have always been curious about the artificial creation of living entities, whether through magic, science, religion or illusion. From the belief in sacred spirits living within inanimate objects in Shintoism through to the Gothic literature of the nineteenth century, the early roots of AI manifest themselves in different ways across various cultures as far back as 400 BCE.

Lynne Avadenka, ‘Breathing Mud’

Take, for example, the religious tradition of the Golem in Judaism. A mythical figure, the Golem is described in the Talmud, the Jewish holy book, as originating from dust or clay ‘kneaded into a shapeless husk’ and brought to life through complex, ritualistic chants described in Hebrew texts. The above image, taken from the artist Lynne Avadenka’s book ‘Breathing Mud’, explores the relationship between sacred letters and the life which is given to the Golem, and by extension to the world. This reminds me of early mathematical diagrams and the coding so often used today to program otherwise inanimate objects such as robots.

Apparently Jewish mystics in Southern Germany made attempts to create a Golem in the Middle Ages and believed this process would bring them closer to God. Is humankind’s fascination with creating artificial life a spiritual exercise after all?

The Uncanny Valley

Later in this section of the exhibition, the Gothic tradition of the nineteenth century was cited as significant. Gothic novels such as Mary Shelley’s ‘Frankenstein’ (1818) and Bram Stoker’s ‘Dracula’ (1897) blur the line between the living and the dead and evoke an emotional response of terror - yet people continue to enjoy these stories and the many films and television series that have stemmed from them.

Is it the element of the uncanny within these stories which appeals to us? Sigmund Freud’s essay ‘The Uncanny’ (1919) defines ‘uncanny’ as ‘belonging to all that is terrible - to all that arouses dread and creeping horror’, but, according to English Professor Jen Boyle’s interpretation of the text, it also explains that the uncanny arises when ‘something unfamiliar gets added to what is familiar’.

Perhaps this is why we get so perturbed by Count Dracula, essentially a human being with a deathlike twist. Or by Frankenstein the great inventor, who made a monster during a scientific experiment using electricity and human body parts?

These creatures remind us of us - they’re part human, part monster. However, instead of supporting the positive self-image we like to preserve, they actually highlight the darker side of our psyches. They expose the capacity for human beings to become twisted and give in to their innermost desires.

‘AI: More than human’ goes even further in its exploration of the uncanny and its relationship to AI. The Uncanny Valley, a hypothesised relationship between the degree of an object’s resemblance to a human being and the emotional response it provokes, was demonstrated in a graph (see below). It shows that as the appearance of a robot is made more human, people respond to it more empathetically, until it looks almost, but not quite, human - social humanoid robots, for example - at which point the response quickly turns to revulsion.

The Uncanny Valley Graph

Equally, if AI takes on too many human qualities such as empathy, creativity and leadership, many of us become perturbed, which is continually reflected in the news headlines today. 

Mind machines

The exhibition continued with a close look at the technological developments of the 19th and 20th centuries when the belief that rational thought could be systematised and turned into formulaic rules became more prevalent. Ada Lovelace, often considered the world’s first computer programmer, wrote a letter concerning a ‘calculus of the nervous system’ as early as 1844. As a young girl she was a particularly keen mathematician and was taken by her mother to see a demonstration model of the Difference Engine, the first computing machine designed by Charles Babbage. Ten years later she worked with Babbage on the Analytical Engine, a general purpose computer which had a store of 1,000 numbers of 40 decimal digits. The programming language was very similar to that used later by Alan Turing during the specification of the Bombe in 1941.

During WWII, the Bombe was used by Turing to decode messages sent by the Germans. It played a pivotal role in enabling the Allies to defeat the Nazis, and it paved the way for many other computers such as the ENIAC (1946) and the UNIVAC (1951).

One of the most significant developments in the history of AI happened in 1956 at the Dartmouth Conference, a two-month event organised by computer scientist John McCarthy. Everybody who was anybody in the world of computers attended to work on the problem of how machines could use language, form concepts and improve themselves over time. It may not have met everybody’s expectations, but it was there that the term ‘artificial intelligence’ was coined. The UK followed with the ‘Mechanisation of Thought Processes’ conference in 1958.

It would only be a matter of 30 years or so before the Golden Era of computer technology began (think Windows 95!) and the first robots were constructed by the Massachusetts Institute of Technology (MIT). Attila was also the first robot that I saw at ‘AI: More than Human’, and for me it marked the great leap made by humans from stationary thinking machines to animate digital creatures.

This was when the exhibition took a turn into the world of AI as we’ve come to know it today. In part two, I’ll explain how ‘AI: More than Human’ showed the many possible benefits of AI such as its potential to eradicate illnesses and produce whole new food groups. It also examined its darker side - the inherent prejudices it can hold and its capacity to ultimately govern society.

Until next time. 🤖