Small data, or just far away?

[Image: data streams, illustrating a post about big data on the Cecilia Unlimited blog]

(This is a longer post – probably around 5 minutes to read, so do settle in with a cup of tea.)

Big data. Big Data. BIG DATA. Go on, say it again: BIIIIIGGGG DATAAAAAAA. Is it just me or is everyone else thoroughly sick of those words? (In my head, they sound just like Keehar the seagull in Watership Down saying ‘Beeg Vawter’ in a strange Germanic accent, but that probably is just me.)

Apart from the fact that I’m sick of the phrase, my main issue with Big Data is that it’s very bad at telling you anything about WHY people do the things they do. And that’s really important, if you’re going to understand anything about your customers or your people.

Can you manage what you can’t measure?

Brand and communications have always been nebulous activities, very hard to pin down. We do some stuff over here, some things happen over there, we’re not completely sure which thing we did to make them happen (but we’re not going to admit that in public). And that should be as expected: any process that involves human beings is going to be full of uncertainties and weirdness, as I pointed out in an earlier post, and anyone who tells you different is lying.

But now suddenly we can measure so many more things. We have click-through rates and engagement ratios and numbers of retweets, and software that tells us when people have opened emails and how many times, and Google can tell us what they did on our website and how long they spent with our blog post open, and suddenly we have masses of data, which means surely we must know more about our customers or our people, right?

Well, not necessarily. As Sam Ransbotham said in an MIT Sloan article last year, data is not the same thing as facts. He says “Facts are much harder to come by than data. While data seems to promise objectivity, instead it requires analysis – which is replete with subjective interpretation.” One of the dangers of getting our heads stuck too far into huge datasets is that we can start to see more and more correlations, and as anyone who has ever done a statistics course can tell you, correlation does not imply causation. As Anja Lambrecht and Catherine Tucker pointed out just this week, “the skill in making big data valuable is being able to move from mere observational correlations to correctly identifying what correlations indicate a causal pattern and should form the basis for strategic action. Doing so often requires looking beyond big data.”

A good example here is re-targeted ads. You know the sort of thing: you look at a pair of boots on a retail website, only to be followed around the internet by the same boots, popping up on every website you visit. But if we do buy the boots, despite reams of data about us as customers, the boot company still doesn’t really know which part of its harassment campaign made us buy them – if any. We might have clicked on the ad to make the purchase, but the buying decision could have been made at a completely different moment, and the ad was just handy. Similarly, your analytics might tell you that someone had your website open for three hours, but this tells you nothing about why. They might have had it open to point out a hilarious typo to all their friends, or simply left it open while they did something else. (There’s a whole discussion to be had here about measuring attention, but that’s for another post.)

I’ve heard re-targeting described as the equivalent of being pursued by the boot salesman into every other shop we visit, but it’s really not. If a boot salesman did pursue us from shop to shop, we’d find it annoying, but we’d also wonder what it was about us that made the salesman so keen. We might feel flattered, especially if he told us how well the boots suited us, and how he had only one pair left and was saving them for someone really special. If he’s a good salesman, we’d buy the boots because he’s made the effort to build a great relationship with us. But we know re-targeting ads are automated, so in fact it’s like being followed around by a robot salesman repeating ‘buy boots, buy boots’ in a mechanical monotone – we don’t feel special, or that the salesman really cares whether we buy the boots or not. We are just one data point in a million, a blip on the bottom line.

Machine learning – very cool, but…

So I think the other problem with big data is that sometimes we can become a bit too entranced by the novelty of the technology involved in analysing it, and forget that humans can be pretty good at understanding each other all by themselves. At a conference recently, I heard Melanie Cook of Sapient Nitro talking about a leadership project she’s working on with Caltech. The project combines a variety of inputs – KPIs, financial data and the like – with data from an organisation’s communications (Melanie wasn’t specific but I’m assuming emails, internal messaging etc) and puts them through an artificial intelligence system to produce a kind of temperature check of how the organisation is doing – an updated, souped-up version of We Feel Fine, with blockchain technology to ensure complete anonymity.

[Image: We Feel Fine]

Image credit – Jonathan Harris of We Feel Fine

I genuinely don’t know how I feel about this. I loved We Feel Fine, but it didn’t feel intrusive in the way that this does, because it used blog posts as its dataset – blog posts have been put out there for public consumption so they feel like fair game. (Also I loved We Feel Fine principally because it was a beautiful piece of data visualisation.)

But two things occur to me about this project: firstly, while this might be an interesting exercise, I would argue that you should definitely not make the data visible to your organisation, and certainly not in real time. You can imagine some seriously worrying feedback loops taking place in people’s minds: ‘Most people in the organisation feel happy. I don’t feel happy. What’s wrong with me? Why aren’t I happy when the organisation is happy? Maybe I’m in the wrong job? Maybe I should be happier? But now I’m worrying about not being happy, and that’s making me more unhappy. Paul in Finance told me he wasn’t feeling happy, so was he lying? Why would he lie to me about feeling unhappy?’ (OK, so maybe not everyone’s head is as bonkers as mine, but I would still argue that making the results visible could have a serious impact on how your people feel.)

Secondly, if you want to know how people are feeling in your organisation, why not ask them – or ask their managers? Talk to your teams, walk around the floor, make yourself approachable, hold monthly Town Hall sessions and build a culture where people can be open with you. I absolutely agree with Melanie Cook’s assertion that employee surveys are a terrible way of finding out how people are – once a year you might find out how they felt on one particular day (probably about six months earlier), which is no use to anyone. But if you need to analyse how many times your employees said ‘Fuck I’m bored’ over internal messaging to find out how they’re feeling, then there’s a serious failure of management going on. (To be fair to Melanie, she pitches this as more of an early warning system for leaders rather than the only management tool they’ll ever need, but I still maintain that a leader who’s in touch with their organisation will pick up discontent at least as early as any AI.)

It’s all about feelings

What I do admire about this project is that it’s trying to find out how people feel, rather than what they do. As we all know, brands – whether employer, FMCG or B2B – are made up of how people feel about them. I recently read Martin Lindstrom’s book Small Data, which discusses some interesting case studies from his brand consultancy work. As you might guess from the title, Lindstrom doesn’t have a lot of time for big data, preferring to dig deep into customers’ lives to gain his insights (literally, in some cases, hunting through their bins and examining their fridge magnets). I found the book a bit frustrating to read, as it gives the impression that his work singlehandedly changes entire brand strategies, which I wasn’t completely convinced about: I’d have wanted to hear more about how it fitted with other types of insight to provide a more complete picture of the opportunities. (Also, he is responsible for a supermarket in America where, whenever the roast chicken in the rotisserie is done, all the staff throughout the store have to do a chicken dance. I think if I worked there I would be thinking of 101 ways to kill him with every shift.)

But the important point of the book for me is to show that if you really want to understand how your people or customers feel about something, there’s really no substitute for asking them, ideally face to face, and spending some serious time listening to the answer. What associations does your brand have for them? What feeling did you provoke in their fevered mind that made them buy your product/join your company/recommend your service? As yet, the machine, CRM system or analytics tool doesn’t exist that can ask that question, and really properly understand the answer, with all the nuances and shades of grey that come with being a human.

So where does this leave you, as a marketing or communications director, brand manager or HR professional? Which end of the telescope should you start at – big or small – when dealing with data? Well, I’ll hand this over to an innovation expert I spoke to about this recently (who will no doubt be delighted to get the last word), who says: “Many of us today are finding that we have to do ever more with ever less. In this context, big data feels like a luxury that not everyone can afford. We tend to get better results with deep understanding of the “Right Data”. The most powerful insight can very often come from a single point of data, so by all means skim the surface to intuitively identify likely hot-spots, but then dive deep when you find them.”

Seems sensible to me.


p.s. If you’re still puzzled about the title of this post, you haven’t watched enough Father Ted.