## The Fallacies of #BigData

The biggest problem with software is that it doesn’t do us any good at all unless our wetware is working properly – and unfortunately, the wetware which resides between our ears is limited, fallible, and insists on a good Chianti every now and then.

Improving our information technology, alas, only exacerbates this problem. Case in point: Big Data. As we’re able to collect, store, and analyze data sets of ever increasing size, our ability to understand and process the results of such analysis putters along, occasionally falling into hidden traps that we never even see coming.

I’m talking about *fallacies*: widely held beliefs that are nevertheless quite false. While we like to think of ourselves as creatures of logic and reason, we all fall victim to misperceptions, misjudgments, and miscalculations far more often than we care to admit, often without even realizing we’ve lost touch with reality. Such is the human condition.

Combine our natural proclivity to succumb to popular fallacies with the challenge of getting our wetware around just how big Big Data can be, and you have a recipe for disaster. But the good news is that there is hope. The best way to avoid an unseen trap in your path is to know it’s there. Fallacies are easy to avoid if you recognize them for what they are before they mislead you.

**The Lottery Paradox**

The first fallacy to recognize – and thus, to avoid – is the lottery paradox. The lottery paradox states that people place an inordinate emphasis on improbable events. Nobody would ever buy a lottery ticket if they based their decision to purchase on the odds of winning. As the probability of winning drops to extraordinarily low levels (for example, the chance of winning the Powerball jackpot is less than 1 in 175,000,000), people simply lose touch with the reality of the odds.

Furthermore, it’s important to note that the chance *someone* will win the jackpot is relatively high, simply because so many tickets are sold. People erroneously conflate these two probabilities as though they were comparable: “someone has to win, so why not me?” we all like to say, as we shell out our $2 per ticket. If tens of millions of people were to read this article (I should be so lucky!), then it would be somewhat likely that *some* member of this impressive audience will win the lottery. But sorry to say, it won’t be *you*.
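The gap between these two probabilities is easy to compute. A minimal sketch, using the Powerball-style odds above and an assumed 20 million tickets sold (an illustrative figure, not from the article):

```python
# Illustrative figures: ~1-in-175-million odds, 20 million tickets sold.
p_win = 1 / 175_000_000        # chance any single ticket wins
tickets_sold = 20_000_000      # assumed number of tickets in play

# Chance that *your* one ticket wins: vanishingly small.
p_you = p_win

# Chance that *someone* wins: 1 minus the chance that every ticket loses.
p_someone = 1 - (1 - p_win) ** tickets_sold

print(f"P(you win)      = {p_you:.2e}")   # on the order of 1e-08
print(f"P(someone wins) = {p_someone:.1%}")  # roughly 1 in 10
```

A one-in-ten chance that *somebody* wins, against a roughly one-in-175-million chance that *you* do – conflating the two is the whole trap.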

The same fallacy can crop up with Big Data. As the size of Big Data sets explodes, the chance of finding a *particular* analytical result – in other words, a “nugget of wisdom” – becomes increasingly small. However, the chance of finding *some* interesting result is quite high. Our natural tendency to conflate these two probabilities can lead to excess investment in the expectation of a particular result. And when we don’t get the result we’re looking for, we wonder whether we’ve wasted all the money we sank into our Big Data tools.

Another way of looking at the lottery paradox goes by the name of the law of truly large numbers. Essentially, this law states that if your sample size is very large, then any outrageous thing is likely to happen. And with Big Data, our sample sizes can be truly enormous. With the lottery example, we have a single outrageous event (I win the lottery!), but in a broader context, *any* outrageous result will occur as long as your data sets are large enough. But just because we’re dealing with Big Data doesn’t mean that outrageous events are any more likely than before.
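The law of truly large numbers falls out of the same arithmetic. A sketch, assuming a hypothetical one-in-a-million “outrageous” result per record:

```python
# Assumed probability of a one-in-a-million fluke in any single record.
p_rare = 1e-6

for n in (1_000, 1_000_000, 1_000_000_000):
    expected = n * p_rare                       # expected number of flukes
    p_at_least_one = 1 - (1 - p_rare) ** n      # P(at least one fluke)
    print(f"n = {n:>13,}: expect {expected:>8.3f} flukes, "
          f"P(>=1) = {p_at_least_one:.1%}")
```

At a thousand records the fluke almost never appears; at a billion records you should *expect* about a thousand of them. The event didn’t get more likely per record – you just bought more tickets.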

**The Fallacy of Statistical Significance**

Anybody who’s ever wondered how political pollsters can draw broad conclusions of popular opinion based upon a small handful of people knows that statistical sampling can lead to plenty of monkey business. Small sample sizes lead to large margins of error, which in turn can lead to statistically insignificant results. For example, if candidate A is leading candidate B by 2%, but the margin of error is 5%, then the 2% is insignificant – there’s a very good chance the 2% is the result of sampling error rather than a reflection of the population at large. For a lead to be significant, it has to be larger than the margin of error. So if candidate A is leading by, say, 7%, we can be reasonably sure that lead reflects the true opinion of the population.

So far so good, but if we add Big Data to the mix, we have a different problem. Let’s say we up the sample size from a few hundred to a few million. Now our margins of error are a fraction of a percent. Candidate A may have a statistically significant lead even if it’s 50.1% vs. 49.9%. But while a 7% lead might be difficult to overcome in the few weeks leading up to an election, a 0.2% lead could easily be reversed in a single day. Our outsized sample size has led us to place too much stock in the notion of *statistical* significance, because it no longer relates to how we define significance in a broader sense.
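You can watch the margin of error collapse as the sample grows. A sketch using the standard 95% margin-of-error formula for a proportion near 50% under simple random sampling (sample sizes are illustrative):

```python
import math

# 95% margin of error for a proportion: MOE ~ 1.96 * sqrt(p * (1 - p) / n).
p = 0.5  # worst case, a 50/50 split
for n in (400, 1_000, 4_000_000):
    moe = 1.96 * math.sqrt(p * (1 - p) / n)
    print(f"n = {n:>9,}: margin of error ~ +/-{moe:.2%}")
```

At a few hundred respondents the margin is around five points; at four million it is five *hundredths* of a point – which is exactly how a meaningless 0.2% lead becomes “statistically significant.”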

The way to avoid this fallacy is to make proper use of sampling theory: even when you have immense Big Data sets, you may want to take random samples of a manageable size in order to obtain useful results. In other words, *fewer* data can actually be better than *more* data. Note that this sampling approach flies in the face of exhaustive processing algorithms like the ones that Hadoop is particularly good at, which are likely to lead you directly into the fallacy of statistical significance.
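A minimal sketch of the down-sampling approach, with made-up stand-in data: instead of crunching every record exhaustively, draw a manageable random sample and estimate from that.

```python
import random

random.seed(42)  # for reproducibility in this sketch
big_data = range(100_000_000)             # stand-in for a huge record set
sample = random.sample(big_data, 10_000)  # manageable random sample

# Estimate a statistic on the sample rather than the full set.
estimate = sum(sample) / len(sample)
print(f"sample mean ~ {estimate:,.0f}")   # close to the true mean of ~50,000,000
```

Ten thousand records stand in for a hundred million, and the estimate lands within a fraction of a percent of the true value – *fewer* data doing the work of *more*.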

**Playing with Numbers**

Just as people struggle to grok astronomically small probabilities, people also struggle to get their heads around very large numbers. Inevitably, they end up resorting to some wacky metaphor involving an astronomical comparison – stacks of pancakes to the moon or some such. Such metaphors can help people understand large numbers – or they can simply confuse or mislead people about them. Add Big Data to the mix and you suddenly have the power to sow misinformation far and wide.

Take, for example, the NSA. In a document released August 9th, the NSA explained:

*According to the figures published by a major tech provider, the Internet carries 1,826 Petabytes of information per day. In its foreign intelligence mission, NSA touches about 1.6% of that. However, of the 1.6% of the data, only 0.025% is actually selected for review. The net effect is that NSA analysts look at 0.00004% of the world’s traffic in conducting their mission – that’s less than one part in a million. Put another way, if a standard basketball court represented the global communications environment, NSA’s total collection would be represented by an area smaller than a dime on that basketball court.*

Confused yet? Let’s pick apart what this paragraph is actually saying, and you be the judge. The NSA claims to be analyzing 1.6% of 1,826 petabytes per day, which works out to about 29 petabytes – call it 30,000 terabytes – per day. (29 petabytes per day also works out to over 10 exabytes per year. Talk about Big Data!)

When they say they select 0.025% (one fortieth of a percent) of this 30,000 terabytes per day for review, what they’re saying is that their automated Big Data crunching analysis algorithms give them 7.5 terabytes of *results* to process manually, *every day*. To place this number into context, assume that those 7.5 terabytes consisted entirely of telephone call detail records, or CDRs. Now, we know that the NSA is analyzing far more than CDRs, but we can use CDRs to do a little counter-spin of our own. Since a rule of thumb is that an average CDR is 200 bytes long, 7.5 terabytes represents records of roughly 37.5 billion (37,500,000,000) phone calls – about five phone calls per day for every person on earth.
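The arithmetic above is easy to rerun yourself. A sketch taking the NSA’s quoted figures at face value, and assuming decimal units (1 PB = 1,000 TB = 10^15 bytes), a 200-byte average CDR, and a rough 7-billion world population:

```python
# Quoted figures from the NSA statement.
internet_pb_per_day = 1826
touched_pb = internet_pb_per_day * 0.016         # ~29.2 PB "touched" daily
reviewed_tb = touched_pb * 1_000 * 0.00025       # ~7.3 TB selected for review

# Counter-spin: express the reviewed volume as call detail records.
avg_cdr_bytes = 200                              # rule-of-thumb CDR size
calls = reviewed_tb * 10**12 / avg_cdr_bytes     # ~36.5 billion call records
world_population = 7_000_000_000                 # rough figure, assumed

print(f"reviewed per day:  {reviewed_tb:.1f} TB")
print(f"equivalent CDRs:   {calls:.3e}")
print(f"calls/person/day:  {calls / world_population:.1f}")
```

Note that the exact output depends on whether you round 29.2 petabytes up to 30,000 terabytes as the text does; either way the order of magnitude – tens of billions of records, a handful per person per day – is the same.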

So, which is a more accurate way of looking at the NSA data analysis: a dime on a basketball court, or five phone calls per day for each man, woman, and child on the planet? The answer is that both comparisons are skewed to prove a point. You should take any such explanation of Big Data with a Big Data-sized grain of salt.

**The ZapThink Take**

Perhaps the most pernicious fallacy surrounding Big Data is the “more is better” fallacy: the false assumption that if a certain quantity of data is good, then more data are necessarily better. In reality, more data can actually be a bad thing. You may be encouraging the creation of duplicate or incorrect data. The chance your data are redundant goes way up. And worst of all, you may be collecting increasing quantities of irrelevant data.

In our old, “small data” world, we were careful what data we collected in the first place, because we knew we were using tools that could only deal with so much data. So if you wanted, say, to understand the pitching stats for the Boston Red Sox, you’d start with only Red Sox data, not data from all of baseball. But now it’s all about Big Data! Let’s collect everything and anything, and let Hadoop make sense of it all!

But no software, not even Hadoop, can make sense out of *anything*. Only our wetware can do that. As our Big Data sets grow and our tools improve, we must never lose sight of the fact that our ability to understand what the technology tells us is a skill set we must continue to hone. Otherwise, not only are the data fooling us, but we’re actually fooling ourselves.

*Image credit: _rockinfree*