More than ever, data is driving manufacturing. New technologies like AI and machine learning are processing massive volumes of raw data to produce thousands upon thousands more data points than have ever been available on the factory floor. 

And it’s exciting! More data paints a clearer picture of what’s happening on the assembly line, and helps manufacturers make better operational decisions. But because all of this data is relatively new, it’s not unusual for seasoned manufacturers to approach the numbers with a healthy dose of skepticism. And until your team trusts the data, they won’t be able to use it effectively.

To help you make the transition to a data-centered company, we’ve identified three common reasons why manufacturers who aren’t used to working with this much data hesitate to fully trust it. Once that trust is in place, the data will make your teams’ jobs easier and their work more effective.

Trust-blocker No. 1: The data is gathered and presented in a way that I don’t understand. I don’t even know what I’m looking at; it looks completely different from the data I’ve reviewed in the past.

Ten thousand data points present very differently from ten data points. There’s a lot more to reference, so the distribution of the data will probably look different from what you’re used to.

But that difference can tell a very important story. For example, sometimes the data forms a distribution other than the typical bell curve, something you wouldn’t have known to look for with a smaller sample. If it follows another type of distribution, like a fat-tailed curve, that changes the way you define outliers. Other times, the data can clearly highlight areas where bottlenecks are occurring, which may be unclear in other, less voluminous data presentations.
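As a quick illustration of why the distribution shape matters, here is a minimal Python sketch using simulated data (not factory data): the same “3 standard deviations” outlier rule flags far more points when the distribution has fat tails.

```python
import random
import statistics

random.seed(42)

def outlier_rate(samples, k=3.0):
    """Fraction of points more than k standard deviations from the mean."""
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    return sum(abs(x - mu) > k * sigma for x in samples) / len(samples)

n = 100_000
# A classic bell curve.
normal = [random.gauss(0, 1) for _ in range(n)]
# A simple fat-tailed stand-in: mostly the same bell curve,
# but 5% of points come from a much wider one.
fat_tailed = [random.gauss(0, 5 if random.random() < 0.05 else 1) for _ in range(n)]

rate_normal = outlier_rate(normal)
rate_fat = outlier_rate(fat_tailed)
print(f"3-sigma outlier rate, bell curve:  {rate_normal:.4f}")
print(f"3-sigma outlier rate, fat-tailed:  {rate_fat:.4f}")
```

Under these simulated numbers, the fat-tailed sample trips the 3-sigma rule several times more often than the bell curve does, which is why an outlier definition tuned on small, familiar datasets can’t be carried over unchanged.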

As data measuring and gathering techniques change, the data will, too. Just because it looks slightly different from what you’re used to doesn’t mean you should dismiss it. To become more comfortable with the data, start by understanding how it’s produced.

What is included in the new dataset versus the old? Where do your previously collected data points fit into the new picture? Knowing the background can help you better understand why the story being told may differ from previous stories.

Trust-blocker No. 2: With small datasets, I could manually check outliers for accuracy. Now that I have thousands more data points, there are proportionally more outliers, and I can’t check them all. So what if I make a decision based on an outlier that turns out to be an error?

You’re right; you can no longer double-check every outlier for accuracy (unless you want to stop doing anything else in life… and even then, it’s a stretch). With 20 data points, you might have four outliers, which you could easily double-check by going back out to the line. 

With 1,000+ data points, the mantra should be “trust but verify.” More data will naturally produce more outliers, but you can verify a sample of them and assume the rest are proportionally accurate. So if two out of 10,000 can’t be verified, you’ve likely got 99.98% data accuracy. And the good news is that more data naturally makes you more precise, the same way more context brings a picture into focus.
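In code, the “trust but verify” arithmetic is just extrapolation from a spot-check. A minimal Python sketch, with every count below made up purely for illustration:

```python
# Hypothetical counts for illustration only.
total_points = 10_000
flagged_outliers = 400   # outliers the system flagged
spot_check_size = 50     # how many you can realistically walk out and verify
verified_genuine = 49    # suppose 49 of the 50 checked out

# Extrapolate the spot-check result to all flagged outliers.
sample_accuracy = verified_genuine / spot_check_size           # 0.98
estimated_genuine = round(flagged_outliers * sample_accuracy)  # 392 of 400

# The 99.98% figure: 2 unverifiable points out of 10,000.
unverifiable = 2
overall_accuracy = (total_points - unverifiable) / total_points  # 0.9998
print(f"Estimated genuine outliers: {estimated_genuine}")
print(f"Overall accuracy: {overall_accuracy:.2%}")
```

The point isn’t the specific numbers; it’s that a small, affordable verification sample lets you put a defensible accuracy estimate on the whole dataset instead of checking every point by hand.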

Another key point to consider when verifying data is attribution; as long as you can properly attribute the data, you can be confident in its accuracy. Solutions like Drishti make it simple to spot-check and attribute the data using video.

Trust-blocker No. 3: The data is pointing to a completely different reality than we’ve assumed up to this point. Did my team make poor decisions in the past?

It may seem like you’re looking at completely different data from what you’re used to, but that isn’t the case. The data you’ve been gathering all along is still there; it’s part of the larger dataset in the system. But now it’s backed by thousands of other data points that help complete the picture and can serve as the basis for unexpected insights.

That said, you or your team may have made errors in the past simply because you had such limited data available. The key is not to blame people for decisions that, in retrospect, don’t look as sound as they did at the time. By focusing on improvements moving forward rather than assigning blame for past errors, you’ll find your team far more willing to accept new data than if they’re worried about being blamed because of it.

Many manufacturers have spent years figuring out how to operate with minimal data, so it makes sense to be skeptical of new, massive datasets. These tips will help them adjust to the new paradigm and keep their lines rolling – with greater productivity than ever.

Data helps drive factory ROI. Read how to get more analytics from the factory with Drishti.