[Image: line operators' hands on an assembly line]

Look at this beautiful image below. So tranquil. So ethereal.

You probably know what it is, right? A solitary ship sailing across a misty lake?


It’s a photo of the Sutro Tower in San Francisco, partially shrouded in fog.

As soon as you have more information, you can see a much bigger picture. And that allows you to draw much more accurate conclusions.

So it goes with data. The more data points you have, the more context you get. And the better decisions you can make.

But until now, that volume of data hasn’t existed for human assembly lines, and generations of industrial engineers have honed the art of making decisions with small, incomplete datasets. One of the challenges of the data explosion in manufacturing is, quite frankly, that nobody knows what to do with it all.

Industrial engineering in a data lake
There’s a tremendous difference in how the world looks when you have 10,000 data points versus 10 data points.  

Unfortunately, 10 data points has been the norm (if not the best-case scenario) since the dawn of industry. And manufacturers are so used to thinking in those terms that it can be world-shaking to think about exponentially increasing the information available to you.

Even the most data-driven manufacturers are accustomed to making big decisions with small datasets. But how would you do your job differently if, instead of having 10 time-and-motion studies to balance your lines, you had 10,000? How would that impact your ability to measure and plan?

Not only would you have better answers to your questions, you’d probably come up with a whole new set of questions.
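To make the 10-versus-10,000 contrast concrete, here's a minimal simulation sketch (all numbers invented for illustration: a hypothetical station with a true mean cycle time of 30 seconds). It shows how much an estimate based on 10 observations can swing compared to one based on 10,000:

```python
import random
import statistics

# Hypothetical illustration: estimating the mean cycle time (seconds) of a
# station whose true mean is 30 s with natural variability of +/- 5 s.
random.seed(42)

def estimate_mean(n_samples):
    """Average of n simulated cycle-time observations."""
    observations = [random.gauss(30.0, 5.0) for _ in range(n_samples)]
    return statistics.mean(observations)

# Repeat each "study" 200 times to see how much the estimate itself varies.
# The spread shrinks roughly as 1/sqrt(n).
small = [estimate_mean(10) for _ in range(200)]
large = [estimate_mean(10_000) for _ in range(200)]

print(f"10 samples:     estimate varies by +/- {statistics.stdev(small):.2f} s")
print(f"10,000 samples: estimate varies by +/- {statistics.stdev(large):.2f} s")
```

With 10 observations, two engineers running the same study could land more than a second apart on a 30-second cycle; with 10,000, their answers agree to a few hundredths of a second.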

Measurement and planning made more precise
Data helps manufacturers do two things really well: measure (how well is your line doing today?) and plan (how can you make your line do even better tomorrow?).

If you were to put 2018 World Series MVP and Red Sox slugger Mookie Betts in front of a pitching machine, he might hit 10 out of every 20 balls. But if you expected that level of performance on the field, you’d likely be disappointed: The batting cage doesn’t have a crowd cheering. It doesn’t have any pressure associated with a hit or miss.

In the real world, Mookie might only hit 7 out of 20 balls: Maybe those jeers got to him, or maybe he’s mentally blocked because of the imposing Green Monster. Or maybe he’s on game five of a home-and-home series and his muscles are sore.

The point is this: If you only take measurements when all of the variability is controlled (e.g., the batting cage), your ability to make informed decisions about uncontrolled situations (e.g., the bottom of the ninth) is greatly compromised.

So having vastly greater quantities of data points (“data lakes,” as they’re known in the tech world) across many more scenarios empowers you to measure and predict with much greater accuracy.



Episodic measurement vs. continuous
It goes deeper than just the volume of data points available (as important as that is). It’s also about the frequency of data collection. There are two key reasons why continuous data collection provides manufacturers with significantly more accurate data than episodic collection:

  1. Episodic measurement inherently creates a bias. Think of it this way: You’re driving around your neighborhood when a state trooper pulls out behind you. If you’re like most people, you become hyper-focused on everything you do under the trooper’s gaze. You brake more carefully, you over-use your turn signals, you keep your hands firmly on “ten” and “two.” So it goes on the line: As soon as you whip out that stopwatch, your operators change how they do their work. And the data they give you doesn’t reflect the real world. (We did some research on time and motion studies and the bias they introduce with A.T. Kearney; take a look!)
  2. Continuous measurement ensures you collect every data point, not just the typical ones. If you go out on the line and measure cycle time and units per hour once a day for a week, chances are you’re going to capture much of the same data. You are likely to miss the poor performance by the operator who got in a quarrel with her teenager this morning, or the unusually fast performance by the new guy who just joined third shift last week. You may be tempted to say “Data outliers happen, what matters is found in the mean data,” but outliers tell an important story – one that can easily be lost when you’re only measuring episodically.
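Both effects can be sketched in a few lines of simulation. Everything here is a hypothetical illustration, not real line data: a week of cycle times with one genuine outlier per day (say, a jammed fixture), measured continuously versus with a short morning stopwatch study during which operators speed up slightly under observation:

```python
import random

# Hypothetical sketch: one work week of cycle times (seconds), ~500 cycles/day,
# baseline ~30 s. Numbers are invented for illustration.
random.seed(7)

week = []
for day in range(5):
    day_cycles = [random.gauss(30.0, 2.0) for _ in range(500)]
    day_cycles[300] = 55.0  # one genuine outlier per day, e.g. a jammed fixture
    week.append(day_cycles)

# Continuous collection: every cycle, every day.
continuous = [t for day in week for t in day]

# Episodic collection: a 20-cycle stopwatch study each morning, during which
# operators work ~5% faster under observation (a Hawthorne-style bias).
episodic = [t * 0.95 for day in week for t in day[:20]]

print(f"continuous max: {max(continuous):.1f} s  # outliers captured")
print(f"episodic max:   {max(episodic):.1f} s  # outliers missed, mean biased low")
```

The episodic sample never sees the 55-second cycles at all, and its average sits below the true one, which is exactly the combination of bias and missing outliers described above.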

Outliers are so important, in fact, that we’ll spend another blog post discussing them. Meanwhile, the lesson is clear: More data points are to your benefit, and with them you can greatly improve measurement, planning and accuracy. And the next time you drive by the Sutro Tower, I dare you not to see a ship.

Our data team is expanding. Interested in helping manufacturers harvest and analyze more data? Check out our careers page.

