The “Use Case” Fallacy in Data Science Strategies

Note: As a standard practice, I will include a plagiarism detection link in all the articles on my blog site. You can use the online tool below to evaluate any article published on the web for plagiarized content. All you have to do is paste the URL of the article there, and it will go through every sentence in the article and flag any plagiarism. As a follower of “writer’s integrity,” I believe I should include this tool in all my articles:


“Use Case” vs “Execution”

So I was recently in a discussion with fellow analytics professionals where I suggested that one of the most important bottlenecks we currently face in implementing IIoT-enabled analytics solutions in warehouses is the fragmentation of systems. We have Warehouse Management Systems (WMS), Warehouse Control Systems (WCS), Building Automation Systems (BAS), and a bunch of others, each operating in its own silo.

And in my mind, the most important aspect of implementation is standardization: ensuring that these systems integrate and talk to each other, to create a single source of truth.
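To make the idea concrete, here is a minimal sketch of what “standardization into a single source of truth” can mean in practice: mapping records from siloed systems onto one canonical schema. All system names and field names here are hypothetical, purely for illustration.

```python
# Two siloed systems describing the same inventory event with
# different vocabularies (hypothetical field names).
wms_record = {"sku_id": "A-100", "qty_on_hand": 42, "loc": "DOCK-3"}
wcs_record = {"item": "A-100", "count": 40, "position": "DOCK-3"}

# Per-system mapping from native field names to a canonical schema.
FIELD_MAPS = {
    "WMS": {"sku_id": "sku", "qty_on_hand": "quantity", "loc": "location"},
    "WCS": {"item": "sku", "count": "quantity", "position": "location"},
}

def standardize(record, system):
    """Rename a raw record's fields to the canonical schema."""
    mapping = FIELD_MAPS[system]
    out = {canonical: record[native] for native, canonical in mapping.items()}
    out["source_system"] = system  # keep lineage for reconciliation
    return out

unified = [standardize(wms_record, "WMS"), standardize(wcs_record, "WCS")]
print(unified)
```

Once both systems speak the same vocabulary, downstream analytics query one schema instead of many, and discrepancies between sources (42 vs. 40 units here) surface immediately instead of hiding inside incompatible formats.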

And then one of my peers pointed out that we don’t need to wait for data to be standardized to identify good use cases. And they were right.

But we have to keep in mind that use cases only identify opportunities; the eventual, sustainable benefits come only after you build the analytics capability on a solid foundation. We tend to forget that, and this is where I believe we fall into “the use case fallacy.”

What exactly is the Use Case Fallacy?

If you have worked on quantifying business value from use cases, you should be aware of a couple of points:

(1) You probably used an ad-hoc data lake for developing the solution, with the assumption that the input data source will get standardized and automated at some point. This data lake probably pulled data from multiple systems. When executing the use case, you “assume” an automated, sustainable capability in the future.

(2) Even if the data came from one system, and you identified a cost savings opportunity, keep in mind that in an “implemented” state, the system that is the source of the data will interact and interface with other systems in the future. Data standardization MUST therefore happen at some point, irrespective of the system you are leveraging.

The use case fallacy is overlooking this very important point: for the eventual realization of any benefits identified during a use case, you HAVE TO build certain capabilities in your data infrastructure and pipelines.

So how can we avoid this?

The most important aspect is to:

Understand the difference between identifying an opportunity through a Use Case and an eventual, sustainable implementation of that capability.

And that is why, data standardization needs to be an important consideration in developing business cases.

It should be highlighted clearly in business case documents that the eventual execution of the identified opportunity, and the underlying analytics capability, will rest on a foundation of standardized data source(s).

The high-level, future-state standardization and automation requirements assumed in the use case need to be explicitly called out. This section is a MUST HAVE in every use case results document or presentation.

Views expressed are my own.
