The Human Context Hidden Behind Industrial Data You Can't Ignore


In this episode of Unplugged: An IIoT Podcast, host Phil Seboa sits down with Bob van de Kuilen, CEO and co-founder of Thred (thredcloud.com), to explore a question that sits at the heart of every industrial data initiative: what makes data meaningful, and for whom? Bob brings an unusual background to the IIoT space, combining social anthropology with years of management consulting for Fortune 500 companies. That perspective shapes a conversation covering the difference between machine context and human context, why knowledge graphs go beyond what the Unified Namespace can deliver on its own, and the practical gap between knowing what to do with industrial data and actually doing it.
Bob studied social anthropology and sociology at university before spending years working with Fortune 500 companies in the Netherlands, often standing on oil platforms and walking through large factories. He eventually moved to New Zealand, where he started a management consulting business focused on organizational performance and change. That trajectory gave him a clear view of something the industrial data world tends to overlook: context is not a single thing. It depends entirely on who needs the information and what they plan to do with it.
Industrial data is high-volume, high-velocity, and very raw. Machine context (the metadata, ontologies, and standards that organize that raw data) is essential. Without it, you have noise. But Bob argues that machine context alone is insufficient. "The challenge is: meaningful for whom?" he explains. Human context is dynamic. It involves not knowing what you do not know, connecting abstract ideas across domains, and relying on social interaction to surface knowledge that no schema can capture.
Bob offers a vivid example. When a machine on the factory floor breaks down, the real knowledge about how to fix the problem often sits with one person. Johnny, the veteran technician who just retired, knew exactly what to do because he had seen that failure mode fifteen years ago. That institutional knowledge is not stored in any database or metadata tag. Paying attention to human context is what truly unlocks the value of industrial data.
The Unified Namespace has become an important stepping stone for organizing chaotic OT data, and Bob gives it credit for that. But he is direct about its limitations. UNS is hierarchical and linear. You can navigate up and down a tree structure, but you cannot easily go wide or make the cross-connections that reflect how real industrial knowledge works.
Knowledge graphs, Bob explains, function more like the cerebral cortex. They store associative memories and relationships. He uses a language analogy to make the point concrete. A child might learn 10, 20, or 50 words. An adult knows somewhere between 100,000 and 120,000 words, all interconnected in a web of meaning. Without knowledge graphs, you cannot build that kind of sophisticated vocabulary for your industrial data. You are stuck at the vocabulary of a child.
The practical implications are significant. Consider a single photo eye sensor on a production line. For the supply chain team, that sensor is counting work in progress. For the reliability engineer, it is tracking running hours for predictive maintenance. For the production manager, it feeds the OEE KPI. A knowledge graph layers all of these perspectives onto the same data point, allowing each stakeholder to derive their own meaning without duplicating or restructuring the underlying data. Bob presented this concept at the ProveIT conference.
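The photo eye example can be sketched in code. Below is a minimal, library-free illustration of the idea: the graph stores relationships as subject-relation-object triples, and each stakeholder recovers their own view of the same sensor without the data being duplicated. All node and relation names here are invented for illustration; they are not Thred's actual schema.

```python
# One physical sensor, three stakeholder perspectives, stored as
# (subject, relation, object) triples in a tiny in-memory graph.
edges = [
    ("photo_eye_07", "counts", "work_in_progress"),
    ("work_in_progress", "belongs_to", "supply_chain"),
    ("photo_eye_07", "accumulates", "running_hours"),
    ("running_hours", "belongs_to", "reliability"),
    ("photo_eye_07", "feeds", "oee_kpi"),
    ("oee_kpi", "belongs_to", "production"),
]

def sensors_for(stakeholder):
    """Walk back from a stakeholder's domain to the sensors that serve it."""
    concepts = {s for s, r, o in edges if r == "belongs_to" and o == stakeholder}
    return {s for s, r, o in edges if o in concepts}

# The same data point answers three different questions:
print(sensors_for("supply_chain"))  # WIP counting
print(sensors_for("reliability"))   # predictive maintenance hours
print(sensors_for("production"))    # OEE feed
```

Each query lands on the same underlying node, which is the point: meaning is layered on through relationships rather than by restructuring the data for each team.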
The real-world results speak for themselves. One client faced a hydraulic issue that took over a week to diagnose. Using a knowledge graph combined with Claude, Bob's team traversed the relationships through PLC code and P&IDs and found the root cause, a contaminated proportional valve, in three minutes. In another case, an operator claimed that humidity caused board drops on a production line. Others dismissed the idea. By connecting knowledge graph data with weather station readings, the team proved that when humidity exceeded 67%, board drops tripled. Between 45% and 55% humidity, more than 80% of the issues were eliminated. That finding annualized to a half-million-dollar business opportunity.
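The humidity finding is, at its core, a simple bucketed comparison once the graph has connected line events to weather readings. The sketch below shows the shape of that analysis on synthetic data; every number in the `readings` list is invented for illustration and does not reproduce the client's actual measurements.

```python
# Synthetic (humidity %, board drops per shift) pairs, invented for
# illustration only.
readings = [(42, 1), (48, 0), (52, 1), (58, 2), (63, 2), (70, 6), (72, 7), (75, 6)]

def mean_drops(lo, hi):
    """Average drops per shift for shifts whose humidity fell in [lo, hi)."""
    vals = [d for h, d in readings if lo <= h < hi]
    return sum(vals) / len(vals)

comfortable = mean_drops(45, 55)   # the 45-55% band Bob describes
humid = mean_drops(67, 101)        # above the 67% threshold
print(comfortable, humid)
```

The real work, as the episode makes clear, was not the arithmetic but the graph connections that made the operator's hunch testable at all.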
Bob references Jeffrey Pfeffer's book "The Knowing-Doing Gap" to frame a challenge he sees constantly in manufacturing. Factories know lean manufacturing. They know Six Sigma. They know root cause analysis. The theory is not the problem. The doing is where the gap lives.
He identifies three common mistakes organizations make when they try to close that gap. The first is going too big: trying to boil the ocean kills momentum. Bob's advice is to start small, build context around a focused problem, and let the early wins create the foundation for everything that follows. The second mistake is not being cross-functional enough. Digital transformation is not just IT-OT convergence. It means bringing radical new technology to human beings who work in complex social systems. The third mistake is reductionist thinking: failing to appreciate the scale of change that is possible. Bob draws a parallel to electrification between 1870 and 1920, which transformed society from predominantly rural to urban. The potential of today's industrial data revolution is comparable, but only if organizations think broadly enough.
Bob also delivers a pointed warning about large language models. Do not just throw an LLM over raw data and expect useful results. You will get excitement initially as teams discover what questions the model can answer. But when you start asking "why" questions, the hallucinations begin. LLMs, Bob says, are like "overly confident teenagers." The knowledge graph serves as the bridge between the language reasoning capabilities of an LLM and the relationships embedded in raw industrial data. Without that bridge, you get fluent nonsense instead of actionable insight.
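The "bridge" pattern Bob describes resembles what is often called graph-grounded retrieval: before the model is asked a "why" question, the relationships the graph actually knows are pulled into the prompt so the model reasons over real connections instead of inventing them. The sketch below shows only the grounding step; the triples and the asset names are assumptions made up for this example, and no real LLM call is made.

```python
# Illustrative asset relationships; in practice these would come from
# the knowledge graph, traversing PLC code references and P&IDs.
edges = [
    ("valve_12", "controlled_by", "plc_block_FB204"),
    ("valve_12", "appears_on", "pid_sheet_7"),
    ("valve_12", "part_of", "hydraulic_circuit_B"),
]

def graph_context(asset):
    """Render the graph's facts about an asset as plain-text prompt context."""
    facts = [f"{s} {r.replace('_', ' ')} {o}" for s, r, o in edges if s == asset]
    return "Known relationships:\n" + "\n".join(facts)

# The grounded prompt: facts first, question second.
prompt = graph_context("valve_12") + "\n\nWhy might pressure drop on hydraulic_circuit_B?"
print(prompt)
```

Without the `graph_context` preamble the model has nothing but its priors to answer from, which is where, in Bob's phrase, the overly confident teenager takes over.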
His advice to leaders is straightforward. Talk to the right people, especially the OT professionals who hold the real knowledge about your assets. And follow the principle of "festina lente," the Latin phrase meaning "make haste slowly." Move with purpose, but do not rush past the foundational work that makes everything else possible.
"Imagine if you could accelerate that -- turn a two week cycle down to two minutes. That's game changing." -- Bob van de Kuilen
Machine context is necessary but not sufficient. Metadata and data standards organize raw industrial data, but the real value is unlocked when you account for human context, the dynamic, experiential knowledge that lives in people's heads and social interactions, not in databases.
Knowledge graphs extend the UNS, not replace it. The Unified Namespace provides a hierarchical starting point for organizing OT data, but knowledge graphs add the associative, cross-functional relationships that let a single data point serve the supply chain team, the reliability engineer, and the production manager simultaneously.
Start small, stay cross-functional, and build the bridge before deploying AI. Digital transformation stalls when organizations go too big too fast, stay siloed, or skip the contextual foundation. A knowledge graph between your LLM and your raw data prevents the hallucinations that undermine trust and derail projects.
If you are leading a digital transformation effort in manufacturing, this conversation offers a clear starting point. Audit where human context currently lives in your organization and how much of it is at risk of walking out the door when experienced operators retire. Identify one focused use case where a knowledge graph could connect data across functional boundaries. Resist the temptation to deploy AI directly over raw data without building the relational scaffolding first. And above all, involve OT professionals early and often. They hold the knowledge that makes every other technology investment worthwhile.
Bob van de Kuilen is the CEO and co-founder of Thred (thredcloud.com), an industrial strategist working at the intersection of IT, operational technology, and industrial transformation. With a background in social anthropology and management consulting for Fortune 500 companies across Europe and New Zealand, Bob brings a unique perspective connecting human meaning and organizational behavior to industrial data and knowledge systems.
PLCnext Technology is the open ecosystem for industrial automation from Phoenix Contact. It brings together open hardware, modular engineering software, a global community, and a digital software marketplace to bridge the worlds of IT and OT.
Digitalization and globalization are placing new demands on industrial automation. The precisely tailored design of the open automation system is just as important as flexible, modular expansion. In addition to standard programming of PLC systems in accordance with IEC 61131-3, PLCnext Control also supports parallel programming and the combination of programming languages such as C/C++, C#, and Matlab® Simulink® in real time. Accelerate your application development process with the free basic version of PLCnext Engineer, or use your familiar programming environment.
With simple cloud integration, the option to use open source software, and the ever-expanding expertise of the PLCnext Community, you will benefit from new forms of collaboration. The resulting solution apps, software modules, runtime systems, and function extensions are available in the PLCnext Store and save an enormous amount of time and money when creating applications. This makes PLCnext Technology the ideal ecosystem for your modern automation challenge.
Discover how FlowFuse empowers you to build, deploy, and scale industrial automation -- your way.
Visit us at Hannover Messe at Hall 014 Stand K26 and experience live demonstrations of FlowFuse connecting the entire industrial stack -- from PLCs on the shop floor to MES, ERP, and cloud services -- enabling real-time industrial connectivity, data integration, and AI-powered operations.
Let's transform industrial data together -- live, integrated, and in real time.