Digital twin tech makes connections in places we didn’t think connection could exist

We live in an age where data is the transformative currency. Yet even as our industrial processes gather data at an astonishing rate, according to market analyst Gartner as much as 90% of it never gets used.

So-called ‘dark data’ is the information that organisations gather from digital assets about all aspects of activity and processes, and then basically store in the cupboard under the stairs with little prospect of it ever being used. Just as dark matter in physics makes up most of the universe, dark data is the unseen bulk of the industrial ecosystem. Usually retained for compliance purposes only, this data is a security risk and a drain on resources, often costing more to store than the value it creates.

Tim Berners-Lee famously said that data is “a precious thing and will last longer than the systems themselves”. What he didn’t predict is just how much of that data we would waste by never converting it into information. Data is only as useful as its ability to connect with other data to produce information that is greater than the sum of its parts, can be used in a meaningful way and is clearly understood.

Technology serving a data vacuum isn’t worth much to the organisation that has invested in it: technology that interacts with other technology – the digital twin – is infinitely more valuable. While the success of a digital twin is ultimately measured by the depth and span of its connections, one of the key benefits of this technology is its ability to ‘talk to itself’. In this way, it creates a network of data that provides valuable information about the world at large.


Within this information jigsaw puzzle, the digital twin can be seen as a piece of data that offers a way to describe other data, one of the key steps to putting it to use. Data about other data is called metadata: it elaborates on what we already know and allows us to extract more information from the data we’re examining. For example, a digital twin can illustrate key information such as how frequently data is being shared, where data is coming from, how it is being measured and what metrics are being used. Think of it this way: for centuries we have had windmills, and for most of that time all we really knew about them was where they were and what they did. It stands to reason that if you combine and analyse (rather than just store) data on how much wind is being harvested, speed, volume, output, efficiency and downtime, you’ll get more from your windmill.
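To make the windmill example concrete, here is a minimal sketch in Python of a twin that keeps metadata alongside its readings. The names (WindTurbineTwin, FeedMetadata) and the feeds are invented for illustration rather than drawn from any particular platform, but they show how ‘data about the data’ travels with the data itself:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch only: the twin holds not just readings, but metadata
# describing where each reading comes from and how to interpret it.

@dataclass
class FeedMetadata:
    source: str              # where the data originates, e.g. an anemometer or historian
    update_interval_s: int   # how frequently new values are shared
    unit: str                # the metric in use, e.g. "kW" or "m/s"

@dataclass
class WindTurbineTwin:
    twin_id: str
    readings: dict[str, float] = field(default_factory=dict)
    metadata: dict[str, FeedMetadata] = field(default_factory=dict)
    last_updated: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def describe(self, feed: str) -> str:
        """Return the data *about* the data: its source, cadence and unit."""
        m = self.metadata[feed]
        return f"{feed}: {self.readings.get(feed)} {m.unit} from {m.source}, every {m.update_interval_s}s"

# Combining readings with their metadata is what turns stored numbers into information.
turbine = WindTurbineTwin(
    twin_id="windmill-07",
    readings={"wind_speed": 11.2, "power_output": 1850.0},
    metadata={
        "wind_speed": FeedMetadata("anemometer", 10, "m/s"),
        "power_output": FeedMetadata("SCADA historian", 60, "kW"),
    },
)
print(turbine.describe("power_output"))
```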

Digital twinning helps to maximise these connections: the more information a digital twin provides about the data at hand, the clearer the usefulness of that data becomes. It works in a similar way to writing an article based on open-source research. If you use only one online resource that subsequently proves to be unreliable, then everything extrapolated from that source is contaminated and is ‘fruit from the poisonous tree’. The more research sources you use, the better your chances of producing a reliable article. The odds become better still if you can establish the reliability of each source itself. Working on this principle, a digital twin is the scholarly article of a digital ecosystem. Each layer of data or information builds confidence in the value and competency of the data you’re assessing.

What makes digital twin technology such a powerful tool is its ability to describe itself. The more data it provides, the better understood it becomes, which allows it to interoperate successfully within a network of other twins. This network creates a data mesh that allows a spectrum of data sources to communicate with one another, share data securely and ultimately produce new data in an ongoing, adaptable and compounding cycle.
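As a rough illustration of that self-description, the sketch below (again with invented names, and a deliberately simplified registry standing in for a real mesh) shows twins publishing an account of what they offer so that other twins can find and use them without prior integration:

```python
# Toy sketch of self-description and discovery in a mesh of twins.
# The registry, tags and search logic are illustrative assumptions,
# not a description of any specific product's API.

class Twin:
    def __init__(self, twin_id: str, description: dict[str, str]):
        self.twin_id = twin_id
        self.description = description  # the twin's own account of what it offers

class Mesh:
    def __init__(self):
        self._twins: list[Twin] = []

    def publish(self, twin: Twin) -> None:
        """A twin joins the mesh by publishing its self-description."""
        self._twins.append(twin)

    def find(self, **criteria: str) -> list[Twin]:
        """Other twins locate it by what it says about itself, not by prior wiring."""
        return [
            t for t in self._twins
            if all(t.description.get(k) == v for k, v in criteria.items())
        ]

mesh = Mesh()
mesh.publish(Twin("turbine-07", {"asset": "wind turbine", "provides": "power_output", "unit": "kW"}))
mesh.publish(Twin("substation-02", {"asset": "substation", "provides": "grid_load", "unit": "MW"}))

# A forecasting twin can discover any power source without knowing it in advance.
for match in mesh.find(provides="power_output"):
    print(match.twin_id, match.description["unit"])
```

The point of the sketch is that discovery runs off what each twin says about itself, so new twins can join the mesh without anyone rewiring the consumers that use them.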

We’ve talked about measuring the success of a digital twin and how it is only as strong as its connections to other data and, ultimately, its ability to describe that data. But the endpoint is this: empowered by connection and strengthened through detailed communication, digital twins can bridge the gaps between parts of an organisation where you wouldn’t normally seek connection.

No organisation does one thing only, but too often they operate as though they do. In its simplest representation, a manufacturer will have a production line along with systems for regulating that process. It might also have an onsite R&D team designing the item to be manufactured. These three areas will have their own sets of systems and data, but of course real ecosystems are far more complex. They will be set up to operate independently, yet far more could be achieved if they were digitally interconnected. How do they talk to each other? How do they learn from each other? How do they connect? The answer to each of these questions is the digital twin and the resulting data mesh that fills the gaps between functions, where each function is its own data point. Twin technology creates a network between them, unlocking the full potential of data that works together as a federation. The three discrete data sources integrate into a web of sources that has the inbuilt semantics necessary to understand the three separate inputs.
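To show what that federation might look like in practice, here is a deliberately small sketch. The field names, mappings and values are hypothetical; the idea is that each function keeps its own systems untouched while a thin layer of shared semantics lets their data be read together:

```python
# Sketch of the federation idea: three functions keep their own systems and
# field names, and a small semantic mapping lets their data be read as one.
# All names and values below are invented for illustration.

production = {"line_speed": 42.0, "units_built": 913}
control    = {"setpoint_temp_C": 180.0, "actual_temp_C": 178.4}
research   = {"design_rev": "B3", "target_units_per_hour": 60}

# Shared vocabulary: each function's local term mapped to a common one.
semantics = {
    "production": {"units_built": "throughput"},
    "control":    {"actual_temp_C": "process_temperature"},
    "research":   {"target_units_per_hour": "target_throughput"},
}

def federate(sources: dict, mapping: dict) -> dict:
    """Project each source onto the shared vocabulary without changing the source itself."""
    combined = {}
    for name, data in sources.items():
        for local_key, common_key in mapping.get(name, {}).items():
            if local_key in data:
                combined[common_key] = data[local_key]
    return combined

view = federate(
    {"production": production, "control": control, "research": research},
    semantics,
)
# The R&D target and the production actual now sit side by side and can be compared.
print(view["throughput"], "built vs target", view["target_throughput"], "per hour")
```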

The potential for collaboration is limitless as it follows the threads of a network that weaves its way through your business – customer base, supply chain, control systems – and fills the intangible space created by holes in that net. Everything has the potential to operate as one cohesive unit without needing to teach staff or systems how to be specialists in multiple areas. Let each group of experts focus on their area of expertise, while the complex network of a digital ecosystem binds them together. This is the value of digital twin technology: it interacts, making connections in places we didn’t think connection could exist.

If this federation of twin technology is important in attributing value to your data, describing what’s inside that network of twins can have an even greater effect by unlocking a digital supply chain of limitless potential. When you have access to both the data and the details of the data, you unlock the ability to make your own decisions on how to use it.

You shouldn’t have to jump through hoops to get the data you need, and neither should it end up in that ‘data dump’ of expensive unused junk. It should be accessible, tangible and put to work in a way that transforms the efficiency of your organisation.

Fabrizio Cannizzo, Iotics
