Replacing the word “source” with the word “product” is not enough to change the reality of your data.

I recently shared my perspective about the Data Mess vs. Data Mesh.
Shortly afterward I was chatting with my friend Luca, asking for his feedback, and his main comment was along the lines of “it’s all good, but it’s very hard to find an effective mechanism to reward and incentivize the change”.
In this post I’m sharing my view on this challenge.

Citing Zhamak Dehghani’s original work, I highlighted that “Data as a product” is one of the pillars of the Data Mesh.
To reap the benefits promised by the data mesh, organizations feel urged to morph their data sets into data products.

Unfortunately, there are many definitions of “product”, and they are not equally useful for transforming the organization and creating a real, useful data mesh.

One definition of product is: “a thing that is the result of an action or process”.
This is the easiest definition to apply, and also the most dangerous.
It makes it possible to quickly and automatically label every existing data source as a “data product” without changing anything in the existing processes.
It all but guarantees that the data mess will remain in place for years to come, with data remaining a by-product of the business processes rather than a “real” product.
Just like today, but with the trendy label.

A definition of product much more useful for incentivizing organizational change is the following: “an article or substance that is manufactured or refined for sale”.
The key part is “for sale”, because it implies the existence of a historically strong driver of product improvement: money changing hands and increasing the producer’s wealth in the process.

Many organizations have created, or are in the process of creating, a “data marketplace” to facilitate data product exchanges.
Unfortunately, the lack of general agreement about what a “data marketplace” should be can lead to the creation of something slightly, but significantly, different: a data catalog.
In recent implementations, hopefully, the catalog is paired with a set of tools to self-service data access and/or transfer (I’ll come to the subject of transfer vs. access in a future post).
This kind of data marketplace works nicely with the first (and lesser) definition of a data product, but it does not fully support the value creation expected from the adoption of the second definition.

The data marketplace that supports continuously improving data products is something slightly, but significantly different.
On top of the functional characteristics of the basic marketplace I listed earlier, it enables a low-friction exchange of (data) goods for a certain amount of an agreed currency.

The technical means to easily move money around are many, well known, and broadly available.
The tricky part is, once again, an organizational and people problem: defining the “certain amount” of currency that should change hands.

Who sets the price of the data products and how?

Being fundamentally Austrian in my vision of economics, my first answer was: the free market!
Unfortunately, this is a bad approach for the data market, because the producers of raw data (I’ll tentatively blog about raw data/data by-products in the future) are in most cases natural monopolies: having only monopolists setting the prices would immediately lead to a complete failure of the marketplace.
Centrally regulated prices appear to be the only option.

Historically, centralized economies have trailed free economies in terms of wealth generation, and this is concerning: how can we prevent the same from happening in our data marketplace?
The special nature of digital goods compared to physical goods (produce once, sell many times) helps us a bit in this matter.
Setting a fixed price centrally will promote efficiency in data production at the assigned quality point (by making production more efficient, the producer increases their gain), but it shouldn’t completely destroy the incentives to improve the product, because the same price is paid by each consumer rather than by the corporate center through a budget directly allocated to the producer.
In this scenario the producer has an incentive to win more consumers for the data product by improving it (there is a quantifiable return on additional investments made in the data product) and by sharing ideas about new ways to create value from data.
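To make that return tangible, here is a minimal sketch with entirely made-up numbers (the subscription price, the improvement cost, and the consumer counts are all hypothetical assumptions of mine, not figures from any real marketplace):

```python
# All figures are hypothetical, for illustration only.
price_per_consumer = 10_000  # fixed, centrally set annual subscription price
improvement_cost = 15_000    # one-off investment to improve the data product
extra_consumers = 3          # additional consumers attracted by the improvement

added_revenue = price_per_consumer * extra_consumers
roi = (added_revenue - improvement_cost) / improvement_cost
print(f"Return on the improvement: {roi:.0%}")  # -> Return on the improvement: 100%
```

With a fixed price per consumer, the producer can run exactly this calculation before committing any budget, and that is the whole point of the incentive.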

Deciding the prices is, once again, a non-technical problem.
I would promote value-based prices over cost-based prices any day, because inefficient production processes lead to a higher price for a given amount of value generated (I am being Austrian again here), but there is a data-product-specific constraint: most organizations have a hard time materializing the expected return on their data investment.
I dare say that many can’t even quantify the return they obtain at all, and this makes proper value-based central pricing of data assets close to impossible.
The only option left is to set the initial price of data products based on a linear combination of incurred and recurring production costs.
The unit (subscription) price of the data product for each consumer is then calculated by dividing the current (computed) cost by the number of current consumers, and consumer budgets are aligned accordingly.
Organizations can (and should) apply a periodic price deflation factor to the initial prices to drive efficiency up and prevent omission bias and complacency on the producer side.
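As a purely illustrative sketch of this cost-based rule (the function name, the equal weighting of the two cost components, and the 5% deflation default are my assumptions, not a prescribed formula):

```python
def unit_price(incurred_cost: float, recurring_cost: float,
               num_consumers: int, periods_elapsed: int = 0,
               deflation_rate: float = 0.05) -> float:
    """Cost-based subscription price per consumer.

    A linear combination of one-off (incurred) and recurring production
    costs is split equally across current consumers, then deflated each
    period to keep pushing the producer toward efficiency.
    """
    total_cost = incurred_cost + recurring_cost  # linear combination, both weights = 1
    base_price = total_cost / num_consumers      # each consumer pays an equal share
    return base_price * (1 - deflation_rate) ** periods_elapsed

# Example: 100k one-off + 20k recurring costs, 8 consumers, two periods in
print(unit_price(100_000, 20_000, 8, periods_elapsed=2))  # -> 13537.5
```

As consumers join or leave, the same division is simply recomputed, so the price follows the consumer base rather than a negotiation.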

Will the Data Mesh save organizations from the Data Mess?

The “Data Mess” is almost as old as the installation of a second database within a single organization. Or maybe even older and paper-based.

Many companies, all over the world, have tried to solve the data mess problem for decades with varying degrees of success.
Which is a nice way to say: in many cases with limited or no success.
Despite the many promises of technical silver bullets made over the years, like MPP databases earlier or Hadoop-based data lakes later, the task of integrating data is still far from trivial.

About six months ago I had a chat with a friend and former Teradata colleague, who told me he had to discuss the data mesh with the CIO of a large Italian company who was extremely excited about the subject.
Unsurprisingly, given the ripples that this post of Zhamak Dehghani made in the market, in the preceding weeks I had several conversations about the data mesh with my teammates, and we are still debating the subject.

I’m writing today because I’m concerned that, in almost all the conversations I have, the data mesh is perceived as the (new & improved) silver bullet that will finally kill the data mess monster for good.
I think this might be the case. But only as long as the data mesh is not reduced to the technology/architecture part of the solution.

The “data mess” is generated by a combination of shortcomings in 3 key areas:
1) people
2) processes
3) technologies

The data mesh discussions I’ve had so far focus mostly, if not only, on the technical solutions, with an unexpressed assumption (or hope) that removing the technical obstacles will be enough to magically fix the people and processes shortcomings as well.
I guess this might be because a lot of people in IT are more comfortable dealing with technologies than with processes and other people.
Or maybe I am just perceived as too much of a geek for my counterparts to discuss the non-technical aspects of the data mesh with me.

Frankly I hope it’s the latter scenario and the people and processes pillars are being addressed in other streams I’m not part of.
I say this because what my experience in the software quality space taught me is that technologies can facilitate processes, but don’t change them (with a few notable exceptions, as when packaged ERPs replaced custom solutions ahead of Y2K and many organizations in a hurry simply had to adapt to the processes supported by the ERP they picked).
I also learned that people with enough motivation can ignore, or even hijack, the best processes.

Both the first and the second post of Zhamak Dehghani touch on the process aspects multiple times.
Are processes prominently missing only from the conversations I am having and hearing about, or is this a common pattern?

I tend to think that the people pillar (a.k.a. the incentives to embrace a new way of doing things) is still not sorted out in many organizations, or is maybe even perceived as too hard to approach, and for this reason it is simply removed from the debate.

I believe that solving the people part of the problem is strongly tied to a real transformation of data into a product, rather than a dump of JSON by-products of the organization’s processes that the potential consumer has to figure out how to use.
What incentive is given to the marketing team (or the e-commerce one, or customer service, or the production lines…) to invest part of their limited budget to produce high-quality, easy-to-use data, make it available in the mesh and, maybe, also increase the data’s value over time?
No ROI, no party.

In the end my answer to the question I asked in the title is:
“Building a data mesh infrastructure without creating effective processes (and the right incentives for individuals and organizations to embrace the new processes) is not going to remove the data mess from the map.”