No Code Tag Management is not Possible: Utilities are the Future of Digital Analytics Implementation
Written by Matt Bentley
Published on August 9, 2023
Category: Data

No Code Tag Management?

Several years ago, there was a push for a 'No Code' tag management solution, in which all data points are collected only from a prescribed data layer, with no modifications. Some in the industry saw this as an idealistic attempt to standardise data collection methodologies, bringing technology, business and digital analytics teams into harmonious alignment.

But how many of us have never collected data by adding in JavaScript, pulling values from the HTML, triggering an on-click rule or reading the value of a cookie? Unless you are only collecting basic data points, the No Code tag management concept seems foreign, or like a solution that works only in a clean room, not in the real world. And if you have tried to go down this path, you have probably realised that it raises more issues than it solves.
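To make that concrete, here is a minimal sketch of the kind of ad-hoc collection most of us have written at some point; the cookie name, CSS selector and data layer shape are illustrative assumptions rather than any specific client's setup:

```javascript
// Illustrative sketch only: the kind of ad-hoc collection that sits outside
// a pure No Code data layer. Cookie name and selector are hypothetical.
function getCookieValue(name) {
  var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
  return match ? decodeURIComponent(match[1]) : null;
}

// Pull a value from the HTML and capture it on click
document.querySelectorAll('.add-to-basket').forEach(function (button) {
  button.addEventListener('click', function () {
    window.dataLayer = window.dataLayer || [];
    window.dataLayer.push({
      event: 'addToBasket',
      productName: button.getAttribute('data-product-name'),
      loyaltyTier: getCookieValue('loyalty_tier')
    });
  });
});
```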

The middle ground

There is a middle ground. At Loop Horizon, we believe data collection should be a division of labour in which the technology and digital analytics teams work to their respective strengths for the common goal of delivering repeatable, accurate data collection.

We also recognise that differing skillsets, resources and core responsibilities of technology and digital analytics teams often do not fully align, meaning a No Code tag management approach is not feasible without requiring the technology team to work outside their comfort zone (which is the thing that causes issues).

And this all starts with creating a translation layer - a Utility.

Where No Code Fails

Well, why was No Code a thing in the first place? 99% of the time, it is down to resource. The analytics team is almost always smaller than the technology team. Given that discrepancy in resourcing, it makes sense to try to push the weight of the work onto the team with the most resource, right?

But when your request for new data collection is picked up, how often do you get the same developer back-to-back? Where does data collection sit in the large list of other priorities a developer might have? What if the developer assigned is simply the one that drew the short straw or is new to the team with the least experience?

As you can see, the seeming benefit of a No Code approach - a large pot of available developer resource - is precisely the issue.

Several times, we have had to explain what a data layer even is. Then the data layer request has to compete with the developer's other priorities. That developer then needs to learn from scratch not only how the code works, but to do so quickly so they can deliver their other priorities and move on to another project (which they may well find more interesting than the pain of data collection).

This is why bespoke solutions that focus solely on the needs of the implementation specialist or analyst fail.  You can have a customised and complex solution, but the approach needs to be tailored to the needs of technology so that the data layer can be implemented easily and consistently across the estate.

Repeatable, scalable results!

To ensure the consistency you require, we must make it easier for developers to deliver repeatable results. This is done by automating data collection as much as possible - baking it into the platform itself and taking its form and function from the platform.

This does not necessarily lend itself to a No Code approach.

To mitigate this, the implementation team can create a suite of utility functions. These utilities take something complicated (e.g. a componentised, nested and platform-driven data layer) and make it simpler to use. Because the utilities are fixed functions that rely on a standardised (if complicated) data layer, they may be hard to create initially, but once completed they need minimal maintenance and ensure the delivery of repeatable data results. It means doing the hard work upfront to make life easier in the future.
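As a sketch of what we mean by a utility, a single fixed function can hide the nesting of the data layer from the tag management solution. The object name and paths below are illustrative assumptions, not a prescribed schema:

```javascript
// Minimal utility sketch: hide the nesting of a componentised data layer
// behind a fixed function. The digitalData object and paths are illustrative only.
function getDataLayerValue(path, fallback) {
  var node = window.digitalData || {};
  var keys = path.split('.');
  for (var i = 0; i < keys.length; i++) {
    if (node == null || node[keys[i]] === undefined) {
      return fallback;
    }
    node = node[keys[i]];
  }
  return node;
}

// The tag manager only ever calls the utility; it never walks the object itself
var pageName = getDataLayerValue('page.pageInfo.pageName', 'unknown');
```

Because the tag manager only ever calls the utility, a change to the data layer structure means updating one function rather than dozens of tag configurations.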

A working example

For example, we created a function that takes information from a data layer product array, which could have one, fifty or a hundred objects pushed onto it. The utility flattens the array and de-duplicates the values. This turns a complicated data layer into a simple, flat structure that the tag management solution can read consistently, without worrying where within the array the data sits.
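A sketch of what such a flattening utility might look like is below; the property names and the digitalData object are assumptions for illustration, not the actual schema we built against:

```javascript
// Illustrative sketch of a flattening utility. The property names
// (productID, name, category) and the digitalData object are assumptions.
function flattenProducts(products) {
  var fields = ['productID', 'name', 'category'];
  var flattened = {};

  fields.forEach(function (field) {
    // Collect the field from every product object, drop empty values,
    // then de-duplicate while preserving order
    var values = (products || [])
      .map(function (product) { return product && product[field]; })
      .filter(function (value) { return value !== undefined && value !== null && value !== ''; });
    flattened[field] = Array.from(new Set(values)).join('|');
  });

  return flattened;
}

// One object or a hundred, the tag manager reads the same flat shape, e.g.
// { productID: 'A1|B2', name: 'Socks|Hat', category: 'Clothing' }
var flatProducts = flattenProducts(window.digitalData && window.digitalData.product);
```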

By doing the hard work upfront, we ensure we get consistent results from this flattening function each and every time. Best of all, once this code was created, it has hardly ever needed updating. In the last six months, we have had to update it once, due to a data layer schema change, and that update was less than a day's work.

And the great news? Because the underpinning data layer is populated in a way that works with how the platform functions, it's easy (in this case, actually automated) for the development team to build.

The benefits

Recently, one of our clients saw the benefit of this approach. They delivered a new project in which an existing component was reused in a different part of the journey, and they forgot to involve the implementation team in the design and build process.

And instead of having no data at all and having to start the data collection process from scratch post-live, most of their business questions were already answered thanks to the data layer automation. Because the data layer is baked into the platform at the component level rather than the journey level, the development team picked up the data layer code as they built the journey from existing components, with no additional thought, effort or - importantly - cost.
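As an illustration of what component-level population can look like (the component, object and field names here are hypothetical, not the client's implementation):

```javascript
// Hypothetical illustration: the data layer push lives inside the component,
// so reusing the component anywhere in a journey brings its tracking with it.
function renderProductCard(container, product) {
  container.innerHTML = '<div class="product-card">' + product.name + '</div>';

  // Instrumentation ships with the component, not with the page or journey
  window.digitalData = window.digitalData || {};
  window.digitalData.component = window.digitalData.component || [];
  window.digitalData.component.push({
    componentInfo: { componentID: 'product-card' },
    attributes: { productID: product.id, productName: product.name }
  });
}
```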

And of course, the utilities handled the integration with the tag manager and various vendor tags, with no updates needed.

In summary

This is why we believe the reliability of data collection should not rest solely with the development team (No Code) or with the implementation / analytics team. It should be a division of labour in which each team is responsible for the parts of data collection that play to their strengths.

With this divide-and-conquer approach and clear roles and responsibilities, building a new feature or debugging an issue often does not take a week, a day or even an hour. It is as simple as a quick meeting, where it is already clear to everyone on the call who owns what, and therefore who needs to do what to build the feature or fix the issue.

How is your data collection?

Is your data collection as clear cut? Are you getting consistent, reliable data collection that you can trust? Could you easily debug where an issue is and what needs to be done to fix it? Do you have a division of labour in data collection?

If your business would like to see the same harmony in data collection, do get in touch with us at Loop Horizon as we'd be happy to discuss how we can help bring balance to your data collection strategy.