Unifying a Data Stack and Leveraging Self-Serve Analytics with Atlan

The Active Metadata Pioneers series features Atlan customers who have completed a thorough evaluation of the Active Metadata Management market. Paying forward what you’ve learned to the next data leader is the true spirit of the Atlan community! So they’re here to share their hard-earned perspective on an evolving market, what makes up their modern data stack, innovative use cases for metadata, and more.

In this installment of the series, we meet Daniel Ferguson, Data and Analytics Director at PHMG, an audio branding company that helps over 36,000 clients across 56 countries sound their best. Daniel shares how PHMG transformed their data stack from fragmented to unified, and how Atlan has been a crucial piece of that journey, tracking lineage, managing reports, and easing team onboarding.

This interview has been edited for brevity and clarity.


Could you tell us a bit about yourself, your background, and what drew you to Data & Analytics?

I used to be a DJ and then worked as a sound engineer building recording studios. During my time as a sound engineer, I found myself interested in the technical and analytical side of things. After starting a family, I wanted a change, so I sold my recording studio. My mother, who managed a council office, offered me a job, and I started out in the call center unit, handling calls. It quickly became apparent that I could do more than just calls, so I moved to the database team.

I started studying for a degree in Economics and Mathematical Science at The Open University while working at the council. Using the skills learnt on my degree, I started to build logistic regression models to target contacts in the call center I had previously worked in. I proposed that, with just one person, I could achieve the same results as the entire team. My work generated 300% more results than the team’s combined efforts by optimizing data collection, addressing missing information, and cherry-picking the best contacts. After that, I was hooked on the power of data and analytics.

I then built a company providing data services to other Local Authorities. Near the end of my degree, a consultancy in Scotland, Aquila Insights, offered me a position. They worked with clients like Sony, Office Depot, and RBS, which gave me early exposure to the data profession. From there, I advanced in the field and eventually joined PHMG. My journey into data was somewhat accidental, but it brought me to where I am today.

Would you mind describing PHMG?

We specialize in audio branding. Think of the audio logos of Netflix or Disney Plus: through sound alone, these brands are instantly recognizable as industry leaders in entertainment and streaming, even when their visual logos are not in sight.

We also go beyond traditional audio branding by developing custom music tailored to each organization. We’ll take Atlan as an example: What is Atlan about? What do you represent? What kind of rhythm do you want to bring?

This connection between music and identity is what attracted me to the company. We have been highly successful, operating in 56 countries with 36,000 clients.

Could you describe your data stack, and how it came together?

When I got here, we were using SQL Server with Excel spreadsheets. There were limited to no interactive reports, and every data request had to be raised to the data team.

There was a need to modernize the information flowing into the company and implement the right technology to achieve this efficiently and reliably. I focused on finding technology solutions that would streamline operations and reduce the need for additional engineers. 

I was really careful with technology selection, avoiding solutions adopted for their own sake, and not building from scratch. While Azure Fabric offers a comprehensive solution, for example, it’s still new, which comes with additional risks, though it’s something I am keeping my eye on. It’s crucial to choose the best tools for the job and ensure they work well together. Investing in a seamless process with these tools allows you to start strong and demonstrate value quickly, with room to evolve as you scale.

In my board proposal, I highlighted two essential tools: Atlan and ThoughtSpot. I explained that while we could manage without them, they would make a significant difference. I wanted governance to become embedded in our processes, so that instead of assigning data stewards without clear direction, we provided actionable reports and understandable data. With properly organized data, governance becomes straightforward, and Atlan streamlines this process.

I selected Snowflake for its robustness and reasonable pricing, and Fivetran for its reliable pipeline performance, which effectively handles our data integration needs. 

I implemented PowerBI for executive reports, and ThoughtSpot for our self-serve data needs. I am a big fan of ThoughtSpot, because it allows users to adjust their own reports, reducing the need for constant modifications from the data team. 

For orchestration, I use Airflow to manage pipelines, and dbt with GitLab for our code repository and CI/CD processes.

Why was Atlan a good fit? Did anything stand out during your evaluation process?

In my previous organization, I tried using open-source DataHub, but its maintenance and development required significant investment. Atlan stood out because it’s plug-and-play, automatically running miners that reveal previously unknown insights. It identifies and explains scripts we weren’t aware of, saving time and reducing the technical debt of having to manually review extensive code.

Atlan lets us track and monitor what we’ve built, including data lineage and assets. It’s invaluable for reviewing reports without needing to ask for code details—just navigate through Atlan to see the report’s history. New team members can also understand report construction through Atlan. 

For me, Atlan was a key piece of the puzzle.

I researched Collibra, Alation, and Atlan extensively, and Atlan was the clear choice. It felt designed for medium-sized enterprises and required minimal engineering effort. Given our situation, it was crucial to integrate it from the start, rather than as an afterthought. This allowed us to learn and develop Atlan alongside our existing systems, rather than trying to force it into our pre-built setup.

I always make it a point to meet with leadership teams at events to gauge their attitude and determination, and I don’t know of any other players that are doing it as well as Atlan. I was genuinely impressed by Atlan’s leadership team — not only their passion for the product but also their commitment to addressing my challenges and improving our situation. 

How are you planning to harness Atlan to enhance your data stack? What exciting use cases and goals do you have in mind?

We’ve invested in a data vault model for our data warehouse, which feeds into an operational data store, what I call the data mart. All our reports and metrics are built from this data mart. In Atlan, we define how to construct everything, so once a metric is defined, we can write the SQL to extract it from the mart. 
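The workflow Daniel describes, defining a metric once in the catalog and then extracting it from the mart with SQL, might be sketched as follows. This is a minimal illustration using SQLite; the table and metric names are hypothetical, not PHMG's actual schema:

```python
import sqlite3

# Hypothetical mart table; columns and values are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE mart_subscriptions (
        client_id   INTEGER,
        country     TEXT,
        monthly_fee REAL,
        active      INTEGER  -- 1 = active subscription
    )
""")
conn.executemany(
    "INSERT INTO mart_subscriptions VALUES (?, ?, ?, ?)",
    [(1, "UK", 99.0, 1), (2, "US", 149.0, 1), (3, "UK", 99.0, 0)],
)

# Once a metric (here, a made-up "monthly recurring revenue") is defined
# in the catalog, its implementation is a single query against the mart,
# reused by every report instead of being rebuilt ad hoc each time.
MRR_SQL = """
    SELECT COALESCE(SUM(monthly_fee), 0)
    FROM mart_subscriptions
    WHERE active = 1
"""
mrr = conn.execute(MRR_SQL).fetchone()[0]
print(mrr)  # 248.0
```

The point is the separation of concerns: the definition and rationale live in the catalog, while the mart holds the curated tables the SQL runs against.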

We then create curated tables for client services and sales organizations, enabling them to self-serve via ThoughtSpot. For detailed insights into the construction and rationale of these metrics, we store that information in Atlan, which becomes our catalog.

As new people come on board, I ensure that there’s no need for a handover. By default, we document our processes as we go and build systems that leave clear breadcrumbs for others to follow. Atlan plays a crucial role in this. We direct new team members to Atlan to help them understand how everything is built and what it’s built from. Atlan doesn’t just spill out the code; it highlights the key objects, their usage, and their importance.

Another major project involves creating a comprehensive glossary within Atlan, serving as our single source of truth. This environment allows business users to access all corporate metrics and view reports from Salesforce, PowerBI, and ThoughtSpot, all linked around key metrics.

We are also currently refining our data lineage and model descriptions. As we create new data models, we update descriptions incrementally rather than in bulk. This ongoing effort helps ensure that our data models are well-documented and easily understandable.

Do you have any advice to share with your peers who are starting out in managing and organizing their data assets effectively?

Businesses always talk about being data driven, but they don’t talk about the assets that actually drive the data. We want information to flow in our organization, but information cannot flow if it’s not organized consistently. And for me, tools like Atlan are making it significantly easier for us to organize and communicate what data matters. 

Don’t get me wrong, Atlan isn’t a silver bullet. It won’t fix poor organization within your data warehouse. However, it does provide a centralized place to define and assess your processes, helping you identify which ones are effective and which ones need improvement. 

Atlan helped us determine where to start by identifying our most crucial tables and focusing on what was important. For instance, we found one table critical for everything we built, allowing us to prioritize it. We then assessed our reports and discovered that some we thought were important were relevant only to specific reports, not the broader context. 

As we get sensitive data, we can also immediately flag it. If we get audited, we can simply pull up Atlan and say, “Hey, this is what we have. This is how we manage our data. This is what our data assets are.” So, for those committed to being data-driven, they need to look after their data assets and understand what their data assets are.

Author

Director of Product Marketing - Customer Advocacy
