
Meet The Team: A Q&A with Huddle’s Data Engineer, Marin Stojsavljević
Marin’s journey into data engineering has been all about problem-solving and innovation. Since joining Huddle in 2022, he’s played a key role in optimizing data infrastructure, ensuring fast and reliable data processing in the fast-paced world of sports betting. In this Q&A, Marin shares his path into data engineering, insights on AI in betting, and the exciting projects shaping Huddle’s data landscape.
Can you tell us about your journey into data engineering? What drew you to this field, and especially to the field of sports betting?
I've always been fascinated by data and its growing impact on decision-making across industries. In recent years, data has become one of the most valuable assets, shaping everything from business strategies to technological advancements.
I joined Huddle at the beginning of 2022 as a computer science student, working in a small research team led by Professor Mario Brcic. Our focus was on optimizing a complex parameter-finding algorithm, more of a computational challenge than a traditional data engineering project. Once we successfully completed that project, I started collaborating more with the data engineering team. One of my first major projects was developing a Delta Lake prototype using Apache Spark. A few months later, I fully transitioned into data engineering, and I’ve been working full-time in the field ever since.
Sports betting in particular is a very challenging industry that relies heavily on data. Data engineering plays a crucial role here, enabling fast data processing and maintaining data quality, and the fast-paced nature of the industry means there are endless opportunities to innovate and apply the latest data engineering techniques.
Can you share details about a recent project you worked on at Huddle that you found particularly interesting or challenging?
Currently, we are working on improving our ETL pipelines built with Apache Spark, which are crucial for persisting business-critical data into our Snowflake data warehouse and AWS S3. Our main goal is to create a fault-tolerant, high-performance system capable of processing thousands of records per second. Along the way, we've tackled several challenges, including optimizing resource consumption for our Spark applications, setting up robust data backup strategies, and enhancing our alerting and local debugging processes. These improvements ensure greater reliability, efficiency, and scalability in our data infrastructure.
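Two of the properties Marin mentions, fault tolerance and avoiding duplicate writes when a batch is retried, can be sketched outside of Spark. The snippet below is a minimal, hypothetical illustration in plain Python (not Huddle's actual pipeline code): it retries transient sink failures with exponential backoff and uses batch IDs to make writes idempotent, so a replayed batch is skipped rather than persisted twice.

```python
import time


def write_with_retry(write_fn, batch, batch_id, seen_ids, max_retries=3):
    """Idempotent, retrying write for one micro-batch.

    Skips batches whose ID was already persisted (no duplicates on replay)
    and retries transient IOErrors with exponential backoff.
    """
    if batch_id in seen_ids:
        return "skipped"  # already persisted on a previous attempt
    for attempt in range(max_retries):
        try:
            write_fn(batch)
            seen_ids.add(batch_id)  # record success so replays become no-ops
            return "written"
        except IOError:
            time.sleep(0.01 * (2 ** attempt))  # back off before retrying
    raise RuntimeError(f"batch {batch_id} failed after {max_retries} attempts")


if __name__ == "__main__":
    calls = []

    def flaky_sink(batch):
        """Stand-in for a sink that fails once, then succeeds."""
        calls.append(batch)
        if len(calls) == 1:
            raise IOError("transient network error")

    seen = set()
    print(write_with_retry(flaky_sink, [1, 2, 3], "batch-001", seen))  # written
    print(write_with_retry(flaky_sink, [1, 2, 3], "batch-001", seen))  # skipped
```

In a real Spark job the same effect comes from checkpointing and transactional sinks, but the batch-ID bookkeeping above is the core idea that makes retries safe.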

We recently published a whitepaper on AI in sports betting. How do you see AI and machine learning impacting sports betting data in the future?
AI and machine learning have been game-changers in so many industries, and sports betting is no exception. Since betting is all about data (odds, probabilities, player stats), AI is going to play an even bigger role in shaping the industry.
We’re already seeing machine learning models making odds sharper and more dynamic, spotting trends and patterns that even the best human traders might miss. But it’s not just about odds; AI is also becoming key in fraud detection, risk management, and even personalizing betting experiences for users. The way I see it, as AI continues to evolve, it’s only going to make sports betting more efficient, more accurate, and probably even more exciting for bettors.
Are there any initiatives or projects underway within the engineering team that you're particularly excited about?
We have several exciting projects ahead, mostly focused on enhancing our data architecture. One major initiative is implementing a medallion architecture, which will standardize data ingestion and transformation, significantly improving our data management capabilities. Additionally, we plan to develop a data catalog using DataHub to track data lineage, making data flow more transparent and accessible not just for our data engineering team but for other departments as well.
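A medallion architecture standardizes data into layers: bronze (raw ingested records), silver (cleaned and validated), and gold (aggregated, business-ready). Huddle's implementation runs on Spark, but the layering idea can be sketched in plain Python; the field names below (event_id, sport, stake) are hypothetical and only serve to illustrate the pattern.

```python
from collections import defaultdict


def to_silver(bronze_rows):
    """Bronze -> Silver: drop malformed raw records and normalize fields."""
    silver = []
    for row in bronze_rows:
        if row.get("event_id") is None or row.get("stake") is None:
            continue  # skip records that fail basic validation
        silver.append({
            "event_id": row["event_id"],
            "sport": str(row.get("sport", "unknown")).lower(),  # normalize casing
            "stake": float(row["stake"]),                        # enforce type
        })
    return silver


def to_gold(silver_rows):
    """Silver -> Gold: aggregate total stake per sport for reporting."""
    totals = defaultdict(float)
    for row in silver_rows:
        totals[row["sport"]] += row["stake"]
    return dict(totals)


if __name__ == "__main__":
    bronze = [
        {"event_id": 1, "sport": "Football", "stake": "10"},
        {"event_id": None, "stake": 5},            # malformed: dropped in silver
        {"event_id": 2, "sport": "football", "stake": 2.5},
    ]
    gold = to_gold(to_silver(bronze))
    print(gold)  # {'football': 12.5}
```

Each hop is a pure transformation, which is what makes the architecture easy to standardize: every dataset flows through the same bronze-silver-gold stages regardless of source.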
Beyond that, we’re continuously working on improving data processing, storage, and reporting to drive better efficiency and insights across the company. It’s exciting to be part of something that’s going to make a significant impact, and I can’t wait to see it all come together.
How does the data engineering team at Huddle continuously improve its processes and methodologies to stay ahead in a rapidly evolving field?
Our data engineering team at Huddle evolves by staying up to date with the latest technologies and best practices in the field. We focus on continuously improving our data infrastructure to enhance performance, scalability, and reliability, and our highly skilled team works closely together to solve challenges, optimize workflows, and implement innovative solutions that keep us ahead in this rapidly changing industry.