About This Interview Series
HI-driven AI explores the partnership between humans and artificial intelligence through conversations with people who use AI in their daily work. Our mission is to understand how AI works as a partner and collaborator, not just another tool.
These interviews are conducted in the respondent's preferred language and formatted with AI assistance for clarity and translation. Each respondent reviews and approves the final version before publication, ensuring their perspectives are accurately represented.
Zsolt Szalai
Data Engineer
Zsolt is a data engineer with roughly seven years of experience across enterprise and startup environments. After studying business informatics, he started at IBM working on virtualization platforms, moved through insurance fraud detection and telecom analytics, and now works at SEON — a Hungarian-founded fintech startup specializing in financial fraud prevention. His daily work revolves around Python, SQL, Apache Spark, and cloud data platforms like Databricks and Snowflake.
Give us the quick version of your career path.
I studied business informatics — fairly database-heavy, not deep programming, but touching everything at a surface level. I actually taught myself to code through World of Warcraft, writing Lua scripts. Later I picked up Java OOP at university, and my thesis was about designing an ERP database backend for a small company doing industrial alpine (rope-access) work.
I started at IBM, spent about four years building internal BI platforms and multi-cloud provisioning systems. After they sold our department, I went to an insurance company where I prepared claims data for actuaries doing fraud detection. Then Deutsche Telekom, working with IoT telemetry and supply chain data — behavioral predictions, like estimating the likelihood that someone who sends an SMS will actually pay their bill. Now I'm at SEON, refactoring pipelines that score financial transactions for fraud risk.
How did AI first enter your work?
Initially I was very surprised by how much these models could do. But for a long time, I treated it as just a better Google. At the large companies, there were always restrictions — internal guidelines, and we had a wrapper running a local LLM on the company's on-prem environment. It wasn't the best model, to put it mildly.
But it was more than just a chatbot — it had a context-aware component connected to the internal intranet. I worked on a project where we loaded data into a vector database that the chatbot could query. We made the intranet search smarter, first for internal use, and later another team adapted the model for customer communication too.
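The core of that smarter intranet search is retrieval: embed the documents once, embed the incoming question, and hand the nearest documents to the chatbot as context. Here's a minimal, self-contained sketch of that retrieval step — the toy bag-of-words "embedding," the document names, and the pure-Python index all stand in for the real embedding model and vector database:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words term counts. A real pipeline would call
    # an embedding model and store dense vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Indexing step: embed the intranet documents once and keep the vectors.
docs = {
    "vpn-setup": "how to configure the corporate vpn client",
    "expense-policy": "submitting travel expenses and reimbursement rules",
    "oncall-guide": "escalation steps for the on-call rotation",
}
index = {doc_id: embed(text) for doc_id, text in docs.items()}

def retrieve(query: str, k: int = 1) -> list[str]:
    # Query step: embed the question and return the top-k nearest documents;
    # the chatbot then answers with the retrieved text in its context window.
    q = embed(query)
    ranked = sorted(index, key=lambda d: cosine(q, index[d]), reverse=True)
    return ranked[:k]

print(retrieve("how do i set up the vpn"))  # ['vpn-setup']
```

The same two steps — index once, retrieve per query — are what made it straightforward for another team to later repoint the model at customer communication.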
What were the milestones from there?
The next big step was GitHub Copilot in the IDE. That was a significant leap — it could understand the repository, read the documentation, and know what the codebase was about. It was like not having to explain everything from scratch every single time.
Then I moved pretty quickly to agentic coding. Fuszti actually introduced me to Claude Code last summer. I'd tried OpenAI's Codex before that, but it was much slower at the time, and I prefer working iteratively rather than waiting for one big answer.
After that, Perplexity became important for online research. I started organizing my searches into spaces by topic, so it knew what context I was working in. It also gave me the idea to build a personal landing page — an online CV that connects to my open-source projects and social media. That idea actually came from AI, and it opened up a lot of networking opportunities I wouldn't have had otherwise.
And then Obsidian became a big part of my workflow. I've always had the habit of documenting everything I learn digitally. With Obsidian's knowledge graphs and AI integration, I can connect dots I couldn't before — the AI helps with creativity and finding relationships across my notes. I use the same approach at work now, and it's helped me understand the ecosystems I'm working in much better.
You mentioned there are different "camps" among developers. What do you mean?
I see roughly three groups when I talk to colleagues. First, people who reject AI entirely or just use it as a glorified Google — maybe occasionally for copy-paste requests. Second, people who actively use autocomplete features in the IDE. That was the big jump for me initially — you're still doing the development yourself, you know exactly what's happening, but you're getting intelligent suggestions and you're more productive. Third, people who mostly just prompt and aren't writing code manually anymore.
What was your best experience with AI?
Non-professionally: there's a YouTuber who creates entire AI-generated music videos. The script, the video, the music — everything AI-made. He's a 3D artist who uses models to generate video from prompts, even overlaying them onto 3D model skeletons. What used to require an entire studio, one person can now do on a single computer. That was genuinely magical to me.
Professionally, it was discovering Claude Code. Though I quickly fell into the trap of thinking "great, it can do everything." It can do everything — the question is how. There's a learning curve, and it's evolving technology, but I can absolutely see this approach completely transforming what software development looks like.
And the worst?
I used to call it "wandering into the dark forest." When you give the model context and you're not entirely sure about your own question, it will sometimes confidently assert something completely wrong — essentially gaslighting you into a bad direction.
One specific case: I was researching data across multiple AWS accounts, running CLI commands to figure out storage volumes across buckets. The model completely mixed things up — pulling in information from an older conversation that had nothing to do with my current question. I couldn't figure out why.
Turns out, when I'd initialized Claude in the repo with claude init, it created a context file about the codebase, and stale information was written there. That file was in .gitignore, so I never looked at it. I only found the answer through a random Reddit thread where someone had the same problem.
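The failure mode is easy to reproduce: a tool-generated context file that `.gitignore` hides, so stale facts in it never surface in diffs or reviews. The sketch below simulates it — the file name `CLAUDE.md` follows Claude Code's convention, but the simplified pattern matching here is an assumption, not git's (or Claude Code's) actual behavior:

```python
import fnmatch
import tempfile
from pathlib import Path

def gitignored(path: str, ignore_file: Path) -> bool:
    # Rough check: does any simple .gitignore pattern match this path?
    # Real git matching also handles negation, directories, anchoring, etc.
    patterns = [
        line.strip()
        for line in ignore_file.read_text().splitlines()
        if line.strip() and not line.startswith("#")
    ]
    return any(fnmatch.fnmatch(path, p) for p in patterns)

# Recreate the setup in a throwaway directory.
repo = Path(tempfile.mkdtemp())
(repo / ".gitignore").write_text("CLAUDE.md\n")
(repo / "CLAUDE.md").write_text("Buckets live in the old AWS account.\n")  # stale claim

if gitignored("CLAUDE.md", repo / ".gitignore"):
    # The file is invisible to normal git workflows, so it has to be
    # reopened and pruned by hand when its facts go stale.
    print("context file is gitignored -- audit it manually")
```

The practical takeaway is just that: generated context files that git ignores need a periodic manual re-read, because nothing in the usual review flow will catch outdated statements in them.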
That's when I realized: if other people know the fix, it means I'm just using it wrong. I need to read the documentation and understand the mechanics. Even though you're "just talking to it in English," there's a real learning curve.
How does AI affect teamwork?
Honestly, not seamlessly. People don't enjoy reviewing AI-generated code. It's cheap to generate but hard to understand, and it's frustrating to dig through and find bugs that should have been caught during development. If someone doesn't follow best practices when using AI, it creates a lot of extra work for the team.
There's also a pride factor. People who invested years learning to code sometimes refuse to use AI on principle — and if they won't use it, they don't want others to either. There's real resistance to the technology.
But I have no doubt that this is what the future looks like.
How do you see the future with AI — three, five, ten years out?
I'd like to see regulation. The direction things are heading doesn't feel right — I feel like we're going full speed toward a cliff and we don't know when we'll reach the edge. If certain things don't change, there could be serious societal consequences: disrupted systems, massive unemployment that nobody's managing. We're already seeing that increased productivity doesn't necessarily mean proportionally increased demand for workers.
But the positive side is real too. AI has opened doors I never imagined. I've started thinking about my own open-source projects using technologies I couldn't have contributed to before — if I need to write something in Java and I'm not great at Java, AI can bridge that gap. I see people now able to realize ideas they always had but lacked the tools for.
I've personally stepped into the self-hosted world, which I'd been afraid of before. I'm thinking about building a family archive — collecting memories, photos, maybe using AI to edit together family videos. My brother built his own rig for local generation and has already done projects like this. Our creativity is less constrained than it used to be, and I find that genuinely exciting.
What would you say to students and junior developers?
Do what genuinely interests you. If you got into a field only because it pays well but you don't actually care about it, it might not be worth the investment. Look at what you're drawn to, where your talents lie — because more and more tools will become available to help you showcase your creativity and skills.
Many of my classmates dropped out along the way. I've had colleagues switch to entirely different fields because it wasn't what they cared about. So if someone is afraid they won't find a job: don't think in terms of jobs. Think about realizing your own plan. The young generation will adapt — that's what they're good at.