This is the next entry in our series of interviews about how people actually use AI. In the interview below, developer Máté Fellner gives insights about the good, the bad, and the fuzzy reality of collaborating with AI — from turning a 3-day task into 20 minutes to debugging code you didn’t write.

Hi Máté, thanks for joining me. To start, could you please introduce yourself?

I’m 29, and I graduated with a degree in applied mathematics. I work in the field as a programmer, AI engineer, data scientist — whatever you want to call it. I’ve been doing this since about 2019, so for roughly six years now.

What kind of AI work have you done in those six years?

I started out in computer vision and spent most of my time there. A lot of my work was related to deep learning, but I also did some time series analysis. Besides the heavy-duty deep learning, I also worked on what you’d just call machine learning — using simpler tools like regression and random forest methods on smaller, more structured (tabular) data, especially in the medical field.

But honestly, the most important part wasn’t just building the models. I learned the most from managing a project from start to finish. That means doing the less glamorous but critical stuff like checking data quality and figuring out if the results are actually any good. That whole process is a huge part of AI, even if it’s not as cool as building a deep neural network. A lot of my time was spent on that.

So, are you still working as a data scientist now, or has your role shifted?

I still do a little bit of model fitting, like in the medical field, but I usually just call that data science or statistics. I use AI in my current projects, but not always as the person who builds the models. For example, I recently worked on an app to help assess cognitive skills in kids. It records a kid as they move and tracks their hand and finger movements. The app might show a prompt on screen like, “touch your nose” or “make your hand look like this,” and it checks how well they copy the pose. It uses AI, but for us, the AI is a “black box” — we just use it, we don’t change how it works inside.

So you’re basically plugging existing AI models into a bigger system?

Yeah, you could put it that way.

Let’s talk about how you use AI as a tool. As a developer, how often do you use it in your daily work?

More and more lately. I should mention I mostly code in Python for AI stuff and use React for frontend web development. At first, I just used it in a simple way. If I had a chunk of code I was unsure about, I’d copy ten lines from my Python file, paste it into ChatGPT, tell it what I wanted to change, and then copy it back. Even that was super helpful, not just for simple web stuff but for trickier programming problems too — the kind people say require more complex thinking.

As the tools got better, I started using Cursor, which I love for its code completion. Following a friend’s suggestion, I’ve tried Claude Code as well; it’s been fun and really useful. I’m still figuring out which tool is best for which job and learning their limits — what size of code chunk or what level of complexity I can hand over. Of course, all this changes constantly as the AI models get updated and their prices change.

Do you pay for any of these?

Yep, I have the $20/month plan for Cursor. With that, I can choose from different AI models. For example, it lets me use Claude 3.5 Sonnet a lot more than the more expensive 4.0 Opus model within my monthly limit. I also have the basic €20/month plan for Claude Code. For what I’m working on right now, those two are enough.

Which one do you like more? How do they fit into your workflow?

If I have a big, self-contained task — like designing a new feature or creating a whole new page for a website — I give it to Claude Code. I can describe what I need, tell it which buttons go where, what the URL route should be, and it will generate all the code across, say, six different files in about a minute. It’s perfect for that.

But if I’m doing something where I want to stay in control, I use Cursor. I have it set up so I have to approve every single change it suggests, either block-by-block or file-by-file. It also has a neat feature where you can just highlight one line of code — like a function call — and ask it for a quick fix, which is great if you’re feeling lazy and don’t want to look up the exact syntax. It’s also cheaper in terms of “tokens” (how AI usage is measured) than sending the whole file.

Are you happy with this way of working? You said you like to stay in control. Is that because you’ve had bad experiences with AI messing things up, or do you just enjoy coding yourself?

I’ve thought about that. I think there are different ways to use these tools. The riskiest way is to just blindly trust the AI to build your project. I think that’s guaranteed to fail eventually; it’s just a question of when. A better way is to test what the AI does after every single step to make sure it’s actually doing what you want.

A third way is to ask for big changes but review all the code yourself, line by line. That’s safer, but for me, it sucks the fun out of programming. If I get 300 lines of code that I didn’t write, I’m just a code reviewer. And it’s awful when it’s not quite right, because then you have to debug code you don’t understand, which is really hard.

I actually just ran into this with a personal project. I let the AI run a bit too free, and it couldn’t do what I asked. I’ll probably have to restart it by building the foundational parts myself, by hand, and then let the AI build on top of that. It’s more fun for me that way because I feel like I’m actually making something.

What’s this personal project?

It’s a sports analytics project. I love basketball, so I’m collecting data on games. The AI got stuck when I tried to get it to handle the data. I’m pulling info from three different places: the box score (player stats), the play-by-play data, and the betting odds. I actually used ChatGPT to write the code that scrapes the data from the web, and it did a great job with a tool called Selenium, which I don’t know very well (unlike BeautifulSoup, which I’ve used before).
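
As a rough sketch of what that kind of scraper looks like (the URL, CSS selectors, and row format below are hypothetical, not details from his actual project), the Selenium part can be kept thin, with the parsing in a pure function that runs without a browser:

```python
# Hypothetical box-score scraper sketch. The selector "table.box-score tr.player"
# and the "Name PTS REB AST" row format are made up for illustration.
# Selenium is imported lazily inside fetch_box_score so the pure parsing
# logic can run (and be tested) on a machine with no browser installed.

def parse_box_score_rows(rows):
    """Turn raw text rows like 'LeBron James 28 7 9' into dicts."""
    parsed = []
    for row in rows:
        parts = row.rsplit(maxsplit=3)  # player names may contain spaces
        name = parts[0]
        pts, reb, ast = (int(p) for p in parts[1:])
        parsed.append({"player": name, "pts": pts, "reb": reb, "ast": ast})
    return parsed

def fetch_box_score(url):
    """Drive a real browser to load the page, then hand raw rows to the parser."""
    from selenium import webdriver                    # imported lazily on purpose
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    try:
        driver.get(url)
        rows = [el.text for el in
                driver.find_elements(By.CSS_SELECTOR, "table.box-score tr.player")]
    finally:
        driver.quit()
    return parse_box_score_rows(rows)
```

Keeping the browser-driving code separate from the parsing is also what makes a scraper like this debuggable when the site's markup changes.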

The problem came when I asked Claude Code to put all this data into a database. It completely over-engineered it. It tried to build a super-complex database with five different interconnected tables, like something you’d see in a huge company, when I only had three types of data. I had absolutely no need for that. It even tried to use a database system I’m not familiar with. While it could have been a learning opportunity, I just wanted to get this project done quickly with tools I already know.
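
For contrast, a schema that matches what he describes, one table per data source and nothing more, fits in a few lines of stdlib SQLite. The table and column names here are my guesses for illustration, not his real schema:

```python
import sqlite3

# A deliberately minimal schema: one table per data source he mentions
# (box scores, play-by-play, betting odds), linked by a shared game_id.
# All names are illustrative -- the interview doesn't give the real schema.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE box_score (
        game_id   TEXT,
        player    TEXT,
        points    INTEGER,
        rebounds  INTEGER,
        assists   INTEGER
    );
    CREATE TABLE play_by_play (
        game_id     TEXT,
        clock       TEXT,
        description TEXT
    );
    CREATE TABLE odds (
        game_id   TEXT,
        bookmaker TEXT,
        home_odds REAL,
        away_odds REAL
    );
""")
conn.execute("INSERT INTO box_score VALUES ('2024-LAL-BOS', 'L. James', 28, 7, 9)")
conn.execute("INSERT INTO odds VALUES ('2024-LAL-BOS', 'ExampleBook', 1.85, 1.95)")

# Combining the sources is a plain equality join on game_id -- no web of
# interconnected tables needed for three kinds of data.
row = conn.execute("""
    SELECT b.player, b.points, o.home_odds
    FROM box_score b JOIN odds o ON b.game_id = o.game_id
""").fetchone()
```
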

And you can’t just tell it to do it differently or be more specific?

You can try, but it’s hard. You explain the task one way, and the AI starts doing it in a completely different way. You don’t always know where it’s going to misunderstand you. So it generates a thousand lines of code, you see it’s wrong, and you just have to delete it and start over. At that point, you wonder if it would’ve been faster to just do it yourself. The line isn’t always clear.

Do you ever worry that using AI so much will make your own skills worse?

It’s a small concern, but on the other hand, AI has let me do things I couldn’t have done before. Like that web scraping code — I couldn’t have written it myself, but I could understand what the AI wrote.

For web development, my attitude is more about getting the job done. I once tried to build a simple “copy to clipboard” button. My own code, which I pieced together from Stack Overflow, worked on my computer but broke for everyone else when I put the site online. I couldn’t figure it out and eventually gave up. Recently, I gave the same problem to Claude. It just worked. I have no idea what it did differently, but it solved the problem. I don’t feel dumber because of it; I’m just happy it works.

What’s your happiest AI story?

At my job, I had to build these really complicated web forms. I’m talking about data entry pages with 30–40 fields each — text fields, number inputs with character limits, dropdowns, pop-up windows with single selections, and even pop-ups with multi-select options that had special logic. The first time I did it by hand, it took me three full days. Recently, I had to do it again. This time, I was given the requirements in an Excel sheet. I just copied the specs and pasted them into the AI tool. After a couple of small tweaks, it built the whole thing in 20 minutes. That was a very good day.

Do you use AI for anything outside of work?

Yeah, I use it for looking up information all the time. It’s often better than Google. I especially like tools like Perplexity. With the newer models, they can search the web and show you their sources, so it’s not like the old ChatGPT that would just make things up from its “memory.” I use it as a kind of personal tutor. For example, during a home renovation, I wanted to know how hard it would be to put up drywall. I had a whole conversation with the AI about the steps and could ask follow-up questions specific to my situation, like what to do with a concrete wall versus a brick one. It’s way better than a static tutorial.

I also used Perplexity for my sports project. During the European Basketball Championship, it’s hard to know the strength of national teams because they don’t play together often. So I asked Perplexity to give me a team’s roster, and then for each player, I asked it to find their regular season club and their role on that team — like if they’re a starter or a key player. Doing that kind of research manually would have taken forever.

Let’s go back to that sports project. It sounds pretty serious. Are you actually trying to build an AI to beat the sports betting sites?

That would be the ultimate goal, but I’m not there yet. Honestly, it’s more for fun right now. The betting world is incredibly tough. The bookies set the odds not just based on which team is better, but to ensure that they always make a profit. So to win, you don’t just have to be smarter than the bookie, you have to be smarter than all the other bettors. For now, I’m still just in the data collection phase. For me, the real interest is in the complex modeling problem itself. How do you even properly model all the data from a basketball game? It’s a really hard problem, and that’s what makes it so fascinating to me.

One last thing. You also teach Python for an AI program. How can a course like that keep up when the field is changing so fast?

It’s tough. The field feels like the Wild West right now. When I started my career, computer vision was a pretty stable area. You could learn how a convolutional network worked, and that knowledge would be useful for a long time. Now, things change every week. “Prompt engineering” was a huge buzzword, and now people are talking about “context engineering,” which, to be honest, is basically the same idea with more commands.

But I think these courses are still incredibly important. The world will need developers who can use AI tools like Lego blocks without knowing what’s inside. But it will also always need people who understand the deep math behind it all — the activation functions, the gradient descent methods, the things that make the models actually work. These are the people who can fix models or build new ones. That fundamental knowledge is valuable no matter what the latest trend is.
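
Those fundamentals really are compact. As a toy illustration (mine, not from the interview), plain gradient descent minimizing f(x) = (x − 3)² is nothing but the update rule x ← x − lr·f′(x):

```python
# Toy gradient descent: minimize f(x) = (x - 3)^2, whose derivative is 2(x - 3).
# The same update rule, x <- x - lr * grad, is what trains neural networks,
# just applied to millions of parameters at once.
def gradient_descent(grad, x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)  # converges toward 3.0
```
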

That’s a great way to put it. Anything else you want to add?

Just that you can’t blindly trust these tools. You have to be ready for them to give you bad answers, and you can avoid a lot of bad experiences that way. Second, the quality of the AI’s output really depends on how good your instructions are. And finally, you just have to accept that some tasks are too complicated to explain to an AI, and you have to do them yourself.