
Paul Savage has been a software engineer since the mid-90s. After working in one of Ireland’s early tech successes, Aldiscon – which created the world’s first commercial short messaging product, Telepath SMSC – he started his own business in 2000 and has been involved in 17 businesses since. The most successful of these was NearForm, builder of the COVID Tracker app in Ireland. Today at his new company, Brightbeam, Paul’s focus is on the Age of AI.
We spoke to Paul to get his insights on building with AI and what founders can do to avoid the hype and ensure they deliver real value.
Paul, you’ve been working in software for a long time. What are you doing now that’s different?

Paul Savage, COO of BrightBeam and New Frontiers alumnus
Brightbeam is a service company helping enterprises gain advantage from AI, particularly the newer generative AI tools. We're just over two years old, founded specifically to harness generative AI capability, which we see as the first step towards our vision of being The Integrator of Digital Intelligence. The biggest thing that sets us apart is that we deliver business value by bringing ideas into production, not just to prototype stage.
To give you some examples, we built an AI solution that scans outpatient receipts for a health insurer and then interprets the receipt, validates the information, and helps users fill out the form – resulting in faster reimbursements. We built an internal knowledge management tool for an ERP company to un-silo information and make it accessible using natural language. We helped a charity with a large volunteer base match service users with volunteers based on criteria such as location and shared interests.
Everyone’s talking about how AI accelerates development. What are the pros and cons of that acceleration?
Yes, AI is enabling people who've never coded before to build software, and it's accelerating what experienced developers can do as well. We're seeing a massive reduction in the cost of getting a software product to market, and that is genuinely transformative.
But there’s a serious pitfall. A junior developer can now use something like Claude Code and spin out a system with 80,000 lines of code. Historically, we ensured quality by having one developer write the code and another review it. But you simply can’t review 80,000 lines of code, so it gets deployed as-is.
In the rush to get features built, you end up with a huge codebase and don’t realise how low quality the code actually is until it’s in production. Then you have security issues, it’s difficult to modify, and it becomes brittle very fast because things are connected internally that shouldn’t be. In a compliance environment, you’re in real trouble.
If the code quality improves, will that solve the problem?
You may previously have considered your software IP a barrier to competition, but it's probably not a barrier anymore. The moat around SaaS is gone in that way. I would argue that market position is the only thing that protects a product anyway, and that's even more true now because products can be replicated so quickly.
The tools are getting better all the time – this year there was a major Claude release that makes it a serious tool for developers, and it’s going to improve continuously. But that just accelerates the replication problem.
There must be a temptation to add AI to whatever solution you’re building so that it feels new and relevant?
There is a tendency to think that if you have a good business idea and you’ve built a platform, but there’s no AI in it, you could just sprinkle a bit of ‘AI sugar’ on top. But my question would always be ‘Why?’ What specifically is the AI doing? Is there an experience lift? Are you bringing in natural language? Is it dealing with unstructured data in a way that makes things easier? Is it predicting something useful or generating content that adds value?
Scaling businesses (like those on track to become Enterprise Ireland High Potential Start-Ups – HPSUs) obviously want to be building where there’s momentum. Historically that was the dot-com boom, then mobile internet, smartphones, the cloud, containerisation. AI is a tsunami compared to those. But that doesn’t mean you jump on it just because it’s there. It shouldn’t be a default. I see too many people thinking ‘I have a problem; I’m going to throw AI at it’. That’s not the right approach. There should be a reason to choose it as a technology.
If you do decide AI makes sense for your product, what does it take to use it well?
Domain knowledge is crucial. When you know what you want, you can make AI jump through hoops exactly the way you want. That's powerful, and it enables you to move faster. But it depends on what you're trying to do.
When we write code to summarise 60,000 phone calls a week, for example, every millisecond counts. You have to sculpt the code very carefully. Realistically, AI-generated code is still too blunt a tool to do that. So you need experts in the mix. They ask AI to set up the system – generate the skeleton and solve the blank page problem – and then they tell it exactly what they want.
What’s the biggest difference when you’re building with AI versus traditional software?
The amount of iteration after you get the features complete has changed. Let’s take that simple example of summarising a phone call. The system grabs the transcript, runs it through an LLM, and summarises it for you very quickly. If you want it to work well in a specific context – maybe you want to remove personally identifiable information or you want the summary to always include the product model discussed – you have to iterate the solution to get it to do exactly that really well.
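To make that concrete, here is a minimal sketch of the kind of summarisation step Paul describes, assuming an OpenAI-style chat API. The model name, prompt wording and redaction rules are illustrative placeholders, not BrightBeam's actual implementation.

```python
# Minimal sketch of a call-summarisation step (illustrative only).
# Assumes the OpenAI Python client and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "Summarise the call transcript in three bullet points. "
    "Always state the product model discussed. "
    "Do not include names, phone numbers or other personally "
    "identifiable information."
)

def summarise_call(transcript: str) -> str:
    """Run one transcript through the LLM and return the summary text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",   # placeholder model choice
        temperature=0,         # favour consistent output over creativity
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

Most of the effort then goes into iterating that prompt, and the checks around it, until it behaves well on real transcripts – exactly the post-feature iteration described above.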
With old-style software development, by the time you had built your features, 80% of the effort and 80% of the budget were gone. With modern AI systems, your features are complete when you're just 20% in. The question is no longer 'will it do something' but 'how well will it do it'. Think about whether it gives you the same quality of result every time, and build proper testing around that.
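One lightweight way to put 'how well will it do it' into practice is to rerun the same input several times and assert the properties you care about. This hypothetical check works with any summariser function, such as the summarise_call sketch above; the assertions are deliberately crude placeholders for real evaluation criteria.

```python
# Tiny consistency check (illustrative): rerun one transcript several times
# through any summariser function and verify the points that matter.
from typing import Callable

def check_consistency(summarise: Callable[[str], str],
                      transcript: str, runs: int = 5) -> None:
    summaries = [summarise(transcript) for _ in range(runs)]
    for summary in summaries:
        # Placeholder checks: the real criteria depend on your use case.
        assert "model" in summary.lower(), "summary omits the product model"
        assert "@" not in summary, "summary may contain an email address"
    print(f"All {runs} runs passed the basic checks")

# Example usage with the summarise_call sketch above:
# check_consistency(summarise_call, sample_transcript)
```

In a real project this grows into a proper evaluation suite with representative transcripts and scoring, but the principle is the same: treat output quality as something you test, not something you assume.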
Once you have something working well, what do you need to watch out for?
When you use ChatGPT, you put your data in and, depending on your model, they can store it or use it for training. You couldn’t have a highly sensitive use case in an open model like that.
When we deploy systems, we use models that are solely for the specific use case and blocked off from the world completely. None of the data is used for training or stored. You take the data, interact with the LLM, then the data goes away. Most companies we work with have had data governance in place for some time and newer factors like the AI Act are adding security layers on top of that.
But there are absolutely people using AI in ways that share too much information. Everyone should get to know the configurations for every tool they’re using. Turn off what you don’t want. If you have high-risk sensitive data, deploy your own model. If your data is very high risk, look at using local inference applications, which are literally air-gapped so the data isn’t even connected to the internet.
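Paul doesn't name a specific tool here, but one common way to keep inference entirely on your own hardware is a local runtime such as Ollama. The sketch below assumes Ollama is installed, a model has already been pulled, and the server is running on the same machine, so requests never leave the host.

```python
# Sketch of local inference (illustrative), assuming an Ollama server on
# this machine with a model already pulled, e.g. `ollama pull llama3`.
import ollama

def summarise_locally(transcript: str) -> str:
    """Summarise a transcript without sending data off the host."""
    response = ollama.chat(
        model="llama3",  # placeholder; use whichever model you host
        messages=[
            {
                "role": "system",
                "content": "Summarise this call transcript without including "
                           "personally identifiable information.",
            },
            {"role": "user", "content": transcript},
        ],
    )
    return response["message"]["content"]
```

Whether this counts as fully air-gapped depends on the rest of your setup, but the point stands: for the most sensitive data, the model comes to the data rather than the data going to a third-party service.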
What should founders be doing about the use of AI in their own companies?
You have to get specific. Think about the data in your business and product and decide where you want it to flow. Get familiar with the terms and conditions of the tools you use. Configure them accordingly – like keeping data processing within the GDPR zone, for example.
The challenge every business has is that this is a genie out of the bottle. Everyone uses AI to some extent, for a wide variety of uses. It’s clearly a help in a business setting, so if you don’t set up a secure system for people to use, they’re probably going to use insecure public systems.
If you don’t have an education programme in your company, people won’t know how to use these tools securely. AI training is going to become mainstream, just like cyber training did. Enterprise Ireland and the IDA are rolling out education and supports in this area, so make sure you’re educated. We can expect some new ISO numbers to come along specifically around AI too, building on the recently added ISO42001 standard.
What are the limits here? Where should we not rely on AI?
Large language models are prediction engines. Think of it like a five-year-old. If you ask a five-year-old a question and they know the answer, they’ll answer to show off, to impress you. If you ask a question they don’t know the answer to, they make something up because they don’t like admitting they don’t know things. They aren’t built to say ‘I don’t know’. If they have enough information, they’ll back the answer up and it might sound plausible. LLMs are exactly the same.
We divide use cases into discriminative and extrapolative. Discriminative is where you take a body of text and summarise it down. At that level, LLMs don't make many mistakes and hallucinations are low. Extrapolative is when you're asking questions it doesn't have data for. It will have a go, because it's built to generate text whether it's grounded in reality or not. So it will sound very credible, but it may still be wrong.
Where you’re asking for a lot of extrapolation – such as writing marketing content – put a human in the loop and make sure the result has been validated and makes sense. A point worth making here is that things do keep evolving and improving. Every 90 days the tools and models are improving and some of these things are becoming less of a problem in some ways. The only thing we know for sure is that in the next year this will keep evolving and improving. Which is why education and learning are essential.
There are lots of AI gurus online giving out all kinds of advice. Who should founders listen to?
It’s a problem. There’s a certain amount of hype and pure nonsense being spoken, and definitive statements being made that just aren’t true in practice. One example is the idea that data has to be very clean and well-structured to get value from AI. Whilst you can get MORE value from AI if your data is clean, the reality of generative AI is that you can use unstructured data and get value from it today. What was true in the world of machine learning (ML) doesn’t necessarily hold true now. Even better, when correctly implemented, Gen AI tools can clean an organisation’s data over time.
Another idea is that pretrained models aren't good enough and you should train your own. But what we've seen is that you can get a head start using something like ChatGPT and then go back and train your own models to get even more value in terms of greater control and accuracy. ChatGPT is pretrained, so it does things out of the box. With models you host yourself, you'd have to train them. But you don't have to choose one approach forever.
You can see there’s quite a lot of stuff people are saying that could be true in their context but won’t be in yours. That’s why I would never give anybody casual or generic advice about AI. Data varies so much from business to business, and the data changes the behaviour of the system, so there’s no single approach. Get specific, understand your business and your data first. It’s the only way to cut through the noise.
Learn more about Paul on LinkedIn and his company at brightbeam.com.
About the author
Scarlet Bierman
Scarlet Bierman is a content consultant, commissioned by Enterprise Ireland to fulfil the role of Editor of the New Frontiers website. She is an expert in designing and executing ethical marketing strategies and passionate about helping businesses to develop a quality online presence.