
Talking Tech with Boomerang



Welcome to another episode of our Talking Tech series, where we sit down with different companies to explore the technologies that power their products. Recently, we had an insightful discussion with the team at Boomerang. So join us for this talking tech session featuring our head of engineering, Nika Jorjoliani, alongside Boomerang’s CEO, Peter Tanner, and CTO, Dave Kimberley.

Briefly About Boomerang

Boomerang, established in 2010, is a company known for its expertise in messaging solutions. They specialize in automating omnichannel digital communications, making it easier to manage stakeholder engagements, time-sensitive communications and alerts. Boomerang’s technology tracks individual messages and manages notifications across diverse platforms and multiple communication channels. Their solution allows companies to build communication workflows that automate business processes in real time.

More About Boomerang’s Roadmap

Peter Tanner: Even with a 15-year presence in the digital engagement space, I would still describe Boomerang as a boutique business. Our solutions overcome fundamental problems that exist with messaging. Over the last 15 years, one consistent observation has been that communication almost always seemed like an obstacle or even an afterthought.

When a two-way message is sent from an automated process to a person, the response can be managed by the process. Where this fails is when more than one message requiring a response is sent to the same individual: the workflow breaks because it cannot know which message is being responded to. The workaround in this scenario is to use another comms channel; however, this delivers a very poor user experience, adds development and IT overhead and, ultimately, costs. This restriction is a significant issue when building automated processes over SMS.

So, we decided to fix that problem and now provide a service that guarantees every message sent through Boomerang can have its response matched to the exact record, irrespective of the number of messages sent or the order of replies.
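To make the matching problem concrete, here is a minimal Go sketch of one common workaround: keying each outstanding conversation to a dedicated sender number, so that the destination of an inbound reply identifies the record it answers. This is purely illustrative and is not a description of Boomerang’s patented Intelligent Messaging; all names and numbers are invented for the example.

```go
package main

import (
	"errors"
	"fmt"
)

// Message is one outbound two-way message awaiting a reply.
type Message struct {
	ID        string // workflow record this reply must be matched to
	Recipient string // the person's phone number
	SenderNum string // the number the message was sent from
}

// Correlator matches inbound replies back to the record they answer.
// It keys on (recipient, sender number): if every outstanding message
// to the same recipient goes out from a different sender number, the
// pair is unambiguous regardless of how many messages are in flight.
type Correlator struct {
	pending map[[2]string]Message
}

func NewCorrelator() *Correlator {
	return &Correlator{pending: make(map[[2]string]Message)}
}

// Track registers an outbound message that expects a reply.
func (c *Correlator) Track(m Message) {
	c.pending[[2]string{m.Recipient, m.SenderNum}] = m
}

// Match resolves an inbound reply (from, to) to the original record.
func (c *Correlator) Match(from, to string) (Message, error) {
	key := [2]string{from, to}
	m, ok := c.pending[key]
	if !ok {
		return Message{}, errors.New("no outstanding message for this reply")
	}
	delete(c.pending, key) // the conversation slot is free again
	return m, nil
}

func main() {
	c := NewCorrelator()
	// Two open questions to the same person, sent from different numbers.
	c.Track(Message{ID: "appointment-42", Recipient: "+447700900001", SenderNum: "+447700900100"})
	c.Track(Message{ID: "survey-7", Recipient: "+447700900001", SenderNum: "+447700900101"})

	// The reply's destination number tells us which record it answers.
	if m, err := c.Match("+447700900001", "+447700900101"); err == nil {
		fmt.Println("reply belongs to record:", m.ID) // survey-7
	}
}
```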

Boomerang has subsequently patented Intelligent Messaging in more than 40 countries globally. Our origins lie in offering a standard messaging service; however, once we understood the level of communications automation Intelligent Messaging offered, we developed several business process solutions with Intelligent Messaging at their core.

After several years, we had a stable of solutions, all providing different capabilities, from M2M alerting to mustering, customer support to weather alerts. While we had some success, we struggled to win new customers. The problem we faced was letting the world know what we could do. With so many products and solutions, we were hamstrung in getting our message out there; we were a jack of all trades and master of none.

So, at the beginning of 2023, as we came out of Covid, we decided to change our approach and give our customers the freedom to access our services from one single place, use our software to build whatever they needed, connect their processes and data to anything they wanted and allow them to control the whole process from start to finish, without having to write a single line of code.
We call this boomLogic.

From within boomLogic, customers will be able to select from pre-defined system templates and edit them for their use or build their own from scratch. They will be able to connect APIs to receive and send data, create groups and sub-groups, build user-initiated inbound and 2-way outbound processes, count responses, build exclusive websites to handle payments or ordering, build dynamic engagement journeys for surveys, choose the customer’s preferred communication channel, send and receive files, use short links, insert dynamic data into messaging, extract and reuse specific message data, and much, much more.
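As a rough illustration of just one of these capabilities, inserting dynamic data into messaging, here is a minimal Go sketch using the standard text/template package. The template wording and field names are invented for the example; boomLogic itself exposes this through a no-code interface rather than through code like this.

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

func main() {
	// A reusable message template with dynamic placeholders.
	tmpl := template.Must(template.New("reminder").Parse(
		"Hi {{.Name}}, your appointment is on {{.Date}} at {{.Time}}. " +
			"Reply YES to confirm or NO to rebook."))

	// Per-recipient data merged into the template at send time.
	data := map[string]string{
		"Name": "Alex",
		"Date": "12 June",
		"Time": "14:30",
	}
	if err := tmpl.Execute(os.Stdout, data); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
```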

This, in turn, will allow customers to build a myriad of solutions specific to their own business or their customer’s businesses, including automated appointment scheduling and re-scheduling, large-scale mustering, escalation of Machine-2-Machine or incident alerts, managing communications around support tickets, with both customers and field-based engineers, enabling self-serve customer support, processing payments, online form completion, chasing incomplete documents, image capture, and so much more.

We feel our target audience is the development community and technology companies, as boomLogic negates the need to build complex communications capabilities into existing software applications, removing risk and cost and allowing easy additions and changes as customer and stakeholder requirements change.

Sharing More Technical Insight

Nika Jorjoliani: Can you provide an overview of your technical stack and highlight some key aspects before we dive deeper into the specifics?

Dave Kimberley: We were fairly early adopters of cloud. We’ve been using cloud platforms for 15 years. Back in the day, when everybody was running around with bare-metal servers that were constantly failing and having to be replaced, we were already on the cloud and didn’t have those failure points. Over the past couple of years, we’ve strengthened that cloud platform, and over the past 18 months, we’ve introduced Kubernetes to orchestrate the services on it. We run our own private cloud across three data centers in the UK, with points of presence in Europe and the US.
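To give a flavor of the flexibility that Kubernetes orchestration brings, here is a hedged Go sketch using the official client-go library to scale a deployment up. The namespace ("messaging") and deployment name ("sms-gateway") are hypothetical, not Boomerang’s actual services, and the snippet assumes it runs inside a cluster with a suitable service account.

```go
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster configuration: credentials come from the pod's service account.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	ctx := context.Background()
	deployments := client.AppsV1().Deployments("messaging")

	// Read the current replica count, then scale out.
	scale, err := deployments.GetScale(ctx, "sms-gateway", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	scale.Spec.Replicas = 10
	if _, err := deployments.UpdateScale(ctx, "sms-gateway", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("sms-gateway scaled to 10 replicas")
}
```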

Nika Jorjoliani: Before you went to Cloud, did you have any type of different architecture that you were working on? Fifteen years ago, when you were just starting, Cloud was not what it is now. How was that journey, and what were your biggest problems during that time?

Dave Kimberley: Originally, the solution ran on a single dedicated server. This was going back to 2008, when a single server meant no redundancy, no scalability, and dangerous territory. Some companies still run on single-server solutions today. But back in 2008, we needed something that could scale and be highly fault tolerant, and a single-server solution was never going to provide that.

In 2008, we established our own cloud infrastructure. Normally, clouds are hosted in a single data center with multiple network connections coming in. However, our cloud stands out because it is a solution where we can replicate data in real time between many geographically split locations, eliminating the limitations of a single physical site.

In terms of how it’s evolved over the years, our system has notably become faster. Every other week, we have access to brand-new CPUs that provide a 20% increase in performance compared to the previous generation. So, over the years, we’ve just ripped out hypervisors, upgraded CPUs, upgraded memory, and thrown them back in. This level of flexibility is unique to a cloud environment, and it would be quite challenging to achieve such upgrades on a bare-metal setup without experiencing extensive downtime.

Nika Jorjoliani: How did your customer base respond when you transitioned to the Cloud in 2008 when it was relatively new? Were there significant customer additions, and how did they react to any outages they might have experienced during this transition?

Dave Kimberley: By 2009, customers were using that platform, but the technology of that platform also evolved. Back then, we were using .NET extensively, with Microsoft servers and SQL databases. I think .NET has slowly been eroded over the past 15 years, but it was still a big deal at the time. Part of moving to the cloud was moving away from .NET to PHP. Now, we use PHP and Go.

Our ongoing efforts revolve around security standards. This includes strengthening encryption of data both in transit and at rest and ensuring full compliance with the latest security benchmarks. Our goal is for the platform to smoothly absorb substantial growth, up to ten times its current volume, without downtime or operational stress, thanks to sufficient computing capacity. Kubernetes gives us the flexibility to deploy as many services as needed.
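For a concrete, if simplified, picture of encryption in transit, here is a minimal Go sketch of an HTTPS server that refuses legacy TLS versions, using only the standard library. The certificate paths are placeholders, and this is a generic hardening pattern rather than Boomerang’s actual configuration.

```go
package main

import (
	"crypto/tls"
	"log"
	"net/http"
)

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.Write([]byte("ok"))
	})

	srv := &http.Server{
		Addr:    ":8443",
		Handler: mux,
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS12, // refuse legacy protocol versions
		},
	}
	// cert.pem / key.pem are placeholder paths for the server's certificate.
	log.Fatal(srv.ListenAndServeTLS("cert.pem", "key.pem"))
}
```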

Nika Jorjoliani: Did you develop the entire architecture internally, or did you need any outside assistance? Managing three separate data centers is a big deal, and it must have required a lot of planning, right?

Dave Kimberley: You might expect that managing multiple data centers would be a substantial task, but cloud technology simplifies it more than you’d think. If we were running dedicated servers in a traditional setup, that would involve constant maintenance, replacing components, and dealing with downtime. However, with the cloud and VMware, adding a server is as straightforward as installing it and letting it boot from the network to load the image from the central console. All hypervisor management is done remotely, making it a smooth process to manage an on-prem cloud across multiple data centers using reliable software.

We’ve done a lot of scripting and coding to automate everything, even down to the firewall rules. We use hardware firewalls because they scale better than software firewalls, and all of the rules on the firewalls can propagate between data centers. For example, in the event of a DDoS attack, we mitigate it upstream within the network, and we distribute that information to all our firewalls and switches to block it at the source. You couldn’t normally do this with an old-school, hardware-based, non-cloud solution.
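As a hedged illustration of this kind of automation, the sketch below pushes a block rule to several firewall management endpoints over a REST API. The endpoints, the /v1/block-rules path, and the payload shape are all hypothetical, since real firewall APIs vary by vendor; the point is only the fan-out pattern of propagating one rule everywhere.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"time"
)

// firewalls are the management endpoints in each data center.
// These URLs and the /v1/block-rules path are hypothetical.
var firewalls = []string{
	"https://fw.dc1.example.net",
	"https://fw.dc2.example.net",
	"https://fw.dc3.example.net",
}

type blockRule struct {
	SourceCIDR string `json:"source_cidr"`
	Reason     string `json:"reason"`
}

// propagate pushes one block rule to every firewall so an attack
// mitigated in one data center is blocked everywhere.
func propagate(rule blockRule) {
	body, _ := json.Marshal(rule)
	client := &http.Client{Timeout: 5 * time.Second}
	for _, fw := range firewalls {
		resp, err := client.Post(fw+"/v1/block-rules", "application/json", bytes.NewReader(body))
		if err != nil {
			fmt.Printf("%s: %v\n", fw, err)
			continue
		}
		resp.Body.Close()
		fmt.Printf("%s: %s\n", fw, resp.Status)
	}
}

func main() {
	propagate(blockRule{SourceCIDR: "203.0.113.0/24", Reason: "ddos"})
}
```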

Differences Between Public and Private Cloud

Dave Kimberley: There’s also a difference between public cloud and private cloud. Some organizations use public cloud services like GCP or AWS, which is acceptable. But people fail to understand that it’s still a shared public cloud: the same hypervisor that’s powering your code on that cloud is powering other companies’ code as well. So you can end up with noisy neighbors, where you have degraded service because another customer’s workload is causing you contention.

In fact, that’s why we went private. In a private cloud, we can define rules, configurations and security settings in ways you couldn’t on a vanilla public cloud.

Nika Jorjoliani: Let’s discuss the technical side of private cloud vs public cloud. Are there any differences when it comes to setting up cloud infrastructure on a private cloud as opposed to a public one? Also, what are the differences price-wise?

Dave Kimberley: Initially, if you go public cloud, it’s fairly cheap because you don’t have to buy hardware and network gear. With a few clicks, you can access computing power or manage databases. It’s a great deal, as getting your systems online costs very little. 

However, as you continue to scale on platforms like AWS, the total cost of ownership becomes significant, especially if your software experiences varying usage patterns, being quiet at night and busy during the day. It can be challenging to calculate these costs accurately. When you crunch the numbers, it is around 40% more expensive to use a public cloud compared to a private one.
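To show the shape of that calculation, here is a toy Go comparison of three-year total cost of ownership. Every figure is hypothetical and chosen only to illustrate how an up-front hardware outlay can still undercut pay-as-you-go billing; real pricing varies enormously by workload.

```go
package main

import "fmt"

func main() {
	// All figures are hypothetical, purely to show the shape of the comparison.
	const months = 36.0

	// Public cloud: pay-as-you-go compute billed every month.
	publicMonthly := 8200.0
	publicTotal := publicMonthly * months

	// Private cloud: hardware bought up front, amortized over its life,
	// plus colocation, power, and maintenance.
	hardwareOutlay := 120000.0
	privateMonthly := 2500.0
	privateTotal := hardwareOutlay + privateMonthly*months

	fmt.Printf("public cloud over 3y:  £%.0f\n", publicTotal)  // £295200
	fmt.Printf("private cloud over 3y: £%.0f\n", privateTotal) // £210000
	fmt.Printf("public premium: %.0f%%\n", (publicTotal-privateTotal)/privateTotal*100)
}
```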

There are also drawbacks. Public clouds require DevOps engineers and a lot of additional skills specific to the services you’re coding against. Another drawback of a public cloud is that transitioning away can be challenging if you’ve designed your application around the AWS ecosystem; in effect, you’re locked into their environment. You can’t just go and quickly port your code somewhere else without recoding certain components and then finding engineers who understand the private cloud. So, all things considered, the private cloud often proves to be a more cost-effective and configurable solution.

Nika Jorjoliani: That’s surprising and really nice to hear. I’ve noticed AWS and, generally, the public clouds restrict you in some ways. They force you to use their services, so you don’t have much flexibility at that point.

Dave Kimberley: In October last year, the software company Basecamp made a notable move when they left the cloud. They left AWS mainly due to escalating costs. The actual performance of the platform was suffering because of noisy neighbors, and they were locked into a situation where, if AWS were to increase their prices by 15% tomorrow, they’d be left with limited options and flexibility. Basecamp was one of the most prominent companies to publicly walk away from AWS, and since then, many others have followed and moved back to their private clouds.

In terms of the design of the network and the architecture, that was something we did ourselves. In my experience as CTO here, I’ve come from heavy transactional platforms, building solutions that scale into tens of millions of transactions per day. This design scales in every component and will allow us to handle as many transactions as we need.

Nika Jorjoliani: I’ve always thought the private cloud would be much more expensive. But now that I listen to your points, it makes much more sense.

Dave Kimberley: And it can be in the short term because you’ve got to buy that hardware. But it’s a cloud. So, you could start small and then throw in more hardware resources as you generate more revenue. And it depends on how you build it. We went pretty hard straight away because we wanted to build something phenomenal. So initially, there was a big outlay to buy and build that hardware and infrastructure. But once it’s done, it’s easy in the long run.

Future Development Plans

Nika Jorjoliani: Since there are so many touch points across your application, does it include an analytics tool, or are there plans to implement one in the future?

Peter Tanner: We’re currently in the testing phase. Once we move this system into production, it will include comprehensive analytical data. This data will help us and our customers understand user trends so that we can make adjustments, gain insights, and facilitate learning. Analytical data is crucial for this purpose; it’s the key to learning and improving.

Nika Jorjoliani: You’ve mentioned that the application knows what to answer. This has to be pre-programmed by people, I imagine. But I’m sure you’re already thinking of incorporating artificial intelligence. Can you give us a hint of what you have in mind?

Peter Tanner: There’s no denying AI is here, and everyone wants to take advantage of it. But we’re not rushing to release it because we want to make sure that when we do, people will want it, understand it, and it’ll work as they expect. Once we’re sure about that, we’ll figure out where it fits best.

On the technical side, our physical infrastructure is ready for it. We just have to code against it: create our own engine, train it, and then introduce it into the software stack.

But without question, AI could be making decisions in our workflow process.

Meet the authors

We are a 200+ people agency and provide product design, software development, and creative growth marketing services to companies ranging from fresh startups to established enterprises. Our work has earned us 100+ international awards, partnerships with Laravel, Vue, Meta, and Google, and the title of Georgia’s agency of the year in 2019 and 2021.
