
Megascale high-performance decision engines with George Corugedo of RedPoint

If you’re shopping online for pretty much anything, there’s a very good chance the pages you’re looking at are being dynamically generated based on A/B testing and automation powered by RedPoint.

George Corugedo is the firm’s co-founder and CTO. In this episode, George sits down with Ledge to discuss how his global engineering organization built an industry-leading decision engine that returns decisions in under 50 milliseconds, and how it scales, using .NET Core and containers, to more than 30 million real-time decisions per day.

Don’t miss George’s powerful lessons learned going from small company technical founder to global CTO.


David Ledgerwood
· 25 min

Transcript

Ledge: George, thanks for joining us today. Really cool to have you.

George: Thank you. Thank you. Very excited to be here. Sounds like a fun time.

Ledge: Awesome. Can you please tell the listeners maybe two, three minutes about you, your history and your work now?

George: Sure. Happy to. I run a team of about 48 developers spanning the UK, Boston, Boulder, and Manila. We collectively work on four different products – all very interesting – that come together to deliver a customer engagement hub.

Now, just to give you a little bit of background on what that is, a customer engagement hub is a set of technology that allows you to collect data – basically everything that’s knowable – about a customer and pull it all together into a real-time availability hub, something that’s now called a CDP, or Customer Data Platform. That’s the most recent buzz term for it, but it’s really a new species of data environment. It’s not what you might think of as a data warehouse; it’s really focused on real-time availability of data.

Then we’ve got machine learning and AI, proprietary stuff that we’ve built that we then apply to the data to develop insights. Then we’ve got an orchestration layer that sits on top of that and allows marketers in enterprise organizations to deliver messaging and customer experience from any touchpoint across the organization.

It does that because we interconnect via our SDK to all of those typical touchpoints. Things like call centers and websites and mobile apps and email. Whatever it may be, we interconnect to those last mile providers, and our technology sits in the middle and gives the folks in the enterprise one point of operational control for all that messaging and experience management, and one point of data control.

That’s what we focus on. That’s what we do.

We use a gamut of technologies – everything from .NET and .NET Core all the way to Java and C++, plus lots of JSON work with all those different interfaces and exchanges of data across all those systems. Lots of fun stuff.

Now, we’re also moving all of that technology more directly into the cloud environment – particularly Azure, though we work in both Azure and AWS – and leveraging more and more of the PaaS technologies that are available in things like Azure. So the technology, which was pure software in the traditional sense – downloadable, installable software – is becoming more and more of a first-class cloud citizen now. We’re doing a lot of that conversion to the type of microservices that we would deploy in the cloud.

Ledge: So, is the solution old enough that you would have developed pre-cloud, pre-microservice type of thinking, and now you have to re-architect that?

George: Well, fortunately, not a lot of re-architecting. Some parts, yes. Some of our technology is close to 15 years old – it started back with C++ data management engines and everything. Frankly, we still outperform the likes of Informatica and lots of the bigger companies out there.

The technology is still very viable but the world now is mostly cloud oriented. So in some cases, yes, we are having to rebuild some of the execution engines that are inside of the data application. But interestingly, the .NET side of it has been much easier to move into the cloud.

What we’ve done is move all of the .NET code to .NET Core, which allows us to run in Linux and Linux containers, and that has let us very rapidly move everything into the auto-expanding environments that clients are expecting.
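For readers who haven’t made that move, here is a minimal sketch of the kind of thing .NET Core enables: a small HTTP decision endpoint that builds and runs the same way on Windows, on Linux, or inside a Linux container. The endpoint, payload, and rule below are illustrative assumptions, not RedPoint’s actual API.

```csharp
// Minimal sketch (illustrative, not RedPoint's code): a tiny decision service
// on ASP.NET Core. Because it targets .NET (Core), the same code runs on
// Windows, on Linux, or inside a Linux container without modification.
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Hypothetical endpoint: decide which offer to show a visitor.
app.MapPost("/decision", (DecisionRequest req) =>
{
    // Placeholder rule; a real engine would evaluate discrete rules
    // or score the request with a machine-learning model.
    var offer = req.CartValue > 100m ? "free-shipping" : "ten-percent-off";
    return Results.Ok(new { req.VisitorId, offer });
});

app.Run();

// Illustrative request shape; field names are assumptions, not a real schema.
record DecisionRequest(string VisitorId, decimal CartValue);
```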

Really, you can’t tell that the technology used to be more traditional software. It works just as you would expect any kind of SaaS, auto-scaling, microservices-oriented technology to work now.

I think that’s partly smart architecting and engineering early on, part luck because .NET and .NET Core have evolved the way they have. But we’re still paying off some of the debt that C++ gave us.

It’s all good. It’s stuff that we love to do.

Ledge: I don’t hear as many people talking as I would expect about .NET Core in the Linux world, and yet I know that it’s huge and there’s a lot of development there.

Maybe talk a little bit more about that experience, because I think there’s still a bit of an overhang in the open source world from the idea that Microsoft used to be evil. Now they’re doing a tremendous service there and a lot of development around Linux.

What’s that been like, and how could people explore that?

George: It’s interesting. One of the things that I’ve always been fascinated by in technology is how people ascribe personalities – the evil Microsoft, or the cool Apple, or whatever. I’ve never understood that, frankly. To me, technology is a tool, and every technology has tradeoffs. It’s just that simple.

If you want something to be successful, it’s all about figuring out what’s the best fit. What tradeoffs fit your circumstances. Then, how do you do and deliver that solution as quickly as possible.

I’ve never been Microsoft-phobic but, at the same time, I’ve always been very open to every type of technology that’s out there that’s useful.

.NET Core has been a real blessing for us because, if you work with containers much, you know that Linux containers are much more agile in what they can do than Windows containers. What .NET Core has allowed us to do is very quickly move a lot of code that has to work in a very microservices kind of way – a lot of our real-time decision engines that require that auto-scaling capability – and take advantage of that, because of its compatibility with Linux now.

An example of that is, in our orchestration layer we have our real-time decision engines that can be driven either by discrete rules or by machine learning. What .NET Core has allowed us to do is take real advantage of large numbers of containers and the whole idea of containers, which lets them replicate and diminish as necessary.

That’s a very important thing when suddenly you’re in the holidays and you get a big rush at a website, and it’s demanding lots of decisions to keep up with demand.

To us, it’s been a real blessing. Like I said, we started building some of our applications on .NET, frankly, because of the speed of development. Java is very useful in a lot of different cases, but the timeframe and level of efficiency at which we could build those applications in .NET just didn’t compare. So we were able to scale up these applications much more quickly in .NET than we would have been able to in other languages.

So now we have the good fortune of having .NET Core that allows us to really expand it across lots of different environments.

Ledge: Are you using Docker, Kubernetes? What is the actual technology stack?

George: Yeah. We actually use both of those – mostly Kubernetes now. Since we are so much in the Azure environment we use Kubernetes a great deal. It just works like a charm and the performance we’re getting out of it is really staggering.

Most of our competition, the decision engines that have to support websites or real-time decisions from call centers or mobile apps or whatever it may be, none of them can break 200 milliseconds per decision. We’ve got clients that are delivering 30 million decisions a day all at less than 50 milliseconds, at the 95th percentile.

In the world that we live in, that’s really staggering performance, and that’s including multiple stops in a single decision.

For example, when one of our clients requests a decision, they will have a stop for other feeds of data – pricing optimization from one API, product recommendations from another API that’s driven by models, whatever. We’ll have three or four stops that we have to make. We collect all of that, process it through our real-time decisioning, and send it back, and typically we’re in single-digit milliseconds.

In general, it’s really a sub-50 millisecond threshold that we’re looking to beat, and we do that.

So we get that performance from the combination of our code sitting inside of those containers. It’s worked out really well for us.
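George’s “three or four stops” gathered inside a single decision maps naturally onto a parallel fan-out with a hard latency budget. Here is a rough sketch of that pattern in C# – the URLs, response shapes, and 40 ms budget are assumptions for illustration, not RedPoint’s implementation:

```csharp
// Rough sketch of the fan-out pattern described above: call the upstream
// APIs in parallel, enforce an overall latency budget, then fold the
// results into one decision. Endpoints and field names are hypothetical.
using System.Net.Http.Json;

var http = new HttpClient();

// Hard budget for the whole decision, comfortably inside the 50 ms target.
using var budget = new CancellationTokenSource(TimeSpan.FromMilliseconds(40));

// Kick off both "stops" at the same time rather than sequentially.
var pricingTask = http.GetFromJsonAsync<PricingInfo>(
    "https://pricing.example.com/optimize?sku=123", budget.Token);
var recsTask = http.GetFromJsonAsync<Recommendations>(
    "https://recs.example.com/visitor/abc", budget.Token);

await Task.WhenAll(pricingTask, recsTask);

// Fold the feeds into a single decision payload to send back to the caller.
var decision = new
{
    Price = pricingTask.Result!.Price,
    Products = recsTask.Result!.ProductIds
};
Console.WriteLine(System.Text.Json.JsonSerializer.Serialize(decision));

// Illustrative response shapes for the two upstream services.
record PricingInfo(decimal Price);
record Recommendations(string[] ProductIds);
```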

Ledge: That’s a fantastic performance metric. You don’t hear that a lot, so well done.

George: Thank you. It’s great. Our clients love it, and it’s really put us in a great position in the whole real-time decision space and in all of the A/B testing that we do with websites. We can work on either the server side or the client side. Just lots of flexibility around that.

Ledge: Do you have a serverless use case anywhere in that? Because it seems that you could potentially go that direction.

George: Yeah, and we’re definitely looking at it. Currently, we don’t have a use case that requires it but where we’re heading with the software, certainly that is a direction that we’re heading towards.

Ledge: Sort of that stateless model of decision making.

George: Absolutely. One of the things we’re exploring is…

The history of a lot of what we do is based in workflow one way or another. It’s all about state and state changes. As we move into a world where decisions – both inbound and outbound decisions and decisions across an enterprise whether they’re being internally done or externally managed – where all of that has to run at this type of speed, the less overhead we’ve got the better. Certainly, we’re looking at how to migrate the technology that way.

There’s a shift from the current landscape, which is really an extension of that traditional workflow – where you mentally think of the Visio diagram and the arrows and the Go button – to a place where you’ve got machine learning making instantaneous decisions, working in that type of serverless, headless environment where decisions run in a very organic way across the enterprise.

That’s very much where the technology is headed, we’re just not quite there yet.

Ledge: Very interesting. So you kind of have the experience of being a technical founder all the way up now to global CTO, 50 engineers all over the world working for you.

I wonder, two or three lessons for people who aspire to such things. What do you wish you knew then that you know now? Just tips for somebody who aspires to go from founder to global CTO.

George: Well, the interesting part of all of this is, this was never the plan. So I’d say probably one of the big lessons is, be careful how hard you stick to your plan because you might miss out on some great opportunities.

My educational background is all in applied mathematics. I expected to be a professor and all that stuff, but life had other ideas for me. So, the idea of being able to take what you have and not be afraid of new challenges, I think, is really one important lesson there.

At one point, already far removed from being a professor, I was working at Accenture. That was an unlikely place for a math professor to end up, but there I was, working with Global 100 clients and solving a lot of business problems and everything else. It seems like an unlikely place to be, but one of the things that I learned is that if you can think and problem-solve, you can pretty much move that into any space you want.

What I really depended on was that ability to problem-solve. That took me to a place like Accenture, where I got unbelievable business training – some things you can’t learn at a business school. It’s very different being in that place.

It was interesting because I was there and I was considered an expert in what I did at the time, and then all of a sudden I get the bug to go do a startup in the Philippines with call centers. I knew nothing about call centers. Nothing about the Philippines or what I was really even doing out there.

I remember there were a lot of nights when I was – and nights because you work US hours so you work overnight. I remember sitting there thinking a lot of times, What was I doing? Why did I do this? Why did I take this…? I had a cushy job. I was an expert at what I did. I was paid a lot of money. What am I doing here at 3:00 in the morning at a call center in the Philippines?

But the truth is, it was a tremendous learning experience for me, and I wouldn’t change a thing of it now, because that experience really prepared me for doing what I’m doing now. I’m glad I’m not having the first board meeting I’ve ever had at this job – I had it back then.

So a lot of those experiences that you might think were kind of off to the left or to the right of what you planned, there’s always a lesson in it and there’s always a learning that you could take away from it.

I didn’t expect to be here. I certainly aspired to it in the back of my mind – that would be great. But here we are, and we’re doing phenomenally well.

What’s really interesting, I guess, is a second lesson: if you believe in a direction and a vision, then stick to it.

We’ve never been a conventional company or a company that really listens to conventional wisdom. When we entered the space we’re in, which is really marketing technology or martech, we always were very data obsessed. While everybody else was coming up with the next marketing widget or the next ad technology or DMP or whatever, we were always thinking, man, if you don’t get the data right nothing else works correctly.

Well, the market has finally caught up to us and we are like the hottest thing going right now in terms of martech because we organically really focused on data, and we organically developed our own machine learning.

For a lot of years, it was really frustrating with the analysts, who are just the bane of my existence, but nonetheless you’ve got to deal with them. They would say, “Well, you’re not really a marketing company, you’re really more of a data company.” Then when you go talk to them about the data, “Well, you’re not really a data company, you’re really more of a marketing company.” So you could never please them. But we knew the vision was right because we see the results. We see what happens when you fix the data, get it right. The marketing works.

It was a really important lesson, for us and for me, to stick to the vision. You may be ahead of the market a bit, but eventually, if it’s right, people will come around. That was a really important lesson for me as well.

I don’t know if I can come up with a third one, maybe I can while we chat here, but those are two big ones for me.

Ledge: That’s fantastic. Thanks for sharing those great insights.

I’ll go with the short, easy one at the end there. What we always ask is, we’re in the business of evaluating and vetting and hiring the best software engineers in the world, and we have a pretty rigorous system to do that.

I like to ask everybody I talk to, what are your heuristics for knowing when you’re talking to and about to hire or wanting to hire a super-senior software engineer? A+, elite, badass. Who is that? How do you measure that? Help the listeners understand because everybody wants to do that.

George: Well, that’s a great question. I think one of the reasons we’ve got so many of those really hot engineers is that we’re real flexible. One of the things about engineers is that they’ve all got personalities, and they’re kind of interesting sometimes. We look for the value in the people and what they bring, and if they are a bit eccentric, that’s fine with us.

What we don’t do, and what we really don’t put up with, is being slack. Everybody’s got to be an A+ personality. They’ve got to be obsessed with what they’re doing. They’ve got to be really self-directed. We like to see people who have started up other companies – maybe the startup didn’t work out, but people who can really initiate something. What we used to say at Accenture was “fire and forget.” You can discuss a vision with somebody and the next day they’ve got a prototype – that’s what we look for. It’s people who can iterate very quickly.

Interestingly, they have to be good at communicating, because it’s not enough that you can just code up anything in the world. You’ve got to be able to work in a team. So that ability to communicate, and to turn that communication into something tangible quickly, is a really important quality that we look for.

We look for people who have built stuff in their prior lives and have been part of really productive teams. We also look for people who are not necessarily just developers, in the sense that they’ve got to have some expertise in that particular space and industry.

Like, if we’re going to hire somebody to code up machine learning code, well they need to know machine learning inside and out in addition to coding. When we hire folks to develop stuff for our orchestration, we want people who know a lot about workflow and how it works, in addition to being able to code in five different languages.

They have to have expertise in the topic as well as the code. Coding is like the price of entry, and the expertise in that particular topic is what really makes them effective for us.

Ledge: Speaking our language. Thank you. Couldn’t have said it better myself.

George, this is a lot of fun. Thank you so much for joining us.

George: Thank you. Really enjoyed it.