Mastering the Art of Scaling: CTO Seth Carney's Deep Dive into Courier's Growth, Challenges, and Success

Episode Description

1. 🚀 Seth Carney, the CTO of Courier, an all-in-one notification platform for developers, shares his journey from an InfoSec engineer to a CTO in this episode.

2. 📈 The discussion covers the evolution of Courier, which now processes billions of events each month, showing the power of scaling.

3. ☁️ Seth talks about how Amazon Web Services (AWS) has been instrumental in accommodating their infrastructure needs and the challenges they faced along the way.

4. 🛠️ Insights are provided on the role of tracing and observability tools at Courier, how they manage a vast set of metrics and logs, and their approach to latency issues and changes.

5. 💼 The second half of the conversation revolves around the balance between generating valuable events, traces, logs, and metrics and those that are just costing money.

6. 💡 Seth's mindset on building and scaling systems has changed over the years, and he emphasizes the importance of partnerships and transparency in Courier's business model.

7. 📊 Data plays a crucial role in decision-making at Courier. They use data to inform their decisions and provide observability for their customers.

8. 🌍 The episode also touches upon Courier's plans for global expansion and the engineering challenges that come with higher service level agreements.

9. 🚧 Seth discusses the importance of overcoming hurdles and unlocking value during scaling, using various strategies to achieve success.

10. 👥 Finally, the importance of building trust with customers through data transparency and reliable notifications is highlighted. Seth shares his belief that the best scaling and growth occur when there is a great partnership between the customer and the engineering team.

Full disclosure: Courier is a customer of Propel, the company I co-founded along with Nico Acosta and Mark Roberts.

(0:00:11) - Scaling and Infrastructure Challenges at Courier
Seth Carney and I discussed his career, Courier's journey, AWS scaling, and billions of events monthly.

(0:11:01) - Tracing and Observability in Courier
Seth and I discussed Courier's tracing and observability tools, latency, experiments, traffic diversion, and tracing/sampling in development.

(0:17:53) - Scaling and Value in Engineering Leadership
Seth and I discussed value of events, logs, metrics, scaling, customer expectations, and partnership.

(0:26:32) - Building Partnerships and Transparency in Business
Seth and I discussed transparency, scale, customer needs, and partnership benefits.

(0:31:57) - Data's Role in Decision Making
Seth and I discussed data's role in Courier's success, customer observability, data warehouse access, and Propel-powered analytics.

(0:39:25) - Reliable Notifications and Data Analytics
Data can inform decisions, build trust, provide insights, and drive engagement.

(0:50:27) - Expanding Global Footprint and Engineering Challenges
Seth and I discussed global expansion, engineering challenges, product updates, data residency, and SLAs.

(0:59:44) - Improving Speed and Discussing Family Life
Seth and I discussed efficient solutions, investing in hardware, data-driven decisions, global expansion, and SLAs.

0:00:11 - Tyler Wells
Welcome to the Data Chaos podcast. On today's episode, we have a conversation with Seth Carney. Seth is the CTO of Courier, the all-in-one notification platform for developers. Before Courier, Seth has pretty much done it all. During the conversation, we'll dig into his background and the hard-earned experience he's gained building at scale. This conversation gets in the weeds, so to speak, and that's a good thing, as we discuss some of the tradecraft Seth has developed building at Courier. This was a fun discussion and one that I hope to have more often with Seth as he and the Courier team continue to grow and evolve. So, as always, sit back, relax, and enjoy the conversation. All right, Seth, thank you for joining me on the Data Chaos podcast. I appreciate you taking the time out of your very busy day to sit here and have a conversation with me.

0:01:04 - Seth Carney
Yeah, it's great to be here.

0:01:06 - Tyler Wells
Well, it's obviously great to have you. Not only am I a fan of Courier, but you all are also a customer, so it's great to have this conversation and actually learn a little bit more about Courier. You're currently the CTO there. Before we jump into that, though, let's hop into your background. How did you end up at Courier, and what were you doing before that?

0:01:29 - Seth Carney
Oh gosh, wow. That's a long story, because I'm sure the gray hair betrays me a little bit. Professionally, I started longer ago than I'm willing to admit, but I was actually an InfoSec engineer, basically doing white-hat ethical hacking, penetration testing, and building secure applications. I managed to get a gig at a startup while I was in college and, as most startups do, it failed. This was right around the time of the bubble burst. I left there and actually joined a not-for-profit, where I was working as a software engineer. I spent some time there and worked up to a team lead.

Incidentally, that's where I met Troy, who is the founder of Courier. I spent some time there building out their software systems, a variety of different stuff, primarily for benefits. I actually left, though, and went to a company called Eloqua. Eloqua, some folks who are in the marketing space might recognize, was the Magic Quadrant leader in marketing automation, and I was there through their major UI rebuild, IPO, and then acquisition by Oracle. I spent some time at Oracle, but was really missing the startup life, so I left to jump back in and took a chief architect role at another startup, where we were working in the transportation and logistics space. Again, as many startups do, it failed, unlike the success we found at Eloqua.

Odd story along the way: Troy and I actually worked together at all of the companies I mentioned, so professionally we've spent a bunch of time together, let's say 17 or so years across four jobs. When he founded Courier, he reached out and, honestly, I was really pumped by the idea. It's something that he and I both had experience trying to get teams to build before. You always build something that's not as good as what you envision, primarily because it's not your core value proposition. Everyone needs notifications. Everyone needs really great engagement with their customers. That exists a lot at the marketing level; it doesn't exist at the transactional layer. So I was really interested in what he was pitching and made the jump from where I was to come over here, and the last almost four years are, approximately, history.

0:03:58 - Tyler Wells
Four years, though in startup time is it triple or is it quadruple in terms of the years, or maybe dog years?

0:04:06 - Seth Carney
It's some weird point on the space time continuum, because it's both fast and slow. Some days I will tell you it felt like six months, and there are some days I will tell you it felt like 10 years. So depends on the day you ask me.

0:04:20 - Tyler Wells
And so let's talk a little bit about Courier. You send a lot of messages, send a lot of notifications. What's the scale if you can tell me, what was the scale when you joined and what is the scale now, if you remember?

0:04:35 - Seth Carney
Oh, I remember well. I remember when we signed our first customer, and then I remember when our first customer actually really started sending messages through Courier.

You're always trading on relationships very early on in the inception of a company, and we did similar things, trying to get people we knew in the industry to pick up and use the product and give us feedback. And that's your first customer.

They don't really usually go live, and then we got some folks really using the product and giving us real feedback and things of that nature. And I do remember it was just a few messages. Literally, Courier's own messages were the only thing going through the product. I'm sure there was a point where we were sending 10 or 15 messages a day, and they were probably all to ourselves. Now we effectively process billions of events throughout our system over the course of a month, and events mean a wide range of things: they're profile updates and message sends and routing and workflow invocations and all kinds of other things. But it's a pretty far cry from where we were, certainly when we were just sending messages to ourselves.

0:05:55 - Tyler Wells
From just a few to billions. And let's talk a little bit. Who's your infrastructure provider?

0:06:00 - Seth Carney
We're hosted on AWS.

0:06:02 - Tyler Wells
And how has that growth been during your time on AWS, and when was the first time you sort of hit that oh-shit moment of real scale?

0:06:13 - Seth Carney
Well, I guess there have probably been several oh-shit moments over the course of those few years. But I think the bigger issue is that they always hit when you're least expecting it. I remember one time, and this was our own fault: you're building fast, you're building furiously early on in the course of the product, and you don't always say, oh, I'm going to predict this usage pattern at this scale. We had an early version of the product, and we had written some code, I think, 14 months prior, and we didn't anticipate a certain usage pattern.

I ride my bike to work, and I got off my bike at the office and my phone was going off. It was Troy furiously trying to get a hold of me, and he's like, everything's delayed, everything's delayed, what's going on? And it's like, what the heck is happening? We were getting huge bursts of inbound traffic from, in this case, one particular customer, and, oh gosh, it just blew up the table. So we had to frantically figure out how to get pressure relieved from it and then figure out how we were going to replace everything, because that couldn't happen again. And then there have been all kinds of little variations along the way. In that case it was a hot key.

I think we might have even gone and written a blog post, like, hey, let's turn this lesson into something we can create some content around and share our hard-learned stuff. So I think at some point we wrote an article in terms of, hey, here's something you should avoid, and some best practices. But yeah, I know you all at Twilio ran a really large Elasticsearch cluster. It's fun running into cluster limitations and having to upgrade the instances and all that stuff, which has happened along the way. I feel like if it could pop, it has. We've hit API Gateway limitations. We've been to the point where, on our Lambda infrastructure, with latency and cold starts, we couldn't pay for enough provisioned concurrency. So, you know, all kinds of little fun there.
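The hot-key incident Seth describes is a classic DynamoDB failure mode: one bursty customer concentrates writes on a single partition key. A common mitigation is write sharding, sketched below under assumed names and an illustrative shard count (not Courier's actual schema): append a calculated suffix so writes spread across partitions, and have readers scatter-gather across the suffixes.

```python
import hashlib

SHARD_COUNT = 8  # assumption: sized from the expected peak write rate per logical key

def sharded_key(partition_key: str, item_id: str) -> str:
    # Deterministically pick a shard suffix from the item id, so writes for a
    # single hot logical key (e.g. one bursty tenant) spread across N partitions.
    digest = hashlib.sha256(item_id.encode("utf-8")).hexdigest()
    shard = int(digest, 16) % SHARD_COUNT
    return f"{partition_key}#{shard}"

def all_shard_keys(partition_key: str) -> list[str]:
    # Readers query every shard suffix and merge the results.
    return [f"{partition_key}#{s}" for s in range(SHARD_COUNT)]
```

The trade-off is read amplification: every read fans out across all shards, so the shard count is kept as low as the write rate allows.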

0:08:42 - Tyler Wells
Well, I feel like battling scale and building for scale is sort of never-ending. There's always something that's going to come up, there's always something that's going to fail, and there's always some new usage pattern that, like you said earlier, you didn't necessarily anticipate. Because a lot of your stuff is exposed via an API, like everybody's, customers can do strange things. Customers can write buggy software that calls things in a loop, constantly smashing your gateways, and things start to spin out of control. Sure, customers do that, but we have to protect ourselves against it, because we all run multi-tenant infrastructure. How do you prevent that one customer from running away and taking down all the rest?

0:09:28 - Seth Carney
Well, it's funny. I feel like I've talked to so many people at this point, and the base concept always ends up being the same: you have some kind of priority rate queue. I know I'm talking about a really specific design pattern there, but if I'm being super reductive, I feel like the majority of things come down to understanding your workload, having the right separation, and having the right design patterns in place, understanding how the load is distributed through your system, things of that nature. It's really interesting: a lot more people are running into and solving problems that require, I don't want to say distributed computing, but a better understanding of distributed systems. AWS is forcing this in a lot of cases.

Right, you don't necessarily always have a single place where you're performing your workload; you're sharing that workload across multiple events or concurrencies within your implementation that you need to correlate, or whatever it is. So increasingly you have to have a better understanding of how that load is flowing through, how people can impact and influence it, and do your best to cordon it off, which is sometimes easier said than done. And then the big thing is having a backstop around observability. Make sure you get the right insights into your system, make sure you're getting alerted, and make sure you understand when things are going sideways: you understand that baseline behavior, and when things deviate from that baseline you've got some ability to react to it.
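The "priority rate queue" idea can be made concrete with a minimal sketch: a round-robin queue keyed by tenant, so a tenant that enqueues a huge burst can't starve everyone else. This is an illustrative toy (class and method names are mine, not Courier's) that omits persistence, priorities, and rate limits:

```python
from collections import deque, defaultdict

class FairTenantQueue:
    """Round-robin across tenants: each dequeue serves the next tenant
    with pending work, so one noisy tenant can't monopolize the workers."""

    def __init__(self):
        self.queues = defaultdict(deque)  # tenant_id -> pending jobs
        self.ring = deque()               # tenants that currently have work

    def enqueue(self, tenant_id, job):
        # Invariant: a tenant is in the ring iff its queue is non-empty.
        if not self.queues[tenant_id]:
            self.ring.append(tenant_id)
        self.queues[tenant_id].append(job)

    def dequeue(self):
        if not self.ring:
            return None
        tenant = self.ring.popleft()
        job = self.queues[tenant].popleft()
        if self.queues[tenant]:
            self.ring.append(tenant)  # still has work: go to the back
        return tenant, job
```

With a noisy tenant holding three jobs and two quiet tenants holding one each, dequeues interleave the quiet tenants ahead of the noisy tenant's backlog instead of draining the burst first.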

0:11:00 - Tyler Wells
Yeah, you touched on a concept that leads me to what I wanted to ask about: what do y'all use for tracing? When we start to think about distributed systems, and we think about these hops and these dependencies, specifically between your software and the underlying AWS, tracing becomes insanely valuable for understanding bottlenecks and everything else. How do y'all tackle that at Courier?

0:11:24 - Seth Carney
Yeah. So we do all of our observability through Datadog, and we've got a lot going on in terms of what's made available to us. I know we're not unique here, but the nature of the business that we're in is highly evented, which leads to us having a fairly robust set of information and metrics in terms of what's going on within the Courier application, regardless of where that's occurring. So we generally have all kinds of metrics, we have all kinds of logs, and, in terms of tracing, we rely on OpenTracing, effectively published into Datadog. We have sampling that runs against it. We look at outliers. We're able to understand, look, the authorizer had this much overhead, this is what it contributed to; this percentage of requests took this long in Dynamo, this long in S3, this long connecting to Kinesis, or whatever it may be. And we can do that across our API, across our internal worker infrastructure, whatever it is. It's really interesting because, while everything is mission critical for us, being able to send and deliver messages is just hyper, hyper critical. Even if you think about it in terms of an outage: people want to send notifications so others know an outage has occurred. It's just the critical way you end up communicating, and tolerance for those types of outages is so low. So, especially interacting with the API, we pay a lot of attention to latency. We pay a lot of attention to error rate over time, things of that nature. We have a pretty low tolerance in terms of error occurrence, in terms of escalation to on-call engineering, and things like that.
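As a rough illustration of the pattern Seth describes, head-based sampling plus per-stage spans, here is a toy tracer. This is not the Datadog or OpenTracing API, just a sketch of the mechanics: one sampling decision per request, then timed spans for each pipeline stage (the stage names below are examples):

```python
import time
from contextlib import contextmanager

class MiniTracer:
    def __init__(self, sample_rate, rng):
        self.sample_rate = sample_rate  # fraction of requests to keep
        self.rng = rng                  # injectable RNG, for testability
        self.spans = []                 # (name, duration_ms) of kept spans

    def start_request(self):
        # Head-based sampling: decide once, up front, for the whole trace,
        # so a kept trace has every stage and a dropped trace has none.
        return self.rng() < self.sample_rate

    @contextmanager
    def span(self, name, keep):
        start = time.perf_counter()
        try:
            yield
        finally:
            if keep:
                self.spans.append((name, (time.perf_counter() - start) * 1e3))
```

In a real setup the spans would carry trace and parent IDs and be exported to a backend; here they accumulate in memory so you can inspect stage timings like profile, routing, rendering, and send.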

We also try to work really hard to ship changes in a highly incremental way. Incremental may be the wrong choice of word here, but it's not quite blue-green either, right?

We'll often ship duplicative infrastructure and then start routing portions of traffic over to it. We observe how that change in infrastructure affects the system. Especially in our pipeline, we have a really robust set of experimentation, and with that experimentation we're able to understand, in really nuanced detail, when we ship a code change to our send pipeline, did it positively or negatively impact any real portion of it? Profile load times, the transition from profile to routing, routing to rendering, rendering to send, or any of the nuanced details that exist in between. We can run really detailed experiments to understand the impact of that rollout, and we couple that with how we divert traffic over to it. What we end up with is generally a pretty safe approach to rolling out our code. Gosh, I wish it was 100% effective. It's not, but it does reduce the blast radius as much as we can.
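The traffic-diversion step can be sketched as deterministic hash-based bucketing: hash a stable identifier (a request or customer ID) into a bucket and send a configured percentage to the candidate (duplicate) infrastructure. Hashing rather than calling random() keeps a given caller pinned to one side for the life of the experiment. The function name and ID format are assumptions for illustration:

```python
import hashlib

def routes_to_candidate(stable_id: str, candidate_percent: float) -> bool:
    # Map the id into 10,000 buckets (basis points) for fine-grained splits.
    digest = hashlib.sha256(stable_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 10_000
    # candidate_percent=5.0 -> buckets 0..499 go to the new infrastructure.
    return bucket < candidate_percent * 100
```

Ramping the experiment is then just raising candidate_percent; every caller already routed to the candidate stays there, and new buckets join as the threshold rises.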

0:14:35 - Tyler Wells
Yeah, I mean 100% is impossible to achieve. I think there are quite a few folks who always feel we should, but it's just not necessarily feasible. When you're talking about setting up the kind-of blue-green, but also being able to split traffic, how much of that is taking place in your development environments versus your production environments? I'm assuming you're running tracing and everything all the way up the stack across both environments, maybe with no sampling in dev, so you see how everything behaves, and maybe toning that down once you get to production. What does that look like inside of Courier?

0:15:13 - Seth Carney
We actually do run sampling in dev, primarily because replication is important. Look, is a developer going to be running a multi-region setup? Are they going to be running every single piece of infrastructure that we've got in production, at that scale? Not likely. Their Kinesis streams will be tuned down, all their stuff will be tuned way down. But it's effectively a production replica, and that does include tracing. The primary reason is that you need replication of everything. They need to be able to potentially turn that tracing all the way up and get 100% of what's coming through, so they get the resolution they need.

Then they can, of course, tune it back down. But we run the same sampling rates in dev, staging, and production. That's actually a slight lie: we run slightly higher sampling in staging than in production, because it's lower volume. We do run somewhat lower sampling in production, but that's because we're getting a lot more traffic, and we get a pretty representative set at that rate. We have our staging and production environments hooked into Datadog, but not our development environments; that's because the majority of what we're pumping out already exists within CloudWatch, so the devs just hang out in CloudWatch when they need something specifically there. It works pretty well. But yeah, it's one of those things where you can never have too much information. There are times I look at it and wonder if I've got information overload, in terms of not seeing some stuff I think I should be, but that's never actually been true. In terms of operations, you can never have too much data.
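That per-environment setup, identical sampling mechanics everywhere but rates tuned per environment, with a developer override for full resolution, might be expressed as a small config helper. The specific rates below are invented placeholders, not Courier's numbers:

```python
# Hypothetical per-environment trace sampling rates (fraction of traces kept).
SAMPLE_RATES = {
    "dev": 0.5,         # production-replica mechanics, volume tuned way down
    "staging": 0.5,     # lower traffic, so sample more to stay representative
    "production": 0.2,  # high volume already yields a representative set
}

def effective_sample_rate(env, override=None):
    # An explicit override wins, e.g. a developer cranking tracing to 100%
    # while reproducing a bug; clamp the result to [0, 1].
    if override is not None:
        return min(max(float(override), 0.0), 1.0)
    return SAMPLE_RATES.get(env, 1.0)  # unknown env: fail open to full tracing
```

Keeping the mechanics identical across environments means a developer debugging in dev exercises the same code paths that production tracing does, only with a different rate.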

0:16:45 - Tyler Wells
No, and I would completely agree. The problem I'm running into is the size of the check I have to write to get all of that data, because that data has to go someplace, and those third parties, like a Datadog or a Honeycomb, are not necessarily cheap. If you're using things like auto-instrumentation, or there's some copy-pasta of instrumentation propagating through the code, people turn it on and sometimes don't even look at it. But guess what? It's running up a bill in those systems, and it can get a little pricey.

0:17:18 - Seth Carney
Yeah, look, I won't sit here and say that it's not a balancing act. But the thing I would say, and I think you would probably agree, is that the cost of an outage is worse, and especially the cost of an undetected outage over a period of time is significantly worse, potentially in engineering cost and certainly in brand and reputation, than the check that you're writing on a monthly basis, more often than not. Don't get me wrong, it's still a balancing act for sure, but I always feel good when I've got the data.

0:17:51 - Tyler Wells
No, I completely agree. I think it's one of those questions for everyone from engineering leaders all the way down to junior software engineers: are you going to use what you're sending? Is it going to provide value, or are you just sending garbage? If you're sending stuff that provides value, that is going to detect or prevent an outage, okay, the cost worries me less. But if all of a sudden you start running a bunch of load tests and ramping up the number of events you're generating for no particular reason, events that never get looked at, you're just printing money for Datadog. That's where I'm kind of like, hey, let's think about this. Do we really need this? How important is what we're doing if you're not even looking at it? You're essentially just spending money to spend money and providing no value back.

0:18:47 - Seth Carney
Look, even taking the spend aspect aside, the exercise is a constant gardening exercise, regardless of whether it's for spend or anything else, because it's this constant struggle to ensure the signal stays high enough against the noise. You're shipping code dozens of times a day, maybe more, and all of that code is generating events, traces, logs, metrics, all kinds of things. It's this constant exercise of making sure the stuff you're generating actually is valuable, and some cost control naturally comes with that. But that's what I've found: you're always chasing the horse down.
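The gardening pass can be partly mechanized: cross-reference the metrics your services emit against the ones any dashboard, monitor, or SLO actually queries, and flag the rest as pruning candidates. A minimal sketch with made-up metric names:

```python
def prune_candidates(emitted_metrics, referenced_metrics):
    # Metrics nobody queries are pure cost and noise: candidates for removal.
    # Sorting makes the report deterministic for diffing between audits.
    referenced = set(referenced_metrics)
    keep = sorted(m for m in emitted_metrics if m in referenced)
    drop = sorted(m for m in emitted_metrics if m not in referenced)
    return keep, drop
```

Run periodically (most observability vendors expose APIs for listing emitted metrics and dashboard queries), this turns "are we paying for telemetry nobody reads?" into a reviewable list rather than a gut feeling.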

0:19:37 - Tyler Wells
Always pruning, always trimming it's a it's a gardening, I think is a great analogy there.

You know, if you want to be a good gardener, you've got to cut away the dead stuff. You've got to constantly prune things to have them look pretty and provide value; you've got to cut away the weeds. I like that analogy. How do you think your mindset has changed when it comes to architecting and building for scale over the years? Because obviously, as scale has gone up at Courier, you have learned some tough lessons, as we all have. There are things where you're like, I don't understand why it's gone this way, and you dig in, and it sometimes takes quite a bit of time to dig in and peel back and understand it. How do you think that has influenced you, and what are the tenets you held in the beginning that you've now tossed away?

0:20:26 - Seth Carney
Yeah, it's really interesting. Maybe I'll contrast with where Eloqua was, and then I'll talk about Courier a little. In the marketing automation space, towards the end of my time at Eloqua, everything was just-in-time. I remember we were constantly up against our limits. Some of these numbers are only roughly accurate, and I don't remember the customer names, but it was like, hey, our biggest customer at the time has 10 million contacts, and we've got to be able to lead-score all those contacts and process all their inbound events and all kinds of other stuff, and then, of course, do all the messaging and marketing to them, all the campaigns. And then the sales team comes in and says, I've got a customer that's going to do 17 million contacts. And it's like, oh, how are we going to do that now? Because it took some real creativity and engineering effort to get the number to where it is today; that number is kind of the limit. Then the engineering team goes away, and through some miracle they figure it out, and then, kid you not, a couple months later we've got a customer who wants to do 27 million contacts.

The cheese was constantly getting moved, and there was no time to feel good about the engineering effort you put in, because there was always a new number to tackle. What's sort of interesting about Courier is that, like Eloqua was, Courier is going through an emerging market, but technology has changed so much in terms of that business that there are different expectations now. The expectation of a notification company now is that scale just exists. That's the situation we were in from the start: customers just expect us to scale. They don't expect us to crap out at 50 RPS or 100 RPS. They're like, no, we've got tens of thousands, potentially hundreds of thousands, millions of notifications. We had an early customer in Lattice. Lattice was already a big company by then. They had a lot of notifications they wanted to send, and the expectation was that Courier could handle it.

Customers think in terms of infinite RPS, and a lot of what I've really learned is that there are inflection points and conversations along the way that you've really got to have. More often than not, when I look at it, I've found that the situations where we've scaled and grown the best are where we've had really great partnerships. We, of course, have a minimum scale off the top that we have to worry about, but it's about making sure we're having that conversation with customers who are coming in where scale is going to be an issue, and working with them to understand where the pain points and pressure points are. We've had customers come in who were using our inbox component and say, we're going to have 500 connections a second; can you handle that? And it's like, let's go do some engineering work, because admittedly that's a very high number of connections. I will tell you, that number was bumping up against the soft quotas of our aforementioned AWS provider, and we pushed well past those. But it's like, hey, let's partner on this, let's work through that issue. The same is true, I think, whenever you're going through that. That being said, there are definitely some real and true scale problems and challenges we've had along the way, mistakes I'll never make again. Dynamo hot keys are certainly one of them. But a lot of it is thinking about how long you expect the infrastructure to last.

You're not building anything for an eternity. If you think you are, time will prove you wrong, just factually. So you should have some period of time that you expect the infrastructure to last, and you should have some basic estimates and ballparks in place in terms of how you expect to scale and what you expect to support, so that you can measure. This is the real big takeaway: you should measure, you should understand how the system's performance lines up against your expectations, and things of that nature. There are always, of course, things that are going to come in, potentially through sales, through growth, through success, that might change those expectations.

That's fortuitous; that's a problem you want to have. Pat has always said, at least to me, that when you solve a problem for someone, and you do it in a good way, they're going to look to you to solve more problems. It's a similar situation here: when we're thinking about scale the right way, when we solve those scale issues for our customers in the right fashion, they're going to look to us to solve more problems, which is going to force us to think higher in terms of what we're actually going to be supporting: throughput, volume, RPS, things of that nature. So having some idea of how that's going to scale, and then measuring against it, is super important.
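Seth's takeaway, set a design lifetime and capacity estimate and then continuously measure against it, reduces to a simple headroom check. The 70% threshold below is an illustrative choice, not a number from the conversation:

```python
def capacity_headroom(observed_peak_rps, designed_capacity_rps, alert_at=0.7):
    # Compare observed peak load against what the infrastructure was designed
    # for; fire well before the ceiling, so the scaling conversation (or the
    # rebuild) happens ahead of the limit instead of after an incident.
    utilization = observed_peak_rps / designed_capacity_rps
    return {
        "utilization": round(utilization, 3),
        "needs_attention": utilization >= alert_at,
    }
```

Wired to the designed-for numbers from the original capacity estimate, this is the "measure against your expectations" loop: the alert fires when growth (or a new sales deal) eats into the margin you designed for.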

0:25:35 - Tyler Wells
Yeah, I feel like you solve that customer problem, you deliver that use case for them, which for them may be a huge win, and now you've built this trust. Now they have this trust in Courier, they have this trust in you and the engineering team, and they're like, okay, what can we do next? How can we take it further? How do we expand what we've built on top of Courier to the next level, so we can gain more traction, get more revenue, whatever it is we're trying to do?

But now you've got that trust, and they're going to come back and say, okay, Seth, can you do this? And if you're like any good CTO, if you haven't done it, you're going to say maybe. You're going to look at them and say, hey, look, we've got some work to do. We've run it up to this RPS, we've run it up to this number of concurrent connections, but we understand you want to go further. We've delivered on this. Give me a week, give me two weeks, give me some period of time; we'll get back to you, but we're going to help solve this for you.

0:26:31 - Seth Carney
Yeah, and look, in a partnership, people really appreciate, I'll call it vulnerability, but I think they appreciate it to a certain degree because it's truthfulness and honesty. I remember we were in a sales deal one time, early at Courier, and we didn't actually close the deal, actually for good reason: they were way too big for us at the time.

And they asked us this crazy question: what if we need to send you 15,000 RPS, 15,000 messages a second? It was actually a weed-out question, because if we said we could handle it, they would know we were lying. There was no world in which we could support that, and they knew it. They knew about our company, they knew our size, they knew about our infrastructure. They knew, if we said yes, that we would just be lying through our teeth. Thankfully we didn't say yes, and we had a really great conversation with them. They were open about it. They said, hey, it would have been really bad if you had said yes to that question.

0:27:39 - Tyler Wells
And we had a really great conversation afterwards and yeah, go ahead.

Yeah, I was going to say, if you're not transparent about core functionality such as scale, you're going to erode that integrity very, very quickly. And clearly they had seen that. That customer probably talked to other providers, and the other providers were like, oh, yeah, 15,000, bring it, that's fine. Then they go and do it, and they probably got burned; they probably ended up in a really bad spot. You can imagine that other provider's integrity just plummeted to earth. Then they're over there talking to you, and the first thing you say is, no, we can't do that, that's not something we've designed for. But hey, if you're telling us you're going to do that, we're going to need some time, and maybe we can figure this out. There's probably going to be a lot of engineering work that we'd have to do, but I tell you what, for a big enough committed contract, there's a lot of things that we'll do.

0:28:35 - Seth Carney
Or, you know, maybe there's a yes, and it'd have to be a really big contract, probably. Or maybe let's explore some smaller use cases, some other places that make sense to start, while at the same time we work on this scale problem and grow to that position slightly more organically, and we're still able to help you out on the real value prop that Courier provides.

0:29:03 - Tyler Wells
Yeah, I love when you take that partnership approach with somebody that's asking for something, and say, hey, I can't do it now, or it's coming later, but look, we can build some smaller use cases with you. We can get the mechanisms and the workflows in place, and that's going to buy us time to get to where you need to be, or to come up with a better solution that solves your use case. There's no reason we can't do these other things first. Kick the tires on us. Let's build that trust. We're giving you the transparency: we can't do that today. But guess what, you give us long enough, we'll probably figure something out.

0:29:41 - Seth Carney
Yeah, and the partnership is what's critical there, right? Because I think you both end up learning a lot along the way. Us, of course, about what we're building and how we're getting better, and a lot of times the customer learns a lot too, because it forces them to really crack open what process they're looking for, what really works for them, and to ask critical questions around what they need and don't need. So oftentimes it leads to a lot of improvement on their side as well.

0:30:07 - Tyler Wells
Yeah, sometimes it even leads to them coming back and saying, gosh, from us working together we realize we actually don't need that. We can completely solve what we're trying to do. And now we've solved it without some sort of crazy number that didn't actually make sense for us.

0:30:24 - Seth Carney
Yeah, it was actually really interesting. I was recently talking to someone, and they were like, we really want to use Courier, but we're going to take this big project on upfront, because we have this huge rat's nest of events and everything else in our existing notifications infrastructure. And I was like, yeah, but what if you didn't have to tease it all apart? What if you could just ship those events over to Courier and start to peel them off one at a time? Now you've kind of opened yourself up to new use cases.

You can see all the value in the new use cases, you've given yourself a vector to get off the old ones, but you haven't created this massive investment you've got to make upfront. And I don't mean to harp on the word partnership, but we wouldn't have been able to have that discussion if we didn't have that open dialogue where it's like, well, this is the problem I'm facing. Well, hey, let's tackle some different solutions and think about how we might be able to make this a little easier for you.

0:31:21 - Tyler Wells
And it's much easier to do that when you've established that trust relationship with the customer, right? It makes it so much simpler. Let's switch gears a little bit. So you're sending all those billions of messages, you're dealing with a ton of events that your customers are sending to you, generating in your infrastructure, et cetera. What are you doing with that data, and how does that data help you build that foundational trust with a customer when they come to the platform and want to utilize you for a whole bunch of use cases they're building on top of you?

0:31:54 - Seth Carney
Yeah, wow, that's an interesting question. We do have a lot of data, and look, we use it in a lot of different ways. Firstly, we try to be a very data-driven organization in terms of how we build, how we decide what we're going to build, how we operate, what our pricing looks like, what types of features we may or may not make available. We try to be as data-driven as we can, fully recognizing there are lots of improvements we can make there. But it's kind of twofold, right? There are things Courier uses the data for that help us enrich our business, make better decisions, do all kinds of things we think will improve what we're providing. But then there's this other set of data which is really critical to our customers. Courier is a piece of infrastructure.

It's a piece of infrastructure that you plug into your platform, your application, your architecture, and it provides a really robust set of capabilities around notifications and engaging with your customers. But when you're plugging pieces of infrastructure in, you need to understand their health and status. So a big part of what Courier provides, in the metrics and data we collect and then publish out, is helping you understand the true health of how Courier fits into your ecosystem. That might be because you plugged into our Datadog or New Relic integration, maybe you hooked up to our webhooks to get information, you might be looking at just our vaults. We use all that information to provide really rich observability for our customers, and hopefully provide it where they're trying to operationalize the rest of their systems. I mention Datadog a bunch here, but there are lots of providers out in the space: there's the ELK stack, there's Prometheus, there's New Relic, you mentioned Honeycomb, and hey, there's just vanilla CloudWatch that folks are built on top of. So there are lots of different ways folks are operationalizing their systems, and I think it's really critical that Courier uses that data, and our connectivity to those systems, to push it to where they are and make it so they can build operational excellence in their organization. And then there's probably actually a third tier, which is: how do we enable our customers to make more intelligent decisions?

A lot of customers want to see the data that's in Courier in their data warehouses. Why? Because the things they're doing are not localized to just Courier. They may be running marketing campaigns that support the transactional notifications going out, they may be updating their docs, they may be making pricing changes, larger business changes, new investments that could potentially affect a given experiment they're running, of which transactional notifications or the growth engineering related to them might only be a really small part, but it does contribute to the outcomes and the understanding of that experience.

So we make sure that data is accessible to organizations in their data warehouse, either through our custom data warehouse connection, or you can always hook up Segment and we'll just pump that data over as source data which you can push into your warehouse. But then there's our in-product analytics, which are powered by Propel. You're going into the application. Not everyone has a really robust operational story, so people do legitimately log into Courier, people do legitimately need to use the interface to make real decisions. All of our in-app analytics, like I said, are powered by the Propel API and products, and we think about how to integrate them in there, so you can understand your engagement rates by template, you can see things of that nature. So data plays such a huge role. It is the critical thing.
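As a concrete picture of the kind of in-app metric Seth mentions, here is a minimal engagement-rate-by-template aggregation over a hypothetical event stream. The event shape and field names below are illustrative assumptions, not Courier's actual schema:

```python
from collections import defaultdict

def engagement_by_template(events: list[dict]) -> dict[str, float]:
    """Click-through rate per template: clicks / deliveries.

    The event schema here (a "template" name plus a "type" of
    "delivered" or "clicked") is hypothetical, for illustration only.
    """
    delivered: dict[str, int] = defaultdict(int)
    clicked: dict[str, int] = defaultdict(int)
    for event in events:
        template = event["template"]
        if event["type"] == "delivered":
            delivered[template] += 1
        elif event["type"] == "clicked":
            clicked[template] += 1
    return {t: clicked[t] / n for t, n in delivered.items() if n}

# A tiny hypothetical event stream: "welcome" gets 1 click on 2
# deliveries, "receipt" gets no clicks on 1 delivery.
events = [
    {"template": "welcome", "type": "delivered"},
    {"template": "welcome", "type": "delivered"},
    {"template": "welcome", "type": "clicked"},
    {"template": "receipt", "type": "delivered"},
]
```

In practice a product would compute this server-side over the warehouse or an analytics API, but the aggregation itself is this simple: two counters keyed by template, then a ratio.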

0:35:52 - Tyler Wells
And how does build versus buy play into that? As you're thinking about, hey, I've got all this data, I've got these pipelines, I've got these systems emitting data. I'm collecting this data, probably in various streams, warehouses, S3 buckets, what have you. If you're like any other enterprise, it's kind of all over the place until you start to consolidate, because you want to build with it. What are your thoughts when it comes to build versus buy? How do you weigh those decisions?

0:36:21 - Seth Carney
Yeah, I mean, the core part of build versus buy for us, in terms of the current product and platform: we have to run a lean operation. We're a startup, we have limited resources. I know we've talked about Twilio a couple of times. I would love to have the resources Twilio had at some point; it's like, oh, the things I feel like we could build. And the funny thing is, I'm sure Twilio felt the same way even when they had those resources. They were probably looking at it thinking, gosh, I wish I could get more.

But in our case it's thinking about, first, our core value proposition. The things that are deeply part of our core value proposition we're always going to build. We want to make sure our opinions stay tight, make sure we're able to really influence that. But I mentioned being lean. We do have a data practice here at Courier. We have a head of data. We have a really robust data pipeline that's backed by Snowflake and dbt and Fivetran, and that also comes with cost, right? Part of being CTO, and part of being responsible for our eng org, is that you have to be cost conscious. For us to run Snowflake warehouses that are up and running all the time, at the size they need to be to support the volume of queries we need, all of a sudden we're getting into a situation where we'd need to build legit infrastructure that fronts that, so we can reduce our costs, reduce our response times and latency, and guarantee our uptimes.

This was an interesting conversation I was having with our head of data, Raymond, the other day, actually, about Propel: one of the things we should be thinking about is the SLAs for our analytics, and, selfishly, offloading those to Propel and relying on your team for that, because we do have a really lean data operation. The things he's focused on are: do we have a robust architecture for the uptime and latency and all the other SLIs, SLOs, and SLAs we need to provide? He's not going to be building the critical pieces of infrastructure we need to power our business, our customer connections, and everything else. And then, kind of building on that lean nature, we were talking about this just the other day: Snowflake isn't the only way we ingest data.

You mentioned S3 and a bunch of other analytics and ingestion sources, and having disparate ingestion mechanisms, like the API destination and other sources like S3 from Propel, helps us do that without having to necessarily engage our data team. Sometimes we want to, that's part of it. But having the flexibility to build things that don't require a slice of our organization to go implement them. I can peel an engineer off, we can pump the data through EventBridge, through a stream into Propel, and go about our day. It really enables us as an organization.

0:39:25 - Tyler Wells
I was going to say, you peeled yourself off that one day. I think you messaged me on a Thursday or Friday afternoon and said, hey, those UI components that aren't ready yet, can I build on that? I was like, yeah, sure, why not? And I think you turned around and had it in probably a couple of hours. A couple of hours, and you were rendering the data you wanted.

0:39:49 - Seth Carney
Yeah, it was great. It was a couple of hours, Thursday or Friday afternoon, and we had it turned out: a new set of metrics and graphs we needed for our customers.

We had rolled out some updated billing stuff and we needed to get some usage data. It was super easy. Look, those in-app product analytics are super important to help drive engagement, but also real decision-making as part of the experience of the product. I've built this before. I've had teams responsible for building it, and it's expensive.

0:40:27 - Tyler Wells
I think you actually said to me very early, as you guys were starting to build: Tyler, I built this before, I've carried the pager for this before, and I don't want to fucking do it again. Nope, this is your job. This is what you all need to do for me at Propel.

0:40:43 - Seth Carney
Yeah, yeah, I mean, look, it's just big sets of data, large sets of hardware we have to run, a certain set of expertise, and there's an opportunity cost to having all that. If I've got folks building and working on that, it means we're not building the best possible notification infrastructure we could build. We're not building the features and capabilities people need to power the right engagement with their customers. It means I'm building a data pipeline so I can show them engagement data. And showing that data is really important, but I want to get to it fast and I want to get to it easy. I want to spend the real hard time, the engineering brains, on the hard problems that are Courier related. That's what I want to spend time on. The hard problems that aren't, I want to buy.

0:41:33 - Tyler Wells
Yeah, I mean, I think we found, especially during our time at Twilio as we were starting to scale customers and use cases, that empowering our customers with insights, with data on how the platform was performing specifically for them, consistently reinforced and built up the trust they could place in the platform to expand their use cases. If they couldn't see that data, if they couldn't get their hands on it, how would they really know? Because obviously things are going to go wrong. You're running on cloud infrastructure; there are a whole ton of moving components. For instance, back at Twilio we had call centers. Well, guess what?

Call centers consist of a bunch of human beings working on shitty laptops over bad Wi-Fi or bad connectivity, doing other things they probably shouldn't be doing while trying to have calls. When, all of a sudden, you're in the middle of a call and you decide to open up some other app and it takes up 100 percent of the CPU, well, guess what happens to the quality of that call? So when you have that data and you give it to the customer, and the customer comes in and says, hey, Twilio, you failed me again, we can say, well, we didn't actually fail you. You can see here how the CPU spun up, or how this happened over here. We can do our best to help with that. But without that visibility, you can't even have that conversation.

0:42:53 - Seth Carney
Absolutely, absolutely. I know it's a slightly different space; notifications aren't what Twilio is doing, and a call center is slightly different. But there's a real-world monetary cost associated with that kind of occurrence, right? Very much so in Courier's case.

Let's say you're a marketplace. Let's say you, Tyler, are selling something, and I'm interested in buying it. I said, hey, I want to buy this, and you just never got the notification. That legit cost you money. Huge money, not hypothetical money. Not like, oh, in some world Tyler would have got a few bucks from that. No, literally, unless someone else comes along behind me and says, oh, I'm going to buy that at the same price or whatever. And now, potentially, your costs have gone up, because you've got to invest more time in selling that thing. You've got to change your listing or take new pictures or whatever it is. Maybe you decide to lower your price, maybe it doesn't get sold, maybe I was the only person interested in buying it.

So there are real, tangible, real-world effects. A password reset, right? Someone can't log into your application. The push notification for a discounted hotel room upgrade didn't go through, for someone who would have purchased the upgrade. These are real-world dollars. The same is true on the analytics side. If I'm not giving you the right information that helps you say, oh, I made a change to this notification and my delivery rate dropped by 10% because the providers think it's spam now, or, I moved my call to action from the top to the bottom and now my click-through rate changed from 8% to 4%. That stuff really matters out in the world. For people doing business, it hurts your customers.

0:44:46 - Tyler Wells
Yeah, it does. It matters big time. I mean, that could be millions of dollars. Imagine you can't buy on one of the big sale days, on Prime Day at Amazon. Imagine that goes down. How much would that cost? That's insanity. I know one of the discussions we had was probably in the years between, like, 2014 and 2015. At Twilio there was a big mindset shift toward reliability, toward resiliency, toward the notion of trust for customers. And the story Jeff would always tell us was: imagine that call doesn't go through, and the call that doesn't go through is to a suicide hotline powered by Twilio. What is the impact going to be on the person making that call, and what are we going to do to make sure that never happens? Because we did have providers, we had customers, setting up those types of call centers that had not just dollar implications but life implications.

And it really resets the mind, right? It really brings it home, like, oh shit, this is real. This is real beyond just, hey, I'm doing a bunch of cool engineering and I want to make this thing not fail. This could be somebody's life.

0:46:05 - Seth Carney
It's such a sobering story. Geez. Yeah, it's funny, because before you mentioned that one, I was going to tell a story of my own, not similar to that, but one where the cost was not necessarily monetary yet was fairly impactful. But I've got to tell you, what you just said right there is about the pinnacle: when people's lives are actually on the line for a notification or a phone call. Thankfully, I don't think Courier is in that position at the moment. I don't think anyone's sending notifications like that through us, and if they are... yeah, no kidding, right?

But we do have folks like a customer who's in operations as a service. They send notifications when systems go down. You pay them to monitor your systems; your system goes down, they tell you. You want to get that notification. It's got to go through.

0:47:02 - Tyler Wells
I mean, that's mission-critical impact right there. Mission-critical impact on the infrastructure and platform that Courier is building and providing. They have to be able to trust that when they send that notification to their customers, it's going to reach the destination, and reach it on time, when it's supposed to be there, so they can perform their duties and bring those systems back online. I always felt that telling those types of customer stories, and especially bringing customers into Twilio to tell them to us, really set the tone and brought it home. It makes it real.

You all brought customers in for all-hands to do customer presentations, right? All the time. They would come in and sit down, and that's where we would hear some of those stories. They would tell us how important these systems were to what they were building and what the impact was on them personally. They would personalize those stories, because they're the founder, or the VP of engineering, or the head of customer success, and they would internalize it, and you could feel the pain they would at times share with you when something didn't work. Now, thank goodness, they were not all painful stories. There were also stories of joy, of amazing things people had built and the incredible success they'd had. We got to hear those as well. But I think the other kind had a lot of lasting impact, especially as we were thinking about reliability and resiliency.

0:48:32 - Seth Carney
Yeah, hearing from customers is always great. And we're not perfect. We've had our growing pains and incidents that have happened, and it's painful being in front of customers when they talk to you about the impact an outage or a bug or an incident has had on their business. You really feel for them, because you realize how important the service you're providing is to them, and the impact you can have on their brand and how people view them as a company. Just because they've made the choice to use your service, you now have this massive potential to impact how they're viewed. A hundred percent.

0:49:16 - Tyler Wells
So I've got to ask: have you ever had to troubleshoot, or been in the thick of troubleshooting, a major incident and had your children come walking into the room? Because you held your composure through that just now, with them, I don't know, coming home from school or whatever. You held your composure like a pro and kept right on rolling. I've been there, on the other side of it, and you're just like, they're there, it's fine, that's what working from home is like, and I'm going to keep going. You will not break my train of thought.

0:49:45 - Seth Carney
You will not. So today is the first day of school here in San Francisco, and yeah, they were just getting back home from school, and my daughter's got soccer today, so a friend's over. So yeah, there was some distraction, but COVID trained me well at this point. A lot of virtual calls and things like that where you're trying to manage it, so it's just straight face.

0:50:16 - Tyler Wells
Yeah, just straight face. You maintained that composure and kept right on rolling with it.

0:50:21 - Seth Carney
So no, I love seeing it.

0:50:24 - Tyler Wells
Mute is definitely your friend. No, that was pretty cool. So what's next for Courier? What have you got on the horizon? What's coming up from an engineering perspective?

0:50:36 - Seth Carney
Oh gosh. Well, the big thing for us right now is we're expanding our global footprint. We're going international. We're bringing up a European instance of Courier and we're going to start standing folks up on that, which is both challenging and exciting. Lots of fun stuff to tackle there.

You and I talked about this a few weeks ago: we're also going to be looking to push some higher SLAs in our infrastructure, which will bring some really fun and interesting engineering challenges. Both the residency project and the resiliency stuff are the ones I bring up first because, as an engineer, I look at them and think, man, that's what I want to work on. But we've also got some really interesting product updates coming. We've got a whole revamp we're looking at in terms of how we think about integrations. If you use our app at all, you've been seeing some slow and steady UI progress we've been making. We've got a dark mode coming to the app really soon, which is exciting. There are some really interesting new API updates coming around managing assets, templates, things of that nature. So a bunch of really fun stuff, and some really big plans for 2024; we're setting the stage with some underpinnings right now.

0:51:53 - Tyler Wells
But going global, that's no small endeavor. I remember we were at the whiteboard a little bit when Nico, my co-founder, and I were out there in San Francisco a few weeks ago, and we were talking through a few things. It's always a scary thing, because you're like, hey, yes, I've taken this infrastructure-as-code approach, I've got these nice pipelines for deployment, I've done it in one region. And now you're thinking to yourself, okay, I need to replicate that, and I think I've done the work for that, I'm going to start testing all of it. And you're going to start deploying that into, I'm guessing, eu-west-1?

0:52:32 - Seth Carney
It'll be, yeah, eu-west-1.

0:52:35 - Tyler Wells
Yeah, I think that's the Irish one, if I remember correctly.

0:52:37 - Seth Carney
Yeah, yeah, that's Ireland.

0:52:38 - Tyler Wells
Yeah, eu-west-1.

0:52:40 - Seth Carney
That's right, yeah, and so we'll be snapping in there. It's interesting. I'd almost rather the team come and say, hey, we need our SLAs way higher, I need you to be five or six nines. I'd almost rather that, in terms of the engineering challenge, than the residency one, because I think that one has a really narrow boundary. There's a solution set that exists, and you're going to fit yourself fairly narrowly into one of the bits in that solution set. We talked a little bit about that. With the exception of our good old friend Elasticsearch, there are lots of really sane, price-conscious solutions that exist there.
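For context on what "five or six nines" means operationally: each extra nine cuts the allowed downtime budget by a factor of ten. This quick back-of-the-envelope calculation is a standalone illustration, not something discussed in the episode:

```python
# Allowed downtime per year implied by an N-nines availability target.
# Five nines is roughly 5.26 minutes of downtime a year; six nines is
# about 32 seconds, which is why each extra nine is a step change in
# engineering difficulty.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes_per_year(nines: int) -> float:
    """Downtime budget (minutes/year) for e.g. nines=5 -> 99.999%."""
    unavailability = 10 ** (-nines)
    return unavailability * MINUTES_PER_YEAR
```

Three nines leaves nearly nine hours a year to work with; five nines leaves barely enough time for a single rolling deploy to go wrong.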

The data residency one is really interesting because it's slightly different for every company. And that's putting aside that, yes, of course, you regionalize the protected, regionalized data. Putting that aside, there's always a lot of nuance to how people use your product. For Courier, it's entirely possible that you have a US company that does business in Europe. They have regionalized European data, they store it in European systems, they'll store it in the European Courier instance, but that European person might potentially generate events from the company's US systems. Right? And so there's this real-world question.

0:54:05 - Tyler Wells
Where do the events get stored?

0:54:06 - Seth Carney
Yeah, where do the events get stored? How do they get processed? And the answer is, the law defines how that happens, to some degree. But the system interactions are slightly unique in that instance, because their US system can't have any access to or knowledge of their European system. So Courier definitely has to act as a broker that says, oh no, you need to go over here, and enforce some of those boundaries.
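The brokering Seth describes, routing every request to the tenant's home-region instance no matter where the caller runs, can be sketched roughly like this. The region names, endpoints, and tenant model below are illustrative assumptions, not Courier's actual API:

```python
# Toy sketch of a data-residency broker: events always land in the
# tenant's home-region instance, and cross-region reads are refused.
# All names here are hypothetical, for illustration only.

REGIONAL_ENDPOINTS = {
    "us": "https://api.us.example-notifications.com",
    "eu": "https://api.eu.example-notifications.com",
}

TENANT_HOME_REGION = {
    "acme-us": "us",
    "acme-eu": "eu",  # EU subsidiary: its data must stay in the EU instance
}

def route_event(tenant_id: str) -> str:
    """Endpoint an event must be sent to, regardless of which region
    the calling system runs in. A US system emitting events for an EU
    tenant still gets brokered to the EU instance."""
    return REGIONAL_ENDPOINTS[TENANT_HOME_REGION[tenant_id]]

def can_read(tenant_id: str, requesting_region: str) -> bool:
    """Enforce the boundary: only the home region may read the data."""
    return TENANT_HOME_REGION[tenant_id] == requesting_region
```

The interesting engineering is everything around this trivial core: the broker itself must not retain regulated payloads in the wrong region, and the tenant-to-region mapping has to be replicated safely to both sides.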

0:54:37 - Tyler Wells
So yeah, it's always interesting challenges there.

Yeah, I was going to say it's always interesting, because you take that example: the EU employee is visiting a customer in the US, but processing things or generating events that can only be resident back in the EU. So now that traffic has to make the hop across the pond; you have to make sure they're not connected to the API edge in the US and that the data is going where it needs to be, so you're not running afoul of the law and seeing large fines levied against you or anyone else.

0:55:13 - Seth Carney
And, of course, with that you'll contend with: oh, I never thought, four years ago, that this data we stored here might also be stored alongside this other data which has to be regionalized. So now you get to make sure you've got all the right boundaries in place. It's fun, it's a really interesting engineering challenge, but it is nonetheless a very large engineering challenge.

0:55:37 - Tyler Wells
Oh, I mean, if it wasn't hard, it probably wouldn't be much fun, right? That's what I always tell myself, at least.

0:55:43 - Seth Carney
Sometimes I like it. Yeah, whatever you've got to tell yourself to sleep at night.

0:55:49 - Tyler Wells
Every once in a while I want something to be easy, and I get surprised when it is. But more often than not, I'm like, oh, this shouldn't be too bad. And then you get into it and you're like, oh shit, that's right. Nope, not this time.

0:56:00 - Seth Carney
I'll tell you, except in the case of CDK, where it was like, nope, this was exactly what I thought it was and what I wanted it to be. I'm so happy that I'm living over in this place now, and I never want to go back to that dark other place again.

0:56:14 - Tyler Wells
No, no, CDK is that very happy place, very happy place. And what's your next one? You've talked about the transition from provisioned-concurrency Lambdas to our friendly Fargate containers.

0:56:26 - Seth Carney
Yeah. I mean, look, we've still got some work to do to get over onto CDK. But we actually have a really cool configuration going: we've coupled Nx with CDK, and we use Nx affected in conjunction with that. What we get are truly incremental microservice deployments, which is super interesting. It's really done wonders for some of the build times.

0:56:53 - Tyler Wells
I am behind the times on this. I need to get to that. After you told me about it in San Francisco, we briefly started to look at it, but it sounds magical and I want to see it. Nx plus CDK.

0:57:04 - Seth Carney
So, it's amazing. I'll just say this: we didn't put it in initially, because basically we didn't have time. I had time-boxed the CDK effort because we just needed to get some stuff over, and I was like, it's fine, we'll just eat the build minutes. And it wasn't a big deal. But then I think I was bored on a Saturday or a Sunday, and maybe bored is the wrong word, I was just looking for an excuse to do something in code. Actually, you know what it was: I was rolling out a new service and I was annoyed. I was like, I'm tired of watching all this deploy, I'm going to go put affected in. I plugged it in and our build times dropped 80%. Now, granted, if you're changing a large swath of code, of course you're going to deploy more.

But the interesting thing is, more often than not you aren't. If you have good practices around small, focused commits and things like that, what you end up with is, oh, I changed code that just affected this service. Well, now only that service deploys and everything else stays consistent. I don't want to say the promised land, but it's one of the happiest infrastructure and operational places I've been in quite some time.
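The mechanics behind that 80% drop are worth sketching: `nx affected` diffs against a base commit, then walks the project graph to find everything downstream of the changed projects. Here is a minimal, dependency-free toy of that traversal; the project names and graph are made up, not Courier's repo:

```python
from collections import deque

# Reverse dependency graph: project -> projects that depend on it.
# A change to "shared-lib" must redeploy everything that consumes it;
# a change to "billing-svc" touches only itself.
DEPENDENTS = {
    "shared-lib": ["notifications-svc", "billing-svc"],
    "notifications-svc": [],
    "billing-svc": [],
    "web-ui": [],
}

def affected(changed: set[str]) -> set[str]:
    """Breadth-first walk outward from the changed projects through
    reverse dependency edges; everything reached must be rebuilt."""
    result = set(changed)
    queue = deque(changed)
    while queue:
        for dependent in DEPENDENTS.get(queue.popleft(), []):
            if dependent not in result:
                result.add(dependent)
                queue.append(dependent)
    return result
```

This is exactly why small, focused commits pay off: a leaf-project change redeploys one service, while a change to a widely shared library still fans out to every consumer.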

0:58:27 - Tyler Wells
It's so interesting, it's so cool. Annoyance and pain are such great catalysts for change. Yes, yeah. You're like, oh, it's good enough, it's good enough. And finally, one day, you're like, why is this taking so long? I want to do other things, I need to go someplace, or whatever it is.

0:58:45 - Seth Carney
Build times. Build times are like the canonical example. Your build times are two minutes, and then all of a sudden they're 45. And the real answer is they crept up from two minutes to 45; you boiled the frog, right? And then you go put all this effort in, right, because there's pain. You're like, why is this taking so long? I don't have the time for this. And then you make it better, and then it degrades again, right? There's that constant gardening you keep coming back to. So this is one of those things. The reason I say I'm in a happy place is because I don't feel like I've got to go do that constant gardening now.

0:59:24 - Tyler Wells
That is a very nice place to be. We're going through a gardening effort right now with a number of things, and I'm hoping one of my people listening to this is like, yeah, build times really do creep up like that. We've got a repository called the mono repo which contains a whole bunch of stuff that gets deployed and built, and it is kind of painfully slow, not going to lie.

0:59:47 - Seth Carney
Well, let me tell you this. I can give you a couple-clicks-away fix to make that faster, and I'm trolling a little, which is: you can just go pay for a bigger runner.
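
For context, on GitHub Actions the "bigger runner" fix really is close to a one-line change: point `runs-on` at a larger runner. The label below is illustrative only; larger-runner names are configured per organization and depend on your plan.

```yaml
# Hypothetical job switching from the default hosted runner to a larger one.
jobs:
  build:
    # Example label for an 8-core larger runner; the actual name is whatever
    # your org has configured in its runner settings.
    runs-on: ubuntu-latest-8-cores
```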

1:00:02 - Tyler Wells
Well of course.

1:00:05 - Seth Carney
So no, but affected, nx affected, is the way to go. In all seriousness, I'm telling you: happy place. It's been super nice, even in development and everything.

1:00:17 - Tyler Wells
I mean, I think we've Nx'd our way out of this enough. It's time to do some engineering work and take a page out of your book. So, well, Seth, I know that you and I could sit here and continue to talk for probably another good hour, but you've got some children home from their first day of school, and I always want to hear about the first day of school and how the teachers are. My daughters' was yesterday, and I got a kick out of it when they came home so excited after their first day at their brand-new school. That was a lot of fun.

1:00:43 - Seth Carney
Oh yeah, that is fun, like everything's new and shiny and you've got all the good equipment, so that is fun.

1:00:49 - Tyler Wells
Well, in the morning it was new and shiny and very, very scary. So when they finally came home from school ecstatic about how great a first day they'd had, that took a lot of angst and pressure off mom and dad. It was good to see.

1:01:07 - Seth Carney
Yeah, it's always tough when you're waiting for the outcome of that stuff. So yeah, it'll be fun to catch up with them, see how things went, make sure they didn't drive their teachers nuts, that kind of stuff.

1:01:18 - Tyler Wells
Absolutely. Well, Seth, thank you so much for joining me and, of course, thank you for being a customer of Propel. I always appreciate these conversations and look forward to them, even when we're not recording podcasts, because you and I get to kick the tires on ideas and lots of fun stuff, which I think is really cool. So I appreciate the relationship and the partnership that we have.

1:01:40 - Seth Carney
Yeah, it's great being here. I really appreciate you having me.
