Episode 9
Data Chaos Bits: AMA on Propel's Cloud Odyssey: Exploring Infrastructure Choices, Change Management, and Cost-Efficiency Strategies
🎙️ In this podcast episode, we dive deep into Propel's journey of choosing the right cloud infrastructure, change management processes, and scaling strategies without breaking the bank. 💰
Some of the questions we answer in the episode include: 1️⃣ What cloud infrastructure did Propel choose to use and why? 🌐
2️⃣ What CI/CD solution does Propel use, and why was it chosen? 🔧
3️⃣ How does Propel handle change management and testing processes? 🔄
4️⃣ What are the biggest considerations for cost efficiencies and the best decisions to scale without rapidly exhausting funds? 💸
5️⃣ How does Propel handle third-party software and its own software installation? 🛠️
6️⃣ Can Propel scale to allow direct customer connectivity to data for the purpose of customers using their own BI tools? 📊
7️⃣ What are Propel's thoughts on using Azure and .NET stack? ☁️
(0:00:01) - Propel's Cloud Infrastructure and CI/CD Choices
Propel chose AWS for its event-driven architecture, using CDK and a CI/CD solution with GitHub, CodeBuild, and CodeArtifact.
(0:08:06) - Change Management and Testing Process
We consider bug fixes, feature additions, automated tests, validation, rollback strategy, and the GitHub repo for QA and testing.
(0:15:56) - Cloud Infrastructure and Cost Efficiency Strategies
Installing third-party software and our own software using Helm charts and operators; cost efficiencies, serverless, the AWS bill, DynamoDB, and scaling.
(0:29:41) - API, GCP, AI, and Azure
Propel enables customer data connectivity, GCP testing, AI research/recommendations, the Azure/.NET stack, analytics APIs, and measuring AI model quality.
0:00:01 - Tyler Wells
All right, welcome to the Data Chaos podcast. This is a special private AMA for one of our customers, and I have my esteemed co-founder, Mark Roberts, here joining me from Berlin. Mark, welcome, and you just got paged.
0:00:17 - Mark Roberts
Thanks, dude, I did just get paged. Okay.
0:00:24 - Tyler Wells
All right, and we're back.
0:00:26 - Mark Roberts
I do have my other notifications turned off, but PagerDuty.
0:00:30 - Tyler Wells
I do not have mine turned off. Not a problem, gotta keep that one on. So at least one of us has got to be there.
So all right, let's get started with the questions. I'm gonna open with the first one, so I'll read them and then we'll start to answer. Question number one: what cloud infrastructure did Propel choose to use, and why? So there is an interesting backstory here. Originally, we had started with GCP, and really the only reason we wanted to use GCP is both of us had been building, or actually all of us, I should say, had been building on AWS throughout our entire careers or time at Twilio, and we really just wanted to try something different.
Obviously, we'd heard good things about GCP, gave it a try, but I kind of found every time I was in there, it just felt like things were sort of half-baked. Something would be part of the way there, you would talk to a product manager, and they're like, oh yeah, we're almost there with that, it'll probably be a few months, but we're gonna get to that and it should be where it's supposed to be. And so there was just a bunch of things like that that I thought were, I don't know, just kind of clunky. And at the time you were trying to configure that with Pulumi.
0:01:43 - Mark Roberts
Is that correct? Yeah, I was using Pulumi.
0:01:44 - Tyler Wells
Yeah, so I wasn't using Terraform or anything else like that. I was trying something different, because Pulumi was multi-cloud, as was Terraform. So I was just trying some different stuff, and it just kind of felt off about it. And so I started, you know, searching around and ended up stumbling on this set of videos called App 2025 from an AWS developer evangelist, and it just spoke to me. It was everything that we'd ever thought about in terms of infrastructure, how to run infrastructure, how to build apps and infrastructure, you know, event-based architecture, event-driven architecture. I just remember messaging Niko, and I was like, all right, we've got to go back to AWS, this is crazy, this stuff is so much better, let's just start building on that. And we've kind of been building there ever since.
0:02:36 - Mark Roberts
Yeah, I think that EventBridge stuff is really amazing. I'm so happy with how easy they made it for us to be able to build our entire event-driven architecture around EventBridge. If we did not have EventBridge, I think we would have stood up our own Kafka cluster or some other system so that we could emulate what we're able to do natively with EventBridge today. It's been a huge boon to our productivity.
0:03:05 - Tyler Wells
Yeah, and I really would not want to have to be operating Kafka right now, especially, you know, in the early days. It's just like, let's just let this thing run, and EventBridge just runs. It basically works. It's got a ton of integration points. It's easy to run filters on it. If you forget something in the pipeline, you just add another sort of destination or spot for it and carry on. It's sort of at the heart of everything we do. So it's been real nice.
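As a rough illustration of the EventBridge filtering mentioned above, here is a tiny TypeScript sketch of how a rule's event pattern selects events. This only mimics the matching semantics; it is not AWS's implementation, and the event source, detail-type, and fields are made up for the example.

```typescript
// EventBridge-style matching: a pattern is a nested object whose leaves are
// arrays of acceptable values. An event matches if every leaf matches.
type Pattern = { [key: string]: Pattern | string[] };

function matches(event: any, pattern: Pattern): boolean {
  return Object.entries(pattern).every(([key, expected]) => {
    const value = event?.[key];
    if (Array.isArray(expected)) {
      // Leaf: the rule lists acceptable values; any one of them matches.
      return expected.includes(value);
    }
    // Nested object: recurse into the sub-pattern.
    return typeof value === "object" && value !== null && matches(value, expected);
  });
}

// Hypothetical event and rule, just to exercise the matcher.
const event = {
  source: "propel.ingestion",
  "detail-type": "DataSourceUpdated",
  detail: { status: "READY" },
};

const rule: Pattern = {
  source: ["propel.ingestion"],
  detail: { status: ["READY", "SYNCING"] },
};

console.log(matches(event, rule)); // true
```

The nice property this sketch shows is the one discussed in the episode: adding another consumer is just another rule over the same bus, with no changes to the producer.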
0:03:33 - Mark Roberts
And I think the other thing we should touch on is CDK. CDK for configuring all of our resources in AWS has been a big one as well. We can stand up so much infrastructure. There's, obviously, no clicking, right? We don't click around to configure things in AWS. But what are the alternatives? You can look at Terraform, but Terraform doesn't have that same level of flexibility that CDK does, where you can really programmatically set things up. I never had a chance to look at Pulumi like you did. I don't know if they went with more of a Terraform or CDK approach, but curious to get your thoughts there.
0:04:07 - Tyler Wells
Yeah, it's definitely more of a CDK approach, so it's more programmable, right, more like writing in a real language as opposed to HCL.
0:04:17 - Mark Roberts
Yeah.
0:04:18 - Tyler Wells
Not a big fan. I mean, yeah, a lot of people use it. You know, YAML on steroids, but not my favorite.
0:04:28 - Mark Roberts
It's cool what you can do with it. But we've been implementing our Terraform provider and getting really well-versed in HCL, and it's interesting. They made some choices there.
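To make the "programmable infrastructure" point above concrete: real CDK instantiates constructs from aws-cdk-lib, but even a stripped-down sketch shows why a real language beats a static config format for repetitive resources. The service names and settings below are made up.

```typescript
// One loop replaces N hand-written resource blocks, and the naming and
// dead-letter-queue conventions live in exactly one place.
interface QueueDef {
  name: string;
  visibilityTimeoutSec: number;
  deadLetterQueue: string;
}

function queuesFor(services: string[]): QueueDef[] {
  return services.map((svc) => ({
    name: `${svc}-events`,
    visibilityTimeoutSec: 30,
    deadLetterQueue: `${svc}-events-dlq`,
  }));
}

const defs = queuesFor(["ingest", "query", "billing"]);
console.log(defs.length); // 3
console.log(defs[0].name); // ingest-events
```

In HCL you would typically reach for `for_each` or copy-paste; in CDK-style code, helpers, conditionals, and shared conventions are just ordinary functions.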
0:04:40 - Tyler Wells
Definitely. Let's go on to the second question: what CI/CD solution does Propel take advantage of, and why was this chosen?
0:04:52 - Mark Roberts
Well, I remember when I joined, what did we have first? An AWS CodeBuild setup. And I think what we were trying out back then was a special way to run CDK projects that was pretty tightly integrated with CodeBuild, and on paper it sounded really interesting. In practice, I remember there were some shortcomings. First of all, we ship all of our software to a private CodeArtifact repository in AWS. However, we do our development in GitHub, and I remember that tying together GitHub with CodeBuild and CodeArtifact, and actually getting information out of the CodeBuild process and into the GitHub pull request, was pretty challenging. It just didn't feel like a great experience.
For some of us, for example at Twilio, we would open up PRs and then bots would automatically go and post messages on the PR to let you know the status of it. Setting that stuff up with CodeBuild was a little more challenging, and I remember it was a little bit slow using CodeBuild. So one of the first changes we made was moving to CircleCI. CircleCI was a tool that we had used previously on the video team; all of the SDKs that we had built at Twilio for video were using CircleCI to run builds in parallel, post status updates to the PRs, and let us know we could merge them. So that was a really natural fit. We got by with CircleCI for, I don't know, six months? How long were we running it?
0:06:42 - Tyler Wells
Oh no, I think it was north of a year.
0:06:45 - Mark Roberts
Yeah, it was easily north of a year.
0:06:48 - Tyler Wells
Yeah, it was north of a year. And then the big reason why we changed was partially because we had this enterprise license with GitHub and we had all of these Actions that we were paying for but not using, and so it was like, why are we paying twice? And GitHub Actions has been super solid for us, and I think it's been somewhat of a tighter integration without having to deal with yet another third party. And so we ended up shutting down everything that we had in CircleCI, brought it back down to a free developer license. We still have it, but nothing's being built on there right now, and so we've gone 100% GitHub Actions. That's probably now at least, like, the last six months, eight months.
0:07:35 - Mark Roberts
Yeah, and we're very happy with it. We have had to tweak it a bit. We did eventually move some projects to self-hosted runners where we wanted to build for ARM64, or we just wanted bigger machines for tackling some of our bigger builds. But GitHub Actions has been great. All these providers today have kind of converged on the same set of functionality, so that migration from CircleCI to GitHub Actions was really natural, and it's working great now. Agreed.
0:08:06 - Tyler Wells
Question three What is the change management process used to merge and deploy to production?
0:08:15 - Mark Roberts
Yeah, so that is a good question. I'm trying to think of where exactly we want to start, because whenever a change is proposed, I kind of want to ask: where does it come from? Is this a feature addition we want to ship? Is this a bug fix we want to ship? Is this a refactor that engineering needs to do in order to make maintaining the code going forward easier? So, depending on where the change is originating from, we might want to ask some questions. If it's a feature addition, we might want to ask: okay, have we properly specced the feature? Do we know what we're building? First of all, do we know the change we're introducing? That's really important because that's what helps us, as engineers, check out each other's work, go do the review process, and make sure that the change represented in this pull request actually represents the desired functionality. Similarly, if we have a bug fix, we always want to be super clear: what is the bug we're trying to solve here, and is this truly a bug? Could there be customers depending on this functionality that we might break when we fix the bug? So these are things we're keeping in mind as reviewers.
Let me just tackle the bug fix case first, because I think that's easiest. When we get notified about a bug, step one is always to write a test in the pull request that demonstrates the bug, and this test should fail, because if it fails, that means we really found that bug; we know that we found something that's wrong. Step two: the engineer implements the fix for that bug, usually in a follow-up commit, and once that change is implemented, we see that the test is passing. Okay, so that's kind of step one: we have a pull request with a test that was failing, a fix, and now the test is passing. We push that up to GitHub and our automated tests run. So we lint the PR, we build the PR, we run unit tests and integration tests, and if all of that passes, then we do something called validation. Validation means we actually take that pull request and deploy it to our staging environment, where we run automated cluster tests on it to ensure there have been no regressions in any of the other functionality. If that looks good, the PR is now validated, and as long as we've gotten one or two approvals on the PR, we're good to go.
We can merge that thing to main, and it'll kind of do the same deployment process where it will send it out. It'll do another run of cluster tests. It'll ask one of our engineers: are you sure you want to deploy this now to prod? Usually we say yes, and then we draft the release notes.
So we say this change represents bug fix X; it's a customer-facing change. We also have an opportunity to list any non-customer-facing changes, for example, maybe a refactor, or a dependency needed to be bumped. And then we always include the rollback strategy. So in the odd case that this change that we're shipping to prod has an issue, we always instruct ourselves how to roll it back; 99% of the time you just redeploy the last release. So that's the bug fix scenario. The feature addition scenario, honestly, is pretty similar.
The reviewer of the PR is also just going to make sure that the changes there match the feature that was requested from our product team. They're going to make sure that tests are being written that actually test the desired functionality, not only the happy path but the sad path too, so error cases and unexpected cases. And then we're going to do that same process. We ensure that automated linting, building, unit tests, and integration tests are run. We do a validation process where automated cluster tests run in our staging environment. And then, if we get the approval from one or two other engineers, we merge it, it ships to prod, we draft the release notes, and we're good to go.
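The "failing test first, then the fix" flow described above can be compressed into a tiny TypeScript example. The helper function and the bug are invented purely for illustration; the point is the shape of the workflow: a regression test that fails against the buggy code, then passes once the fix lands.

```typescript
// Buggy original (step 1's test fails against this): truncation meant
// small ratios like 0.005 were reported as "0%".
// function percent(ratio: number): string {
//   return `${Math.floor(ratio * 100)}%`;
// }

// Step 2, the fix in a follow-up commit: round instead of truncating.
function percent(ratio: number): string {
  return `${Math.round(ratio * 100)}%`;
}

// Step 1's regression test, kept in the suite so the bug cannot return.
function testPercentRoundsHalfUp(): void {
  const got = percent(0.005);
  if (got !== "1%") throw new Error(`expected "1%", got "${got}"`);
}

testPercentRoundsHalfUp();
console.log("percent() regression test passed");
```

Because the test is committed alongside the fix, a reviewer can check out the first commit, watch the test fail, and confirm the bug was real before approving.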
0:12:25 - Tyler Wells
It's pretty straightforward, I think, but it also has a lot of checks and balances and safety in there as well.
0:12:32 - Mark Roberts
Yeah, and this is something we blogged about a few times. We've got a couple blog articles about this. If any of our listeners want to read more, we'll share those links.
0:12:42 - Tyler Wells
Makes sense. All right, next question, this is question number four: where are repos stored? Well, that's pretty easy, everything's in GitHub. Yeah, simple answer. Let's move on. So question number five: how is QA and testing performed? Automated, automated, automated. Except for kind of the final stage, where we do almost like an acceptance-level test with product management, myself maybe, and/or Nico. But for the most part I would say, you know, we've got pretty good coverage across just about everything at this point, and we automate absolutely as much as we can.
0:13:25 - Mark Roberts
Yeah, very rarely there are things you cannot test, but when you hit those, it's worth asking: actually, is there a way I can automate this, or is there really something fundamental about this? For example, maybe costs, or maybe there aren't APIs that you can call to do something in an automated fashion. It's always worth thinking: must this be a manual process, or can I automate it? Because if you automate it, it's usually just going to pay back. Automate all the things, all right.
0:14:02 - Tyler Wells
Question seven: how is integration tested?
0:14:08 - Mark Roberts
So integration. We have multiple services. All of these services usually need to cooperate together within our cluster to provide the customer experience we want to offer. So we do two things. We have these automated cluster tests that I mentioned as part of our pull request process. Every PR is validated and we run cluster tests for the affected service as part of that validation process. In tandem, in our staging environment we have cluster tests running on a loop every 30 minutes. So our staging environment is always running the full set of cluster tests to make sure that everything is working together. So keeping these two sets of tests has enabled us to ensure that everything is integrated and working together properly.
0:15:01 - Tyler Wells
And then we've most recently, over the last number of months, added another layer of testing, which is Cypress. So now we've got all of those tests from the front end. So everything driving through our console, actually through the user interface itself, is now part of our sort of testing toolbox as well. That's right. Question eight: are K8s used for scale, and is it possible to scale to zero? I know you've got a couple of answers here, but yes, we 100% use Kubernetes. We're actually using AWS EKS to orchestrate and manage all of our Kubernetes. So our entire data plane is all managed and built inside of Kubernetes.
Let's see, mostly what we do, like Mark said here, or as I'm reading from some of this here, it's all third-party software. So we typically try to install as much as we can from Helm charts, well, I'd say we start with operators, then we go to Helm charts, and then we have manifests specifically where we have to configure things. But even the manifests are all in code. And so, as Mark talked about earlier with us using CDK, we also use another thing called cdk8s, which is essentially CDK for Kubernetes, and that allows us to programmatically and dynamically create manifests that are then managed in the same process as our CDK code, to apply that configuration or apply those manifests to our Kubernetes clusters. I really like that approach.
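The cdk8s idea described above, generating Kubernetes manifests from code instead of hand-writing YAML, can be sketched in a few lines. Real cdk8s builds a construct tree and synthesizes YAML; this stripped-down version just returns the manifest object directly, and the component name and image are hypothetical.

```typescript
// Because manifests are produced by code, conventions (replica defaults,
// labels, probes) can be computed and shared across every workload.
interface DeploymentManifest {
  apiVersion: string;
  kind: string;
  metadata: { name: string };
  spec: {
    replicas: number;
    template: { spec: { containers: { name: string; image: string }[] } };
  };
}

function deployment(name: string, image: string, replicas: number): DeploymentManifest {
  return {
    apiVersion: "apps/v1",
    kind: "Deployment",
    metadata: { name },
    spec: {
      replicas,
      template: { spec: { containers: [{ name, image }] } },
    },
  };
}

const manifest = deployment("example-operator", "example/operator:1.0", 2);
console.log(manifest.kind, manifest.spec.replicas); // Deployment 2
```

Serializing objects like this to YAML and applying them is essentially what cdk8s automates, which is how the manifests end up managed in the same pipeline as the rest of the CDK code.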
0:16:44 - Mark Roberts
Yeah, Kubernetes almost gives you this install-anywhere option. If you can spin up a Kubernetes cluster in AWS or one of these other cloud providers, you now have a place where you can deploy third-party software using those operators or Helm charts. And yeah, you'll have to configure some things, but you get from zero to 100 really quick.
0:17:06 - Tyler Wells
And then, following on to that, our first-party software, which is stuff obviously we've written here at Propel, doesn't run in Kubernetes. We're using Fargate for that. But even getting to Fargate for a lot of that stuff was an evolution. A number of our services all started in Lambdas, but then, when cost as well as performance and latency were sort of undesirable for some of our services that needed to be more synchronous in nature, needed to be more responsive, the team moved everything to Fargate containers. And so now things like our entire API, which for the most part is a GraphQL API, are all in Fargate containers. A number of our other services are running there in Fargate containers as well. So we have a nice mix of ECS for a lot of the third-party stuff and EKS for a lot of the first-party stuff.
0:18:00 - Mark Roberts
Oh, vice versa.
0:18:01 - Tyler Wells
Oh, did I screw that up? Yeah, sorry, reverse that: EKS for the third-party stuff, so the Helm charts, the operators, everything else like that; ECS for all of the first-party stuff, so that's everything that we've written ourselves. Sorry, reverse that. Thank you, Mark. It's good, podcast review, almost like a code review there.
0:18:21 - Mark Roberts
All right.
0:18:23 - Tyler Wells
What are the biggest considerations for cost efficiencies and best decisions to scale without rapidly exhausting funds?
0:18:34 - Mark Roberts
I feel like that's a deep one.
0:18:36 - Tyler Wells
That's a deep one, but I would probably start with, if I'm thinking of services, I'm probably starting with serverless, because you're not paying for things that are sitting around idle. Obviously, you know, that comes with some level of performance hit. Yes, you can do things to try to keep those warm. You can also go with, is it provisioned concurrency, if I remember? But if you're doing provisioned concurrency, you might as well just have EC2s or, in our case, Fargate containers. There's no real reason to pay for that. Plus, they're more expensive. Let's see, other cost efficiencies.
Watch your AWS bill weekly. We're using Vantage, I believe, and Vantage allows us to get deep insights into those bills. So things like CloudWatch can creep up on you out of nowhere, and all of a sudden you're paying $1,000 a week for CloudWatch that you may not be using. DynamoDB is another one; that one kind of is a money-printing machine for AWS. You've got to watch that. So constantly having a good handle on, you know, the transactions you're making to Dynamo, what you're writing to Dynamo, watching your testing if you're doing a lot of stuff to Dynamo, but just really paying attention to those bills. Anything else you want to add to that, Mark?
0:20:01 - Mark Roberts
I just think about some of the tactical things we've considered as well. So, if you need one of your services to be highly available, how many instances do you actually need to run? Can you get away with two of them? Do you need three of them? That's obviously going to imply different costs. Additionally, I think about where you deploy services to. So do you need multi-region support? Do you need a single region? These are different tradeoffs you can make in order to control your costs. One other thing is, if you have applications which are not latency sensitive, or they can tolerate a little bit of latency, scaling to zero is great for that. That's why we use Lambdas for a lot of our asynchronous events. The asynchronous events do not need that super fast latency that our synchronous APIs do, and so we can afford, and we're very glad, to just turn those off when they're not handling asynchronous events. So scaling down where possible also helps control costs.
0:21:19 - Tyler Wells
Yeah, in certain cases. I know you asked earlier about Kubernetes and scaling to zero; with our data plane we can't do that. Obviously, that data is always there and available, so we're not ever scaling that to zero. But, as you mentioned, our admin APIs, or some of the APIs that we have, can scale to zero because they're just called so infrequently. But yeah, to kind of add to that, I have started a weekly habit of reviewing our AWS costs and looking for creep. So any place that I start to see things going up in double-digit percentages week over week, I start asking questions and we start poking into that. Sometimes it's because, hey, we have more customers. Other times it's like, oh shoot, I left this metric stream on, and that's going to Honeycomb, and now that's costing me a whole bunch of money, and I should have turned it off or scaled it down.
0:22:11 - Mark Roberts
Yeah, I'm also remembering one more thing, which is just that the AWS cost calculators are actually very good. I'm remembering our evolution of the API layer from Lambda to Fargate. We had actually chosen some very large Lambda sizes to try to improve API response times, and then, by using the AWS cost calculator, we could see that for what we were paying for that size of Lambda in order to achieve a certain level of performance, we could get the same thing with a smaller Fargate instance that was always on, and still have great performance, even better performance. So understanding the workload and then going to the AWS cost calculator can help a lot too.
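The Lambda-versus-Fargate comparison above boils down to simple arithmetic, sketched here in TypeScript. The rates are illustrative placeholders (roughly us-east-1 shaped; check the AWS pricing pages for current numbers), and the workload is hypothetical: a steadily busy API rather than a bursty one.

```typescript
// Assumed, illustrative rates -- not authoritative pricing.
const LAMBDA_PER_GB_SECOND = 0.0000166667;
const FARGATE_VCPU_PER_HOUR = 0.04048;
const FARGATE_GB_PER_HOUR = 0.004445;

// Lambda cost for constant load: average concurrency * memory, billed for
// every second the functions are actually running (request fees omitted).
function lambdaPerHour(memoryGb: number, avgConcurrency: number): number {
  return memoryGb * avgConcurrency * 3600 * LAMBDA_PER_GB_SECOND;
}

// Fargate cost for always-on tasks sized to handle the same steady load.
function fargatePerHour(tasks: number, vcpuPerTask: number, gbPerTask: number): number {
  return tasks * (vcpuPerTask * FARGATE_VCPU_PER_HOUR + gbPerTask * FARGATE_GB_PER_HOUR);
}

const lambdaCost = lambdaPerHour(4, 5);      // 4 GB functions, ~5 always busy
const fargateCost = fargatePerHour(2, 2, 4); // two 2 vCPU / 4 GB tasks
console.log(lambdaCost > fargateCost); // true: steady load favors Fargate
```

With these assumed numbers the large-Lambda setup costs around $1.20/hour while the always-on Fargate pair costs around $0.20/hour, which is the kind of gap the cost calculator made visible; for spiky, mostly idle traffic the comparison flips, since Lambda's cost drops to near zero.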
0:22:57 - Tyler Wells
Yeah, that was a significant price difference, is what I remember from doing that. All right, next question: what is the best solution for operational, super low latency, real-time data storage? So when I hear that question, I hear operational, and I feel like that's not something you should be running. When I think operational, that sounds to me like a Honeycomb, a Datadog, a Splunk, or somebody like that. Because if you're thinking operational data, you kind of want to get that handled. I mean, you can run it yourself if you want; I don't know many big enterprises that do. You can start to run things like Prometheus, and I think that works reasonably well, but now you've got another service you need to run. If you're asking my sort of choice for best solution, I'm probably starting with Honeycomb. I think it gives you a lot of bang for your buck.
It's a very different way of thinking about, you know, observability, but from an operational standpoint, I think we've been very happy with it. Yes, it's got some learning curve to it. It's not your typical, hey, I'm sending a bunch of metrics to Datadog and I've got dashboards and I'm setting alerts on dashboards. It's definitely a different way of thinking, a different mindset. But as soon as you sort of move your mind over to, hey, everything is in this event, and I can make these events as wide as I want, as verbose as I want, and just start sending those things to Honeycomb, then I can start asking questions. Once I sort of understand the patterns of questions or the nature of questions that I'm asking, I can create triggers and create SLOs and SLIs. That's how I sort of think about, you know, operational, super low latency, real-time data storage: don't do it.
0:24:50 - Mark Roberts
Yeah, and if you use something like Honeycomb or another provider, you kind of separate the concerns. Like, you operate your infrastructure for your product; you do your best to ensure everything's up and running all the time. You don't necessarily want your operational system, the thing that's supposed to tell you the health of the service over time, the thing that's supposed to page you, also running along with that infrastructure, because if an issue occurs, you may lose the operational metric system that's supposed to report on the health of your product, and now you don't know that it's actually not running. So by delegating that to a third party, you can feel a little more resilient to issues that may occur internally.
0:25:37 - Tyler Wells
Agreed 100%. Let's see, next question: are there any reporting tools or best practices to plan, report, and prepare for SOC, PCI, and audit compliance? I mean, best practices? Sure, do things the right way, I don't know. No, I mean, I think you've got to get a platform, right? So we started with Trustero, and then we've recently moved to a platform called Vanta, which is, I think, sort of the big gorilla in that marketplace. Most of the platforms all do the same thing: you're gonna install a bunch of agents into your systems, and they're going to run tests against those systems to make sure you're following best practices for security, compliance, and everything else like that.
It's largely a necessary evil. It's a slightly painful necessary evil, depending on the size of your organization. But I think you just have to bite the bullet, pay the cost, go to somebody like a Vanta, and dedicate someone's time and effort to get everything set up, and you'll achieve, you know, SOC 2 Type 2. We got ours earlier this year. I ran, I would say, the vast majority of that stuff myself. Mark came in and helped me when there were things that, you know, I needed some additional lift on. But you know, you just kind of bite the bullet and you just gotta do it. There are really no shortcuts here. Just get onto a platform and go forth and do it.
Next question: how are payments best handled for early startups, Stripe or other solutions? Well, we're all Stripe. Yeah, this is again one of those things that we didn't really want to mess with; we just sort of wanted it handled for us. Stripe has been reliable and has been pretty easy for us to set up. So I say choose Stripe or one of the competitors, largely up to you. I don't think you want to go find yourself another payment processor; Stripe has all the APIs we need, and it's been simple enough. I would probably just stick with that. Yeah, you've got to pay a little bit extra, I think, percentage-wise, but you get so much tooling along with that in the APIs. I don't think it's worth the hassle, especially at the small scale that you're at, or small-scale startups are at, to even mess with it. Yep.
Next question: is there a better payment solution at scale? I mean, that's hard to say. To talk a little bit about our billing: we've just now started to add, or are moving towards, self-service, and so we've brought in a billing platform called Metronome, and Metronome handles basically all of our event-driven billable items, I guess is the right way to look at it, right? And then that integrates with Stripe. And so out of our EventBridge we send the billable items in the form of blobs of JSON; those all land inside of Metronome. We have different contract types, and Mark, you can probably expand more on what the right names are. I'm calling it something too generic.
0:28:52 - Mark Roberts
There's contracts, plans, on-demand, trials; Metronome takes care of all of that. But what's great is we do exactly what you said, Tyler, which is we send those usage events from our EventBridge directly to Metronome, and then they're responsible for kind of aggregating those and then eventually either presenting the invoice to the customer through Stripe or charging automatically through Stripe.
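The usage-event flow described above, a billable event coming off the bus as a JSON blob and being shaped into a metering record, might look roughly like this sketch. The field names and event shape here are hypothetical illustrations, not Metronome's actual ingest schema.

```typescript
// A made-up metering record: an idempotency key, who to bill, what
// happened, when, and the billable quantities.
interface UsageEvent {
  transactionId: string; // idempotency key so retries don't double-bill
  customerId: string;
  eventType: string;
  timestamp: string;
  properties: Record<string, number | string>;
}

// Shape a (hypothetical) bus event into a metering record.
function toUsageEvent(busEvent: {
  id: string;
  detail: { accountId: string; queries: number };
  time: string;
}): UsageEvent {
  return {
    transactionId: busEvent.id,
    customerId: busEvent.detail.accountId,
    eventType: "query_executed",
    timestamp: busEvent.time,
    properties: { queries: busEvent.detail.queries },
  };
}

const record = toUsageEvent({
  id: "evt-123",
  detail: { accountId: "acct-42", queries: 17 },
  time: "2023-06-01T12:00:00Z",
});
console.log(record.transactionId, record.properties.queries); // evt-123 17
```

The idempotency key is the important design detail: event buses generally deliver at least once, so the billing provider needs a stable ID to deduplicate on, or retries would inflate invoices.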
0:29:21 - Tyler Wells
It's pretty simple. I don't think it's something to overthink right here. If you get to the point where you've outgrown Stripe, you've probably got a really good problem on your hands, and you can probably negotiate with a bunch of other payment providers out there, but you'll probably stick with them for quite some time. Oh, a Propel question? Cool.
Can Propel scale to allow direct customer connectivity to data for the purpose of customers using their own BI tools, for reporting? For example, customers use Power BI, or a customer uses Power BI and wants to connect to the data. Well, 100%, yes. We're an API, so Propel does expose APIs that can then be utilized by your customers. You can choose, as the customer of Propel, how you want to expose those to your customer, so I would probably suggest in that case proxying them. But you can essentially provide the same type of data that you're getting from our GraphQL API to build your reports or your data applications. You can take those same things and serve those to your customers as well, giving them an analytics API so they can pull that data into their infrastructure and do what they want with it.
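The proxying suggestion above is worth sketching: the customer-facing service holds the analytics API token server-side and forwards a restricted query, so end customers never see the credential. The endpoint URL, query shape, and header layout below are assumptions for illustration, not Propel's documented API.

```typescript
// The request the proxy would send upstream, built but not yet executed.
interface ProxiedRequest {
  url: string;
  headers: Record<string, string>;
  body: string;
}

function buildProxiedRequest(tenantId: string, metric: string, apiToken: string): ProxiedRequest {
  // The tenant filter is injected server-side; the end customer only picks
  // the metric, so they can never query another tenant's data.
  const query = `query { metric(name: "${metric}", filter: { tenantId: "${tenantId}" }) { value } }`;
  return {
    url: "https://analytics.example.com/graphql", // placeholder endpoint
    headers: {
      Authorization: `Bearer ${apiToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query }),
  };
}

const req = buildProxiedRequest("tenant-7", "revenue", "secret-token");
console.log(req.headers.Authorization.startsWith("Bearer")); // true
```

Handing this request to `fetch` in the proxy would complete the flow; the end customer's BI tool only ever talks to the proxy, which is what keeps the token and the tenant isolation server-side.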
0:30:41 - Mark Roberts
Yeah, I think this is a really interesting use case. We definitely want to continue to invest there and make it easier to plug in additional tooling.
0:30:52 - Tyler Wells
I would definitely say our more sophisticated, e.g. probably Series B and above, customers, that also have more sophisticated customers, all more or less demanded and asked about the exact same thing. They want to build their data applications, but then also want to empower their customers to pull that data into their own systems so they can do their own analytics. So 100%, we support it. I kind of answered this one, but we'll pull it up here again: does Propel use GCP? Why or why not? We use it for testing.
0:31:28 - Mark Roberts
Yeah, that's what I was gonna say. We use it for testing.
0:31:32 - Tyler Wells
Yeah, we have customers that come to us with data sources that are in the Google ecosystem, so we need to understand how those work. So we do use it for testing, but we're not really using it for much of anything else. We're not doing any production workloads on there, nothing that's customer-facing. We just couldn't really find a reason that we needed it. AWS has offered us what we needed; we've stuck to AWS and we'll continue to stick with them for the time being. Is there any Propel generative AI research, and any recommendations?
0:32:09 - Mark Roberts
I don't have any off the top of my head right now. Certainly, we share them in our reading room on Slack. I think my main recommendation would be to just follow the papers that get retweeted on Twitter and read into those. Every once in a while I see something super interesting that comes up, but yeah, I don't have a list with me right now.
0:32:39 - Tyler Wells
Yeah, I mean, it's OpenAI, it's LangChain, it's one of the other big LLM creators out there, or model makers. Yeah, I don't know if I have anything that I would say, like, oh, you've gotta be doing this. I think it's mostly that we're still experimenting ourselves, trying to decide where we want to fit it in and when we'll fit it in. And largely the way that at least my mind starts to think about how we would use generative AI is: what can we introduce from a product standpoint that is gonna make our customers' lives easier, better, more efficient when it comes to utilizing their data to empower and/or build data applications? And so that's pretty much what's going to help me drive any sort of thinking around that when we start to go down that path.
0:33:31 - Mark Roberts
Yeah, I would also just add, like, measuring is really important. One of the papers I saw recently was about measuring the quality of AI model outputs, And so when building a product around AI, I think it'll just remain important to measure the quality of the product experience. It's subjective. You need to capture a lot of information from the customer to do that. But yeah, I would say just monitoring.
0:34:01 - Tyler Wells
And, of course, Propel can help you do that. So any of that token usage and/or usage metering that you need on top of your generative AI solutions, we can help capture, so you understand how those things are being used and utilized. All right, our last question: in today's rapidly evolving cloud stack and integrations, what are your thoughts on using Azure and, more specifically, a .NET stack?
0:34:32 - Mark Roberts
I haven't used .NET in a long time, but C# is good. I think they have F# as well, but honestly, I've been out of that ecosystem for so long. I mean, I would say use what your team is effective and productive in. I'm not familiar with the cloud stack integrations or any integrations with Azure, but .NET, there are good languages in the .NET suite.
0:35:04 - Tyler Wells
Yeah, and I think I'll echo Mark's sentiment. You know, utilize what's going to make your team the most proficient with what they have to do in solving their jobs. I mean, there are different languages for different things, different problems you're trying to solve. In terms of looking at the cloud ecosystem itself, I think Azure was sort of the last to arrive. I believe it was AWS and GCP, and then Azure sort of came later to the game. What does that mean? I have no idea. I've not used it, I've not really played around with it, and I've really not had a need to go play around with it.
So, you know, I think largely: what languages are your developers coding in today? What languages make them the most efficient at solving the types of problems you have to solve for your business? Where are you going to get the type of infrastructure you need with high reliability? They're probably all the same at this point. But if there's something super specific that you must have at one of the other clouds, you know, maybe utilize it, maybe you wrap it in an API, maybe you find some alternative. But we found sticking to a single provider to be the best for us. We've found sticking to two languages, in our case TypeScript and Go, to be the most efficient for us. We don't want a proliferation of languages, but again, it all depends on your team and your organization.
You know, I think that's it. That wraps up this AMA. Mark, thanks for joining me, and we'll do another one of these things soon.
0:36:53 - Mark Roberts
Yeah, sounds great. I enjoyed working through these questions with you.
0:36:58 - Tyler Wells
All right, we'll talk soon. Thanks, Mark, see ya.
0:37:01 - Mark Roberts
Bye, bye.
Transcribed by https://podium.page