Episode 88: Christine Yen of Honeycomb on the Role of Observability in Bridging the Ops/Dev Split
Christine Yen joins host Brian Dawson on the latest episode of DevOps Radio to share her journey from engineer to co-founder and CEO of Honeycomb and why observability is so important in today’s software development.
Brian Dawson: Hello. Welcome to another episode of DevOps Radio. This is Brian Dawson. I'll be our host. Today I am joined by Christine Yen, cofounder and CEO of Honeycomb. Hello, Christine. How are you?
Christine Yen: Hey, Brian. I'm good. How are you?
Brian Dawson: I am doing great, doing great, excited to have the opportunity – I've actually, as we discussed a bit, had a number of touchpoints with Honeycomb and have watched them closely, but did not know a lot about your story, so I'm excited to engage and learn more.
So, to that end, Christine, to get us started, can you give our listeners a brief introduction into what you're doing today and what led you to where you're at today?
Christine Yen: Yeah. So today, I am, as you mentioned, CEO and cofounder of Honeycomb, but I started off as a software engineer. I've been writing code at Silicon Valley startups for a while now and I had always been focused on the product side, always been focused on building something that customers and users used and wanted to make sure that we were building a really great experience for them. At the company directly prior to Honeycomb, I worked at a company called Parse with my cofounder now, Charity Majors, and Parse was a really formative, interesting experience for us.
It was a multi-tenant platform. All the engineers were in support, which meant that on any given day, we could be on the frontlines dealing with a customer who was angry that our platform seemed to be down from the perspective of their app. And what this meant is that my whole career, especially, was building software that people depended on. I cared a lot about our service being reliable and fast and all the reasons you outsource certain parts of your stack to a platform. And trying to figure out, okay, why – when everything else about our service looks like it's up, why does it look like it's misbehaving for this one customer? Why does it look like it's misbehaving for their app, their use case? And detangling that mess, kind of following those clues, understanding why software might misbehave in certain circumstances, that's really what laid the groundwork for my interest in Honeycomb as well as Charity's. She was very much more on the ops side of the equation while I was more on the dev side, and it's been a real delight over the last eight years – wow, eight years – of us knowing each other and working together to see that line between dev and ops get blurred and software ownership really become part of our normal lexicon in talking about shipping software, with the folk who are writing the software really thinking about, and having the tools to be aware of, what their software is doing in the wild.
Brian Dawson: Yeah, yeah. So, as a software developer myself who came up – well, actually, let me share a bit of my background. I kind of came of age in software development in the console game industry, where when you developed things, you took a CD-ROM burner, burned them to a CD and then shipped them out. You had very little ability to debug your code in a true production environment, and then _____ in that production environment, you really didn't have the ability to get feedback. So between that and developing server-based software, where you develop something, go through the old traditional big bang deployment, and then the rest of your time would be spent waiting for issues to arise and spending all night digging through logs trying to find the cause – I can thank you for identifying this challenge and taking it on.
Christine Yen: Yeah, it's those feedback loops that are so interesting and such an opportunity for people to learn. Right? That's what test-driven development was all about: think about what your code should do, make sure that it does that, and then build little feedback mechanisms to tell you when those assumptions start to deviate from reality. Again, this ops/dev split is something that is really interesting to me, still very much present out there, and is something that I ran into quite a bit, because when you think about 2013, 2014, 2016, these were the years when we were all, as an industry, moving towards containers and microservices and multiple storage engines and no longer just relying on a single database. One of the things that I ran into repeatedly as a software engineer trying to own my code in production was that a lot of my interactions with the ops team would be me writing code, being very proud of myself, releasing it after writing all the tests that I thought would be necessary, and having Charity or someone on her team knock on my door – metaphorical door – saying, "Hey, Christine, something is weird. We think it's something that you released. Can you come over here and take a look?" And I would go and I'd shoulder surf and there would just be these walls of dashboards that showed graphs of things that I couldn't tie back to my code. They'd be, you know, "Cassandra write throughput," or "CPU utilization on Cassandra host nodes." And they'd be like, "Hey, what do you think caused this?" And I would just be standing there thinking about my code, thinking about the language that I used, right, users and apps and maybe endpoints, totally lost and totally unable to engage in their world. And DevOps, observability, software ownership, all of these things are about merging those two, about taking the understanding of software as it exists on your machine and framing it in the language of production and the wild in the cloud.
Brian Dawson: Yeah, that's interesting. Yeah, kind of for want of a better word, native correlation between the activities on one side and the activities on the other side, which we can argue that it had been and maybe in some places still too much is based on people sitting in two separate chairs and two separate areas with two separate levels of expertise having to kind of manually reconcile and correlate the activity they're seeing on both sides, which is not efficient.
And I'm going to want to dig into the founding in a minute, but there's so much here and, as I told you, I've got to keep myself responsible. There's a lot I want to dig into, but what I do love about the observability movement – and I'll be curious on your thoughts – is that I feel that as we've pursued adoption of modern practices in terms of agile, continuous integration, continuous delivery, more pervasive automation, different approaches to architecture, API-driven design, microservices architecture, a lot of this is couched in moving fast and ultimately is rooted in a recognition that there's value in having fast, shared feedback. But in my observation, all too often, you know, we look forward, right – what do we do to get out the door?
And as we're establishing these processes, we're not actually staying true to the root of an agile or iterative approach, which is all about rapid feedback, or CI, which is all about rapid feedback. Somewhere along the line, especially as we get closer to production, that concept gets lost, and you know, that's where I take a lot of interest in observability. Before I move on, I'm just curious, do you have any comments, thoughts or observations on how the industry has or hasn't adopted a value system based on rapid feedback as we've tried to modernize?
Christine Yen: Yeah. You know, this has been something that we have chewed on consistently over the lifetime of Honeycomb. It's what are these kind of social, cultural boundaries where practices almost get stopped or paused?
When we first started Honeycomb, Charity, being very vocal in the ops world, was like, "Oh, well, this is a tool for ops people." And we struggled for a while to try to differentiate what we were doing, what observability meant, from monitoring, even though we saw it and we were like, "Oh well, we want to reach people who are carrying pagers and we want to be talking to folk who care about reliability." And it took us a little bit, you know, a year or so of talking to people and really being immersed in this space, to realize yes, ops people today are the ones that are concerned with reliability, but developers are involved in the feedback loop too. They're involved in these release cycles. They're involved in making decisions and, holy crap, this awareness of production almost benefits developers more than it benefits ops people. And yet in so many organizations, there's, "Oh, well, production's ops' problem," or, "Oh, I can't get developers to care." And there are these assumptions about the other type of person, when in fact, hey, all of software's feedback loops, all of the cycle from your development machine to production, it's only getting shorter. It's only getting tighter, and our tools and our practices have to evolve to reflect that.
Brian Dawson: Yeah, and there's so much there. I'm going to bookmark – I'm going to make a comment and we're going to shift, but I want to bookmark that if we get a chance, you know, before we wrap this podcast, hopefully we'll get to go back and talk a bit about org structure, the impact that org structure has on tool acquisition and related decisions, and how that ultimately may have an impact on that kind of social boundary where practices stop. But we'll treat that as a teaser, and I want to shift a bit to, again, how you got here. As somebody that started developing plenty of applications, I have hard drives full of novel new things that were going to change the world, but frankly, I struggled figuring out how to take that journey from being an engineer or a developer with an idea to a founder that really brought something to market. Can you tell me a bit about your experience going from engineer to cofounder? What did you learn? What were the challenges? What was that thing that drove you to taking that leap?
Christine Yen: Yeah.
One thing I honestly have tended to skip over in descriptions of my background is that when I was 23, 24, I had a startup, it got acquired by a big company. I was unhappy with the big company and I was like, "Well, I'm young, I'm single, I don't have a ton to lose. My living costs are super low. I'm going to try the startup thing, see how it goes." And I started a company in the consumer space trying to help people book event spaces – something unimaginable now in COVID times, but this was ages ago – and spent a year building a lot of software that I was very proud of. And the company went nowhere, because I didn't have the right partner and we didn't know what we were doing on the business side, and I'm sure that there were problems with the market.
There were lots of things, and it was an experience that taught me a lot, exposed me to a lot. I went through Y Combinator and a lot of the startup motions, and it taught me a lot about the value of finding someone who really complements you, of finding a problem that you know enough about – right, you don't have to be an expert in it, but you know enough about it to be able to trust your own judgment and to be able to suss out when someone's giving you bad advice. And I think it also showed me that, hey, the barriers to trying something new and getting something started are numerable. Does that make sense?
Brian Dawson: They're numerable, but not insurmountable, I'd project.
Christine Yen: Right. And you know, a lot of things had to be right in my life, and it's certainly a privilege to have the financial stability to take a risk like that, but it meant that when the next opportunity came along – I actually came out of that thinking that I was not going to start a startup again, because it was too hard for all the right things to come together, and it was just like, "Oh, that was a phase and it was a fun experiment." What it meant is that when I had left – you know, Parse eventually got bought by Facebook, I was unhappy at the big company, you'll notice a cycle – I was chatting to Charity about what I should do next, and she was like, "Hey, what do you think about this space? I'm thinking about leaving Facebook." I just felt this moment of everything clicking together, where knowing her, knowing how the two of us have very similar work ethics and perspectives on how to attack work problems, knowing that we were compatible on that front while being delightfully complementary in terms of our expertise and skill sets, meant that it was really exciting to consider working on something with her. My philosophy anytime you change jobs is, okay, it's fine to be unhappy somewhere, but take what makes you unhappy and turn those into learnings about what you're going to do differently next time. And going into Honeycomb, I felt at least the confidence of, well, I'm not going to make the same mistakes that I did with the previous company. And here are all these things that make me really feel like something is real here. One of the things that's inevitable when you start a company is there can be a million people telling you, "Oh, well, there's these other companies that do this already. Aren't you guys too late?"
And certainly for us in 2015, looking around at the Datadogs and SignalFxs and many other successful companies at the time, it certainly felt like there were a lot of folk in the space, and what I said about that conviction, about knowing enough to have faith in what you were doing – that was something that was very necessary to push through all that noise and say, "No, really, observability is different. And let us tell you how."
Brian Dawson: Well, and I'm curious – at risk of turning this into one of my favorite podcasts, How I Built This – knowing enough to have what I'd almost call some competitive or logical confidence in this thing that you're emotionally excited about is one side of it, but is there a benefit to not trying to overanalyze it, to not knowing too much? 'Cause it sounds like, as you said, look, the challenges are numerous or numerable, but they're not insurmountable, and if when you start, you just focus on all of the challenges, is it fair to say that that's when you risk shying away, backing down? So I'm just curious if there's anything to be said for, hey, there were also some things I didn't know, and that allowed me to charge ahead and make this happen.
Christine Yen: Very much so. You know, Charity and I, in many ways, tried very hard to not be the stereotypical technical cofounders. We were like, "Okay, we know sales and marketing is hard. We know building the business is hard. We know it's important. We're going to give it the respect it deserves." And yet, over the years, it feels like we are still, even now, uncovering areas on the business side where we are having to learn and have to get better quickly. I remember someone telling us in 2017, "Oh, you think this part, getting your first few customers and building the product, you think this is hard. There's going to be a day when you're in hand-to-hand combat with all your competitors. And that's just the next phase." And we kind of laughed it off and were like, "Ha ha, yeah. We'll worry about that when we get there." Now in 2020, we're very much there, and it has been a long, painful road with many nights just recognizing how much we still have to learn. I think that if we had thought about how much –
Brian Dawson: How hard it would be.
Christine Yen: Yeah. Getting better at sales and marketing and really understanding the parts of how an enterprise deal gets put together – none of those are why we became cofounders of our company.
Brian Dawson: To solve problems and focus on – and I'll throw in an adage that seems apt. There's some TV show I just finished – Netflix binging too much – where continually they would say, "Well, that's tomorrow's problem." Right? Maybe they're saying, "Yeah, we know it's a problem, but that's tomorrow's problem. We'll cross it when we get there." I'm curious to ask, so, going from engineer to CEO and cofounder, I'm going to ask you kind of a multiple-level question, a compound question.
Christine Yen: My favorite.
Brian Dawson: How did being a practitioner help you on that journey, right, being an engineer, that had actually experienced this problem and then now as CEO, how does your experience as a practitioner help you guide and steer the company?
Christine Yen: Well, in the early days, my experience as a practitioner – the most boring answer here is that it let me realize and build the vision that we wanted. Right? It just gave me direct control. One level more interesting than that is, it made it really possible to relate to our customers in a much more effective way. Right? Early on, Charity and I would go on site to customers, and our conversations would be incredibly informal. They would be ones where they would tell us about their problems, their outages, their struggles. We would share our stories, we would talk about how a tool like Honeycomb could have helped us in the past, and I think that even now, Honeycomb benefits from a deep bench of credibility here in the way that we run our engineering team – we talk about it, and we talk about how observability has allowed us to do more with fewer people. I think that that is something that has continued to pay dividends. And now as CEO, honestly, I think that my background as a practitioner, as an engineer, almost means there's more to unlearn. If we just say that that bench of credibility is a constant, I have to train myself not to just jump in and try to solve a problem. I have to train myself not to be like, "Oh okay, let's just do this," and charge forward, and instead talk to the team and really – can you imagine a practitioner engineer and a sales leader, and the gap there?
There's been a lot to unlearn, but it's been – it's been a blast and incredibly fun and the learning is part of why people do startup things.
Brian Dawson: Yeah, that's a phenomenal answer – again, it inspired a deeper dive that I probably won't be able to go into. But, you know, now drilling down, you talked about observability, and it's interesting – earlier on, you mentioned, I forget the exact context, the question of, well, isn't observability just monitoring? Right? And frankly, I'll be honest, as an engineer that moved into a product marketing role, I was already sensitive to marketing, but before I was really sold on the power of marketing with honesty and integrity, I'd be like, "Yeah, observability is just another term for application monitoring."
And I now know that that's not the case, but maybe you can help me and listeners that may feel the same way better understand what is the difference between observability and application monitoring and if you're a developer or practitioner or even a release manager or IT infrastructure engineer and you're not looking into observability solutions now, why is it important that you do?
Christine Yen: There are a couple layers to this answer. The first: even though these are just words, they carry – monitoring as a word on its own, even application monitoring, carries with it a whole host of assumptions that are grounded in what today's or yesterday's technology could do.
For example, monitoring comes with these mental pictures of, you know, dashboards on the wall, or something that is ready out of the box with just lots of things for you to scroll through and look smart. The reason we wanted to move away from that, the reason we wanted to define a new word to carry with it a different set of actions, is that what we saw with monitoring was almost a passivity – an assumption that, okay, the graphs in front of me are what I need to find the answer to my problem. You're essentially relying on questions that someone asked in the past, codified into dashboards, and you're using that to triage a problem that you're facing today. And to us, that just felt incredibly backward.
When you're sick and you don't know why, you don't go to the doctor and have them look through your medical chart and just pattern match what might be wrong based on what happened in the past. The doctor asks you questions. "Okay, well, how are you feeling today? How long have you been feeling that way? What part of your body? How does it hurt?" New questions that help them pinpoint their understanding of your body, your system, your problem, in today's terms – in this active, problem-solving, mystery-hunt way. That was a big thing. The second part of this answer is, again, monitoring is still associated with a certain set of tools, because people are used to a monitoring tool plus a logging tool, or an application performance monitoring tool. You have your suite of tools.
Brian Dawson: Right, one dumps it into a log management tool and then there's another app that scrapes it – and yeah, sorry to interrupt.
Christine Yen: Um hmm, and people are used to their cornucopia of tools, and I look at that and I'm like, "Those are relics – that's a result of pathologies that we had back when each of these segments started." When we think about observability, when we think about what's happening today, the good news is, it's a sociotechnical problem. It's the teams and the cultural practices and the norms of having developers be responsible for their code in production, of putting developers on call, the practices and the processes that can be put in place when you have those people there and you give them the right tools. It's not just about a tool set anymore. It's not just about adding a certain type of data into the bucket.
Brian Dawson: Yeah, there are a couple of things I'll dig in on there. You know, a sociotechnical problem – you said at the end that there's a human element to it, and people that listen to DevOps Radio know we talk a lot about the culture and the human element. I'm getting more and more passionate about this reference: look, at the end of the day, practically any application of software that you can find starts with humans and ends with humans, even if it's an embedded system in an automobile. At the end of the day, you're enabling humans. And as much as we try to automate or abstract away some of those manual and human components in getting from point A to point B, especially, as you said, as we embark on our modern practices and the new ways of developing software, it by and large just is fundamentally a sociotechnical problem.
Right? The other thing that pops to mind – and you can tell me if I'm wrong or right; I'm always expecting someone to just tell me I'm dead wrong – is that I almost see the old view of application monitoring versus observability as static versus dynamic, and one being reactive remediation versus active learning. Is that a fair characterization of the difference between the two?
Christine Yen: That absolutely can be. I think one of the reasons I'm hesitating is that, you know, that is certainly the end goal we want. We think engineering teams, everyone should be able to do that sort of active, proactive exploration. Realistically, some folk are not there yet. When your house is burning down, you can't really start worrying about what's going to happen next week. And observability isn't constrained to folks who are able to think about next week.
But I think the active and the passive – or sorry, the static and the dynamic – absolutely. There's a recognition that, unlike 2005, not everyone is running the same framework anymore. My Rails app, my system, is no longer comparable to your system. Back when everyone was running Rails apps, these vendors could make reasonable guesses about what might have gone wrong based on these static assumptions of how you've architected your system. Today, you know, we used to throw around that screenshot of Netflix's "ball of death" of their microservices, but that's basically what more of our software looks like today.
Brian Dawson: So this is – so we've talked about how, you know, we've progressed as an industry, technology has progressed. We've progressed in the way as a community of software developers and operators, and Honeycomb is right there. Now, I'm curious to ask, what do you see next for the community, the industry, and is that next something that Honeycomb is focused on?
Christine Yen: I think this is not so much next as it is starting, and I look forward to the acceleration – specifically, the recognition of human factors in engineering. Jessica DeVita and a number of other folks have been talking about this for years, but when I look at SRE and DevOps, SRE is almost this next evolution of recognizing that things like burnout are a factor in well-performing engineering teams, that you have to balance across these humans. I think we're seeing bits of this around incident management tools, but generally, recognizing and taking into account these teams of humans is a direction that we're all headed in. It's really interesting to us. And as for Honeycomb's place in that, I think building for teams of humans is something that we've always seen as a core part of how Honeycomb is going to do what Honeycomb does.
Since the beginning, Honeycomb has really seen building for teams of humans as a core direction of how we want to approach observability. You know, early on – to ground this again in Charity's and my relationship before Honeycomb – during an on-call rotation, we wanted to build a tool that would help a Christine on call if Charity was asleep or offline somewhere, something that made it easy for me to tap into her brain, to almost look up, "Okay, well, when Charity was on call last week, what did she do? How did she solve this problem?" And in terms of Honeycomb's future, that's the direction that I look forward to going as well.
Brian Dawson: Awesome and as you can kind of gather from some of our earlier conversation, that, you know, hits me right in my heart, is that we're approaching problems that way.
I also identify that, you know, you've identified – or you've been identified – as a product-focused engineer, and I love the idea that really, at the end of the day, product focus is kind of customer and human focus. So I've been waiting to ask you this. We talked about it, so now I am really curious. We're going to move into a standard portion of a DevOps Radio episode that is called DevOoops. That's not DevOps, but DevOoops – OOOPS. Someone stuck an extra O in there at some point. And what that is: what is a software development challenge, or just a challenge, frankly, that you faced in your career where, you know, you were vulnerable enough to have made a mistake but it turned out to be a learning opportunity that you carried forward that you could, you know, grace our listeners with?
Christine Yen: I mentioned a little bit earlier that time when Charity's team would come knocking on my door. That was in reference to a real occurrence. Parse was a mobile backend service. I was building out the analytics product, and it was backed by Cassandra. And one day, we had an incident. I had been pushing new code. I was like, "Oh, well, I can't imagine what could have gone wrong." And I remember pairing with one of the engineers on her team to look through – at the time, we had logs, and I think we were actually capturing TCP logs to a Mongo instance to try to capture what was happening right that minute. And what we found was that one of our users' apps had launched in Russia, and they had done something funny with the implementation such that, essentially, they were running into what we now call a high cardinality problem. They had instrumented their application in a way that was capturing basically one data point in our analytics solution for every single possible permutation of an HTTP path with parameters that they could conceive of. And it just so happened that with that app, there were a lot of combinations. And due to the high cardinality nature of that data, this was causing our Cassandra cluster to suffer. It seemed incredibly unintelligible, not just because it was a new product, but because it was also in Russian, and I am pretty sure that the outcome of that day was just that we blacklisted that application from the analytics part of our product and threw some monkey patches in just to stabilize it for that day. Not a great solution, but one that highlighted a fundamental shortcoming of the architecture of that sort of analytics solution.
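[Editor's note: to make the high cardinality problem Christine describes concrete, here is a minimal illustrative sketch. This is not Parse's actual code; the paths, parameter names, and counts are invented. It shows how keying a metric series on every distinct path-plus-parameters combination multiplies a few parameters into a large number of series.]

```python
from itertools import product

def series_key(path, params):
    """Build one metric-series key per distinct path + query-parameter combo."""
    qs = "&".join(f"{k}={v}" for k, v in sorted(params.items()))
    return f"{path}?{qs}" if qs else path

def count_series(requests):
    """Count how many distinct series a batch of requests would create."""
    return len({series_key(path, params) for path, params in requests})

# Just three parameters with five possible values each already multiply
# into 5 * 5 * 5 = 125 distinct series -- the cardinality explosion.
values = ["a", "b", "c", "d", "e"]
requests = [
    ("/search", {"q": q, "page": page, "lang": lang})
    for q, page, lang in product(values, values, values)
]
print(count_series(requests))  # → 125
```

With real apps sending arbitrary parameter values (user IDs, session tokens), the number of distinct series is effectively unbounded, which is what a per-series storage layout struggles with.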
And I think what made that so formative for me were two things. One, as a software engineer trying to use these ops tools, these monitoring tools, to understand the problem – it was worse than the graphs speaking Russian to me. It was the graphs speaking Russian and then the graphs making no sense even if I spoke Russian. And secondly, I think it really highlighted that I had architected the analytics solution saying, "Well, we should be able to trust our users to send good data in if we write the documentation well enough," which now –
Brian Dawson: Classic.
Christine Yen: Yeah, classic. Right?
And at Honeycomb, something that, you know, we're able to come in and say is, "We cannot trust people to send 'good data,' and they shouldn't have to." They shouldn't have to worry about it. The developers, whoever's doing the implementation, should capture whatever they think might be helpful, and ideally, the system should be architected in a way that nothing's going to melt down if the end user starts sending unexpected data. And I like that story because it really highlights what happens when teams have that sharp boundary between dev and ops, and when the tools just magnify that difference and, you know, think that they're only building for one audience or another.
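[Editor's note: the design principle Christine describes – accept whatever data clients send, but architect so nothing melts down – can be sketched as a bounded ingest step. This is purely illustrative, not Honeycomb's actual implementation; the field cap and truncation limit are invented numbers.]

```python
MAX_FIELDS = 100      # cap fields per event so one client can't explode storage
MAX_VALUE_LEN = 1024  # truncate oversized values instead of rejecting the event

def sanitize_event(raw):
    """Accept arbitrary, untrusted event fields, bounded so the backend survives."""
    event = {}
    for i, (key, value) in enumerate(raw.items()):
        if i >= MAX_FIELDS:
            break  # silently drop the overflow rather than failing the ingest
        event[str(key)] = str(value)[:MAX_VALUE_LEN]
    return event

print(sanitize_event({"path": "/search", "n": 42}))
```

The point of the design is that the client never sees an error for sending "bad" data; the system degrades gracefully (dropping or truncating) instead of letting unexpected input take down shared infrastructure.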
Brian Dawson: Yeah, thank you for sharing that. That was really insightful, and I'll add to what you started to say about a sharp boundary.
The other lesson – about "well, they should only send clean data if they read the manual and know how to do it" – I would argue is also about removing a sharp boundary between you as an engineer and the customer, right? Having a bit of empathy and understanding and moving a little more into their space. So that's powerful. Thank you for sharing that, and ultimately, that was at least one of the first moments that led to the role that you're in today.
Christine Yen: Definitely.
Brian Dawson: Awesome. So our other standard component, which I'm really interested to hear what you share here, is where we ask guests to share a resource, a book, podcast, blog, but a resource that you absolutely recommend our audience read, digest, follow.
Christine Yen: Yeah. If I may, I'd like to actually recommend two.
Brian Dawson: Okay. Yeah.
Christine Yen: There are two books, and this is coming from someone who reads some nonfiction but doesn't read a ton, and both of these were books that I didn't want to end. I didn't want these nonfiction books to end, and I was able to sit and chew on each successive chapter. One of them is a book called The Most Human Human by Brian Christian, and it tells the story of a competition where teams were trying to build chatbots. The competition is for whichever team can build the bot that most successfully passes the Turing Test, but the author also participates in this contest, because not only is there an award for the team that writes the best bot, there's an award for the human who is able to most successfully pass as human.
And it's a book that explores the question of what is human intelligence, what does it look like when software tries to mimic human intelligence, what are the things that humans are really good at that are most difficult to automate, and what does that mean for how we educate and what sort of intelligence we tend to value in our society. Really, really great, and I think especially relevant in our industry as everyone is reaching for that AI button to add to their product. The second book is one that I think is also relevant to folks interested in this world, but especially given the events of this summer. This book is by Ruha Benjamin, called Race After Technology, and it explores all of the ways that technologists are unaware of the biases that are coded into technology while simultaneously trying to fall back on technology to push against human bias. It's just incredibly relevant. It was published very recently, so it's got a lot of examples from technologies that you and I use every day, and it's really worth a read for anyone thinking about how software intentionally or unintentionally affects the broader world of humans.
Brian Dawson: I'm so intrigued by those recommendations and as you kind of called out, extremely timely, both of them. So I'm underlining and circling and bolding and I'll probably pull them down soon.
And then just for the audience, so that was The Most Human Human and Christine, who is that by again?
Christine Yen: Brian Christian.
Brian Dawson: Ryan or Brian?
Christine Yen: Brian with a B.
Brian Dawson: Okay, Brian, spelled like mine. And then Race After Technology, and that is by?
Christine Yen: Ruha, R-U-H-A Benjamin.
Brian Dawson: Awesome. Thank you for those references. Thank you for sharing that. I'm excited to actually give them a read. Before we head out, I'm curious, do you have any final thoughts to share with our audience? Absolutely share the final thoughts you think are most pertinent, but I'm curious whether you have thoughts on the journey you've taken, from a practitioner, an engineer on the left-hand side, as we metaphorically call it, to partnering with somebody from the right-hand side and working to reduce that boundary between the two.
You know, any final thoughts generally, but I'm particularly interested if you have any final thoughts or words for those practitioners, those developers and ops engineers that need to find a way to work together better?
Christine Yen: That is quite a compound question.
Brian Dawson: Yes, I have a problem with that. You're lucky if you just get two questions in one. Usually it's like three or four. I can simplify it.
Christine Yen: Sure.
Brian Dawson: Let's try again. Do you have any final thoughts for our listeners?
Christine Yen: Yeah. I think I'll key off of the last piece of the compound question I remember. When we started Honeycomb, we very much felt like there was this weird parallel, speaking of boundaries, between what we were trying to do and what folks have been doing for a while on the business intelligence side, this act of starting with a high-level question, you know, "Why is my service down," or, "Why is my service slow," and peeling it apart and exploring in a dynamic way. All of those actions are things that folks with business intelligence tools have been doing for years. And yet, as an industry, we've developed these silos, and yes, there are technical reasons and different requirements for the experience, but we have really found value in trying to cross those boundaries and seeing what we can learn from folks who are used to that as their primary question-answering tool, and recognizing that there's so much great thought that gets put into products all across the ecosystem. My parting words for those practitioners, whether dev or ops, are: look up. Look around. See what other good ideas are out there and don't get overly focused on your area of expertise, because chances are, someone has solved the same problem you're trying to solve under a different set of constraints in a slightly different world, and even with those differences, there's probably something you can borrow and port over to your world to make it a little bit better.
Brian Dawson: Just like the resource recommendations, that was another unexpected but really key and powerful response. Christine, I thank you for those responses. I thank you for taking the time to spend with us here and share your experiences. As I said earlier, I have had and expect to have other touchpoints with your space, with your product and with your company, and so I look forward to watching Honeycomb, yourself, Charity, and the other members of the team on their journey.
Christine Yen: Thanks so much for having me.
Brian Dawson: Thank you.