mabl’s Dan Belcher, Izzy Azeri on Using AI to Speed Software Delivery

Dan Belcher and Izzy Azeri, co-founders of mabl, join host Andre Pino to discuss how artificial intelligence and machine learning address a key challenge for DevOps teams: speeding up the testing process. AI revolutionizes - and accelerates - the way applications are tested and ultimately delivered to production.

Andre Pino: In today’s episode, I’m joined by Dan Belcher and Izzy Azeri. Both are co-founders of mabl. Welcome, gentlemen.

Izzy Azeri: Thank you.

Dan Belcher: It’s great to be here.

Andre: So I think that the use of artificial intelligence in application development and, specifically, in continuous delivery is a topic of real interest for me and for our audience. And I know that’s something near and dear to your hearts. Maybe you can tell us a little about what you guys are doing and how it relates to the DevOps world out there.

Dan: Sure, sure. Thank you for having us, by the way. So what mabl is really about is this: companies that are moving to continuous delivery, or are already practicing it, have been struggling for some time to maintain the same velocity in testing that they have in development. What we saw was a way of bringing artificial intelligence into that equation, to allow testing to keep up with the pace of development, and ultimately to let you achieve continuous delivery with a pipeline that has really high throughput as a software team.


Andre: That’s great. And so what does it mean when you say, “apply artificial intelligence to the world of continuous testing”? What exactly do you mean by that? How does that work?

Izzy: Sure. So we think that, much like a human thinks and works, we should be able to build machines that do the same thing. So when we think about artificial intelligence as applied to testing, we’re automating some of the manual tasks that testers do today, in order to save time in the testing cycle. One of those tasks is maintaining the tests, because elements on the front end change so frequently in a continuous delivery model that the tests constantly need to be updated, and we believe artificial intelligence can help with that. And secondly, we believe we can help testers with regressions, by automatically identifying bugs or regressions that have happened in their application. So instead of testers manually sifting through test results, images, screenshots and log files, machines can do that and help provide data that suggests why an application or a test changed or failed.
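
To make the test-maintenance idea concrete, here is a minimal sketch, not mabl’s actual implementation, of how an auto-healing test might re-identify an element after the front end changes: score candidate elements in the new page against the attributes recorded when the test was created, and pick the closest match. The attribute names, weights and threshold are illustrative assumptions.

```python
# Illustrative sketch of "self-healing" element matching; not mabl's actual algorithm.
# A recorded element is described by the attributes captured at test-creation time;
# when the page changes, score every candidate element and pick the best match.

def attribute_similarity(recorded: dict, candidate: dict) -> float:
    """Fraction of recorded attributes that the candidate still matches."""
    if not recorded:
        return 0.0
    matches = sum(1 for key, value in recorded.items() if candidate.get(key) == value)
    return matches / len(recorded)

def find_best_match(recorded: dict, candidates: list[dict], threshold: float = 0.6):
    """Return the candidate most similar to the recorded element, or None if nothing is close."""
    scored = [(attribute_similarity(recorded, c), c) for c in candidates]
    score, best = max(scored, key=lambda pair: pair[0])
    return best if score >= threshold else None

# Example: the button's id changed between releases, but other attributes still identify it.
recorded_button = {"tag": "button", "id": "submit", "text": "Sign up", "class": "btn-primary"}
new_dom = [
    {"tag": "a", "id": "nav-home", "text": "Home", "class": "nav-link"},
    {"tag": "button", "id": "signup-submit", "text": "Sign up", "class": "btn-primary"},
]
print(find_best_match(recorded_button, new_dom))
```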

Andre: So does that mean that you’re using machine learning to actually devise the tests?

Dan: Yeah, a little less so in terms of devising the tests. We think it’s up to the software team to describe, “Here’s how my application should work.” So there’s an important role for QA engineers to play in defining what should be tested. But once those tests are defined, we think artificial intelligence can really offload a lot of the burden around test maintenance, as you mentioned, and regression detection. I think it’s still an unsolved problem in AI to be able to guess what the true functionality of an application should be.

Andre: And so does it also help in improving test coverage?

Dan: Yeah, it certainly can. One of the places where machine learning can play a part there is by looking at real user activity and real user traffic, in terms of how people are using an app, then comparing that to the functionality the test systems are actually exercising, and suggesting to the software team areas that could use some more testing.

Andre: Wow, that’s awesome. So you’re actually using the technology to monitor the application in production and to help identify areas where testing might be improved.

Dan: Clearly, that’s an area where we see a lot of value. We haven’t made that specifically available in mabl yet; it’s an area of research for us today.
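
As a rough illustration of that idea, which, as Dan notes, is still research for mabl, comparing the pages real users hit in production with the pages the test suite exercises can surface untested, high-traffic areas. The paths and counts below are hypothetical.

```python
# Hypothetical sketch: rank pages seen in production traffic that no test currently exercises.
from collections import Counter

# Page paths observed in production logs (with request counts) and paths covered by tests.
production_traffic = Counter({"/checkout": 5200, "/search": 3100, "/account/settings": 800, "/help": 150})
tested_paths = {"/search", "/help"}

# Untested pages, ordered by how heavily real users rely on them.
coverage_gaps = [
    (path, hits) for path, hits in production_traffic.most_common()
    if path not in tested_paths
]

for path, hits in coverage_gaps:
    print(f"Untested but heavily used: {path} ({hits} requests)")
```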

Andre: No, I think it makes a lot of sense. And so what are some of the key challenges that you’re focused on today?

Izzy: So, today we’ve developed a system that you can point at an application: you create tests in mabl, and mabl will run those tests, maintain them and update them over time. One of the things we’re focusing on today is visual change detection. Applications are changing frequently, and there are parts of an application that you care about, like menu bars, sign-up bars, sign-out links or e-commerce checkout flows. There are also areas of an application that you probably don’t care about changing. For example, if it’s an e-commerce site, the ads that rotate; or if it’s a brokerage application, the stock chart changes all of the time. And so one of the key challenges we’re working on right now is being able to identify dynamic areas of a page versus static areas, and only telling you when the static areas change and exactly what changed in them. Many users have told us this is a very manual thing they do today, and we’re trying to make mabl smart enough to do automatically what’s done manually now.
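
A minimal sketch of the dynamic-versus-static idea: if you keep several baseline screenshots of a page, regions whose pixels vary a lot across baselines (rotating ads, stock charts) can be masked out, and only changes in the stable regions get flagged. The example below uses plain NumPy arrays in place of real screenshots and arbitrary thresholds; it is an illustration, not mabl’s model.

```python
# Illustrative sketch of visual change detection with dynamic-region masking (not mabl's model).
import numpy as np

# Pretend screenshots: three baseline grayscale captures of an 8x8 "page", plus a new capture.
baselines = np.stack([np.full((8, 8), 100.0) for _ in range(3)])
baselines[0, 0:2, :] = 0.0      # the top two rows behave like a rotating ad:
baselines[1, 0:2, :] = 128.0    # they look different in every baseline capture
baselines[2, 0:2, :] = 255.0

new_capture = np.full((8, 8), 100.0)
new_capture[0:2, :] = 60.0      # the "ad" area changed again: expected noise
new_capture[5, 3] = 10.0        # a stable region changed: a real difference

# Regions with high variance across baselines are treated as dynamic and ignored.
dynamic_mask = baselines.std(axis=0) > 20.0

# Flag only differences that fall in static regions of the page.
diff = np.abs(new_capture - baselines.mean(axis=0)) > 15.0
meaningful_change = diff & ~dynamic_mask

print("Changed static pixels:", np.argwhere(meaningful_change))  # reports only [5, 3]
```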

Andre: That’s awesome. And so from a machine learning standpoint, how does one get started? In other words, is your technology something that people put in place and it just runs and provides feedback? Or what kind of guidance and direction do you have to give to the technology?

Dan: Yeah, great question. So it’s really easy to get started with mabl; that was one of the core tenets when we designed the service. You can go to mabl.com today, sign up and point mabl at your app or any website, and she’ll start to crawl the site and look for potential issues like broken links, JavaScript errors and what have you. But importantly, as she’s doing that, she’s capturing lots of data about the typical state of the application: a screenshot of every page, the HTML of every page, timing information and so forth. We use that data to start to create machine learning models of what typical behavior in the application is. Then the next step, as a member of the software team or a QA engineer, is to install a Chrome extension and use the application as your end users would. mabl follows along and creates tests based on your training. We then start to execute those tests, which creates more models and more confidence in the models. And finally we start to predict what the behavior should be, and we look for deltas between what we expect the behavior to be and what we actually see.
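
To illustrate the “model typical behavior, then look for deltas” step, here is a toy sketch: build a baseline from the page-load timings collected on earlier runs and flag a new run that falls well outside it. The three-standard-deviation threshold and the sample numbers are assumptions for illustration, not mabl’s actual models.

```python
# Toy sketch of "learn typical behavior, flag deltas" using page-load timings (illustrative only).
from statistics import mean, stdev

# Load times (in ms) captured for one page across previous test runs.
baseline_runs_ms = [820, 790, 845, 810, 805, 830, 815]

def is_anomalous(new_value_ms: float, history: list[float], sigmas: float = 3.0) -> bool:
    """Flag the new measurement if it deviates from the historical mean by more than `sigmas` std devs."""
    mu, sd = mean(history), stdev(history)
    return abs(new_value_ms - mu) > sigmas * sd

print(is_anomalous(812, baseline_runs_ms))   # False: consistent with history
print(is_anomalous(1900, baseline_runs_ms))  # True: a delta worth surfacing to the team
```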

Andre: Wow. Yeah, that’s pretty interesting. And so what’s the feedback that you’re getting so far?

Izzy: Yeah, Dan and I talked about this the other day with some folks, and I’d say, you know, we started the company a year ago, and there were two big things we didn’t realize. One: wow, building an AI-based testing service is pretty difficult, more difficult than we originally thought. But more importantly, so many companies are adopting DevOps principles and continuous delivery pipelines, and there’s been such a lack of innovation on the testing side compared to the rest of a developer’s workflow, that demand seems to be really significant for helping testers and bringing them to the point where DevOps teams and software teams generally are today.

Dan: Yeah, and I guess I’d add as part of that – teams are looking to pull mabl deeper into their kind of core way of building and shipping software. And so the first version of mabl kind of lived off to the side; it was a tool for QA. And we’ve been pulled by our users to integrate much more deeply with tools like Jenkins so that it’s seamless and very straightforward to plug mabl right into that pipeline in a very automated way.
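
As a sketch of what “plugging into the pipeline” can look like, a CI step can trigger a test run over an HTTP API, poll for the result, and fail the build if the run fails. The endpoint, token variable, and response fields below are hypothetical placeholders, not mabl’s actual API.

```python
# Hypothetical CI integration step: trigger a cloud test run and fail the build if it fails.
# The API endpoint, token, and response fields are illustrative placeholders, not mabl's real API.
import os
import sys
import time

import requests

API_URL = os.environ.get("TEST_API_URL", "https://api.example.com/test-runs")  # placeholder
API_TOKEN = os.environ.get("TEST_API_TOKEN", "")                               # placeholder

def run_tests(environment: str) -> bool:
    headers = {"Authorization": f"Bearer {API_TOKEN}"}
    # Kick off a test run against the environment this pipeline just deployed to.
    run = requests.post(API_URL, json={"environment": environment}, headers=headers).json()
    # Poll until the run reaches a terminal state.
    while True:
        status = requests.get(f"{API_URL}/{run['id']}", headers=headers).json()
        if status["state"] in ("passed", "failed"):
            return status["state"] == "passed"
        time.sleep(10)

if __name__ == "__main__":
    # A Jenkins pipeline stage could simply call: python run_tests.py staging
    ok = run_tests(sys.argv[1] if len(sys.argv) > 1 else "staging")
    sys.exit(0 if ok else 1)
```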

Andre: I think there’s a lot of potential here for this work and use of machine learning across the board in the DevOps category. Not only in continuous testing but probably in security and many other areas as well. So if you guys were to close your eyes and think about what the world of DevOps looks like in one or two years, what do you see? How do you see developers and testers working, a couple of years out?

Dan: I think the way that we measure success, and we’ve done this now with a couple of companies, is really how many hours a day we can save software teams from spending on activities that they don’t feel are going to set them, or their products, apart from a competitor’s, and how much we can maximize the amount of time software teams spend building value that’s unique for their users. And so AI in general, and mabl in particular, is really about reducing the number of hours that QA, developers and DevOps spend building and maintaining tests and wading through data from a testing perspective. For my part, looking out several years, I hope teams are saying that they spend very little time dealing with those activities.

Andre: Well gentlemen, thanks so much for joining us today. It was a great conversation.

Izzy: Thank you so much.

Dan: Yeah, our pleasure.

Andre Pino

Your host: Andre Pino, CloudBees (also sometimes seen incognito, as everyone’s favorite butler at Jenkins World!). André brings more than 20 years of experience in high technology marketing and communications to his role as vice president of marketing. He has experience in several enterprise software markets including application development tools, middleware, manufacturing and supply chain, enterprise search and software quality and testing tools.