In this edition of the BORNSECURE Security Influencers podcast we talk to Larry Maccherone, one of the most respected forces pushing for the adoption of agile methodologies and security automation.

During the conversation we discuss topics such as:
- How Larry’s background as a developer influenced his thinking on application security, including the equivalence of a security vulnerability to a software defect, and the importance of trusting your developers to write secure code.
- The emergence of more advanced vulnerability assessment tools to validate the security of “the code you write” (IAST) and “the code you import” (SCA), and how data flow analysis is superior to legacy static analysis (SAST) approaches.
- The importance of not just finding security vulnerabilities, but also providing actual remediation advice to developers so they can fix the problem in no time.
- Why Larry believes that the pull request is the ideal place to run security tests, and how that drives developer adoption of beneficial security practices prior to launching the CI/CD pipeline.
Check out a video of the podcast below, or feel free to download it using your favorite podcast platform. We are also including the transcript below if you prefer to read.
Google Podcasts and Apple Podcasts coming soon!
Daniel (00:04):
Larry, Hello. Thank you so much for joining today. I’m very excited about this conversation.
Larry (00:09):
Me too. It’s my pleasure to be here. Thanks for having me, Daniel.
Daniel (00:12):
I think most people know you as an agile and security expert, but your background is really as a developer and entrepreneur, right?
Larry (00:23):
Correct. I’ve been a serial entrepreneur. I started my first business while I was still an undergrad, and I’ve continued to write code essentially my entire career, to this day. I’m the primary author of a dozen or so open-source projects, one of which gets almost a million downloads a month. So there are lots of contributors and users of that particular library of mine.
Daniel (00:49):
How was the transition from that background as a developer and entrepreneur into agile and security?
Larry (00:56):
I was part of launching a movement called Build Security In going back almost two decades now. And I did that because I thought that development teams should own the security of their products even two decades ago, but that idea was way too early and it didn’t take off. We had very little adoption of the framework. It got lots of good funding from DHS and NSA and folks, but I realized that the thing that was missing the most there, in terms of us getting adoption, was in how to actually “hook the brains of the developers” and get them to want to do this. And the agile movement was just taking off at the time, and they were being very successful at fundamentally changing the way software development was done.
So I joined the agile movement, essentially, and started spending all my time there. I was the director of analytics and research for the analytics product line at Rally Software, an agile lifecycle management tool vendor. So I’ve been all over the map, but the common theme is helping software engineers do a better job over time, with better approaches and metrics for feedback, for instance.
Daniel (02:20):
So what I’m hearing is that in your mind, security is just one aspect of the quality of software, and agile was a bridge into building fully secure software, right?
Larry (02:35):
I have always thought of it in terms of quality. In fact, in my first spin-out company from my original factory-floor automation company, GE Power Generation was our primary customer. Our software was involved in 60% of the world’s power generation. If we got hacked, someone could bring down the world’s power grid.
So we developed a framework for developing essentially defect-free software — to us, even back then, a vulnerability was just a defect. We spun that out as Qualtrax, going after that quality vibe. That’s what got picked up by Carnegie Mellon and CyLab, and why I ended up there next.
I think the agile movement was interesting because it expanded the scope of responsibility for the development team, which prior to the agile movement was sort of used to throwing the product over the wall to the QA group, and having requirements thrown over the wall to them from the product group.
It expanded the scope of the development team to actually include those two functions. It broke down the silos. At the beginning of the agile movement, I would get up on stage and talk about what was coming, and the QA folks would approach me after my talk and say, “Oh, that’s all great, but it’ll never work at my company, because the developers can’t be trusted with the quality of the products. They’ll just put crap out there.”
And I hear the same thing now from security people; they just don’t quite have the right mindset. They don’t have a framework for safely transitioning ownership of the quality of the products from themselves — and the truth now is that they can’t keep up — over to the development team, which is shipping multiple times a day in a lot of cases. I worked on creating that framework at Comcast, and we were very successful there. Now I’m trying to spread that concept throughout the whole industry.
Daniel (04:35):
I love two things that you just said. One is the fact that you consider a vulnerability just as a defect: it’s not a different class of problem, it’s just a defect in the software and the same way you fix a problem in the business logic, you also have to fix a security problem. And the other thing I liked that you said is the trust in developers. That you don’t have to insert these gateways or these validation processes, that you actually have to trust them to write secure code and empower them with the right tools, methodologies, and culture.
You also briefly mentioned Comcast at the end, and I want to spend some time talking about that experience in detail. I think I read that you were coaching over 500 teams and 10,000 developers — is that right?
Larry (05:26):
We did not have 10,000 developers on the teams we were coaching, but there were 10,000 developers total at Comcast. So that was our total target audience, but we were at 60-70% penetration into that group. And there were 600 or so development teams that were sort of in that target audience. We were approaching 500 teams onboarded to our program by the time I left. So we had to scale the program.
We had staff employed by the cybersecurity group at Comcast, but those folks were used to doing things like traditional vulnerability management and bolt-on security. We had to get some of their budget and some of their headcount into my team for two roles. One was coaches — these are very similar to agile coaches; scrum masters make great secure-development-lifecycle coaches, as we called them at Comcast, because they’re used to working with software engineers and getting them to adopt new practices, which is exactly what this transformation is all about. So that was one role we scaled with. And we had to create software that these coaches could use to do a gap analysis with the development team, help the team make a plan for adopting the practices, and then track metrics on how often they get a new workshop and how successful the team actually is in adopting those practices. We built those tools to scale the program, and we built collateral and materials that the coaches used to deliver it.
And then the other role was pipeline engineering. We had to hire, essentially, CI/CD engineers to help development teams integrate these tools into their pipelines, and to build recipes and plugins for CI/CD platforms that fit with our mental model. Those two roles essentially never existed before at Comcast — or, I think, in most enterprises — and we had to create them from whole cloth.
Daniel (07:40):
So these two roles, the coaching and the CI/CD, were they just carved out of the security team or carved out of DevOps?
Larry (07:53):
You just hit the nail on the head. It’s not a good fit for either of those. I mean, in a perfect world, DevOps includes security, right? The Phoenix Project, which launched the DevOps movement, starts with the security guy. If you’re doing DevOps right, you’re actually doing DevSecOps. But the budgets are not really carved up that way: engineering has its own budget, and they like to spend very little money on tools; security has its own budget, and they like to spend a lot of money on tools. And we were sort of stuck in that no-man’s-land.
My budget came from the security organization, but I have to admit at Comcast, I was more well-liked by the engineering organization than I was by some groups within the security organization, because I was disrupting their whole way of thinking. And I was making it align better with an engineer’s way of thinking.
Daniel (08:57):
I think that respect comes out of your background as a developer — developers really appreciate that. But going back to those groups you were mentioning: what was the ratio of developers to security specialists and coaches?
Larry (09:19):
I really had no security specialists. If you had those acronyms after your name on your resume or your LinkedIn page, you went to the bottom of the pile for me. I didn’t require any of that for either of these two roles. I required you to have engineering prowess and credibility to talk to engineers, which meant you had to be writing code the day I hired you — and I kept you writing code even after I hired you. We were toolsmiths; we weren’t just configuring somebody else’s tool, we were writing tools that actually made this possible. And I taught those folks all the security they needed to know in order to get teams adopting the security-related practices.
Same thing with the coaches: I hired people with no security background, for the most part — really more agile transformation coaches and scrum masters, those kinds of people — and I taught them all the security they needed to know. The whole collateral package, plus the consultingware we built to enable the coaches, had all the content baked into it, so they didn’t really need to know a lot when they got started.
Daniel (10:46):
In terms of numbers, how many coaches did you have per developer, or how many developers do you have per coach?
Larry (10:52):
You can do the math. We did a workshop every 90 days. The initial workshop involved the entire development team, all the product people, and all the folks who run their ops and CI/CD platforms — so we had big groups. And if you had external groups doing product or CI/CD, you had to have representatives from those in the room. That initial workshop ran 90 minutes to two hours.
Then we came back every 90 days, but we didn’t require everyone to be in the room for the 90-day resyncs. And the coaches worked with the teams between the workshops.
So you do the math. If you have 600 teams, and every team needs a visit every 90 days for somewhere between an hour and two hours, the coaches were doing two or three of these a day — I did as many as five or six a day. That’s not a sustainable pace. We had about 20 people doing coaching when I left Comcast a few weeks ago, including dedicated coaches funded directly out of the security budget. But then we federated the program — because it was all consultingware-in-a-box, we could get local security people in other parts of the org to essentially adopt our framework. That’s really how we got to the next level. We plateaued with the teams we were directly supporting until we federated the program, and then we reached the next plateau. So with about 20 coaches — dedicated ones doing two or three of these workshops a day, and federated ones doing maybe two or three a week — we could cover all 600 teams.
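As an editorial aside, the “do the math” above works out roughly like this. This is only a sketch: the even split of coaches and the count of working days per cycle are assumptions, not figures from the conversation.

```python
# Rough capacity check for the coaching model described above.
# Assumptions: the ~20 coaches split evenly between dedicated and federated,
# and a 90-day cycle contains about 65 working days.
teams = 600                      # teams needing one workshop per 90-day cycle
workdays_per_cycle = 65

dedicated_coaches = 10
federated_coaches = 10
dedicated_rate = 2.5             # workshops per day ("two or three a day")
federated_rate = 2.5 / 5         # workshops per day ("two or three a week")

capacity = workdays_per_cycle * (
    dedicated_coaches * dedicated_rate + federated_coaches * federated_rate
)
print(capacity >= teams)  # → True: ~1950 workshop slots comfortably cover 600 teams
```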
Daniel (12:48):
So you have this team of coaches that visits developers, empowers them, and helps them with culture and transformation. I’m sure tooling was also part of that empowerment. How did you see the AppSec tooling industry evolve over your time at Comcast? What did you see at the beginning, and what do you see now that you’ve left? How many years was it?
Larry (13:15):
It was five years.
Daniel (13:15):
How was that evolution over those five years?
Larry (13:15):
Well, it’s kind of interesting because of my background. I was in security for about a decade and then I left security and went to the agile and DevOps worlds. And so when I started at Comcast, that was my first time coming back to the security world.
When I was at Carnegie Mellon, I actually wrote a book for the NSA on how you evaluate SAST tools — I was the principal investigator on a project to essentially define how to evaluate them. So of course I had that SAST mentality, just like everyone in the security world does. Then I quickly learned that library usage was actually a bigger attack surface, so software composition analysis was a new tool category for me five years ago. I picked up on that very quickly.
Almost as quickly, a new tool category came out called IAST. Hdiv and Contrast — who I now work for — are both IAST vendors. IAST is great because of the biggest problem with SAST tools. Both categories of tools do what’s called source-sanitize-sink analysis: they’re looking for data that gets to a dangerous sink, like a SQL statement, from an untrusted source, like user input, without going through proper sanitization. The problem with SAST tools is that they have to build this data flow analysis statically, meaning they have to infer where the data goes based on what the source code says, which is very difficult to do accurately — especially when you have loops and if statements and all of that. IAST tools essentially cheat: they instrument the app at runtime and watch where the data actually flows. They can have perfect data flow analysis and eliminate all the false positives that SAST tools had.
As soon as I learned and understood that, I tried to steer my program to be more IAST-oriented than SAST. Now, the one downside of IAST tools, even to this day, is that you have to exercise the app with enough route coverage for the tool to build the data flow model. A lot of teams back at the beginning were not true DevOps teams — they were doing what I would call cargo-cult DevOps — and they didn’t have a test suite that would exercise their app, and manual testing usually isn’t robust enough for IAST to be very effective. So I had to stand up some SAST offerings, but I favored IAST and encouraged the developers toward it. Over those five years we got more and more teams to the more mature DevOps practices, which were more conducive to the IAST approach.
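The source-sanitize-sink idea Larry describes can be illustrated with a toy, hand-rolled taint tracker. This is purely illustrative — a real IAST agent instruments the running app and propagates taint automatically; every name below is made up for the sketch.

```python
# Toy source -> sanitize -> sink analysis. A real IAST agent would do the
# tagging and propagation automatically by instrumenting the running app.

class Tainted(str):
    """A string flagged as coming from an untrusted source."""

def user_input(value: str) -> Tainted:
    # SOURCE: anything from the user starts out tainted.
    return Tainted(value)

def sanitize(value: str) -> str:
    # SANITIZER: naive allow-list, keeps only alphanumerics (illustration only).
    return "".join(ch for ch in value if ch.isalnum())

def sql_sink(query: str) -> None:
    # SINK: tainted data arriving here is a finding.
    if isinstance(query, Tainted):
        raise ValueError("tainted data reached a SQL sink")

name = user_input("Robert'; DROP TABLE users;--")

try:
    # Taint would flow through the concatenation in a real agent;
    # here we re-wrap by hand to model that propagation.
    sql_sink(Tainted("SELECT * FROM users WHERE name = '" + name + "'"))
except ValueError as finding:
    print("finding:", finding)

# The sanitized path yields a plain str, so the sink accepts it.
sql_sink("SELECT * FROM users WHERE name = '" + sanitize(name) + "'")
```

A static tool has to infer those two paths from source text alone; instrumenting the running code is what lets IAST see which path the data actually took.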
Daniel (16:14):
You spoke very eloquently about IAST; any other product categories that you see up-and-coming in the AppSec space, helping along the culture and the process that you mentioned?
Larry (16:25):
Jeff Williams — the cofounder of OWASP and of Contrast — and I just did a talk yesterday. The timing of the broadcasts won’t line up, so it won’t be “yesterday” when this comes out, but we recorded a talk yesterday and we’re recording another today on the four dimensions of application security risk. The code you write is one risk dimension and the code you import is another — SCA for the code you import, and IAST or SAST for the code you write. For the products you buy, it’s more policy and compliance approaches to making sure those are secure, though you can also wrap those with a RASP solution, which Hdiv and Contrast have as well.
The new category I think is interesting — there are a couple of vendors leading the way, and Apiiro is one of those; I’m on their advisory board, so I think very highly of their particular solution — is really securing the environment that you build the software with. If you look at the SolarWinds situation, it was really that sort of developer toolchain attack. Some people like to refer to it as a supply chain attack, because their product ended up being bought by other people — so from the perspective of SolarWinds’ customers, it was those customers’ supply chain that was attacked — but from SolarWinds’ perspective, it was a developer toolchain attack. That’s the new category: not a lot of folks were targeting it before now, and I think it’s up-and-coming.
Daniel (18:31):
Very good. So what about frustrations? Dealing with vendors, I’m sure you get calls every week — we called you a couple of years ago ourselves. You have to face these conversations very often. What kind of frustrations do you have with vendors like us, and what advice would you give?
Larry (18:50):
That’s interesting. I mean, I’m a vendor now, so I’m going to have to be really careful about how I answer this question, but I’m going to try to answer it from the Comcast perspective, because there I got phone calls literally every day from folks trying to sell me their latest tool.
Three-quarters of the time the pitch was, “Did you read in the news about this big attack? Our tool finds that vulnerability and would have prevented it.” And I would always say, “You may have a great solution, and I might want to learn more about it, but the reality is that scanning and finding more vulnerabilities is actually net negative value.” When I say that, people go, “No — you want to know about them in order to resolve them,” but that’s the point: just the finding part is net negative.
I would argue that the folks at Equifax didn’t get fired because they got hacked; they got fired because there was a vulnerability they knew about six months before they got hacked. You have actually increased your liability when you know about a vulnerability but can’t get it resolved rapidly. So I really think the most important thing when you’re looking at a tool is how conducive the offering is to resolving the findings it presents and integrates as feedback. That’s the one metric that matters, so to speak, when I’m first considering a tool, and it was for us at Comcast. I am totally convinced of this, and I had great success convincing everybody else of it. Our whole program at Comcast was built around the idea I’m about to mention here: the ideal, the perfect place to integrate these tools is in the pull request.
Developers are essentially in love with the code they wrote this morning — it’s their new heartthrob. And they want nothing more than for their peers, their friends, to say that their new heartthrob is beautiful or handsome. The way they do that is to create a pull request, where somebody else has to say, “Yes, that’s good enough to get into the next higher-level branch.” And they will jump through all sorts of hoops — they will read, they will watch videos, they will learn whatever they have to — to get that wart removed from their new heartthrob before they show it to their friends. So if you plug into that pull request, and there’s a red check mark in that pull-request feedback cycle — whether it’s an integration test, a style checker, a unit test suite, or a security test suite — they’re going to work really hard to clear it.
And with the immediacy of that feedback, they’re highly motivated to fix it right there. The secret about this approach is that immediacy of feedback leads to learning. We saw this dramatically in the data at Comcast: with this tight feedback loop — an under-a-day median time-to-resolve cycle, which in the pull request it almost always is — they learned much more rapidly how not to write vulnerabilities in the future. So not only did the resolution curve catch up with the findings curve rapidly, but the findings curve leveled off tremendously: fewer vulnerabilities were being injected into the product over time. Vendors that don’t have that as part of their story always lost out with me — they never got past square one.
Daniel (22:52):
I think that instant gratification — taking advantage of that moment of the pull request — is very interesting. When you think about the path to a fix, are you thinking about remediation advice, like showing a video or some static material, or more about actually suggesting changes to the code? Have you seen any tool do either or both of those approaches efficiently?
Larry (23:24):
For SCA, you know, Dependabot was sort of the first to automatically submit a pull request against my open-source projects without me asking, because one of my projects had a dependency with a publicly reported vulnerability against it. For SCA, that works great: the pull request automatically runs my integration test suite, so I know whether or not upgrading to the latest version broke my product. It was simply clicking a single button, the change was merged, and I was done. You can do that for SCA very easily. It’s much harder to do for the code you write, and I’ve never seen anyone successfully pull that off. I know of at least one effort underway — from a vendor you wouldn’t expect it to come from, but who is in the security space — and who knows whether that will work out in the end. But I’m not sure how valuable that really is. Feeding back to the developer rapidly is what counts: they don’t want those red check marks, they want clean stuff, so if you feed back what you found, they’ll figure it out. And I think security people are actually too reluctant to give feedback directly to the development team. They feel like they have to triage it and massage it and prioritize it before they bother them, but that comes at the expense of immediacy, and the immediacy of the feedback is so much more valuable than making it slightly more accurate.
Plus, it’s a lot more expensive to have armies of people triaging vulnerabilities before you give them to the development team. The pull request comes back directly to the developer, whereas feedback from a security group arrives three weeks later. By then, the developer already has a new heartthrob — the code they wrote that morning, not the code from three weeks ago. And that later feedback really goes to the product people; it gets into the backlog queue, so it might take weeks or months before it gets back to the development team, and that developer could be gone by then. You lose the immediacy for two reasons: A, the work takes time for the security group to do, and B, it goes back to an earlier stage in the cycle. The feedback loop from a pull request goes right to the developer — it’s very tight feedback.
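The Dependabot-style SCA mechanic Larry describes is simple enough to sketch: compare pinned dependency versions against an advisory feed and propose the fixed version. All package names and advisory data below are hypothetical.

```python
# Toy SCA check: flag pinned dependencies with known advisories and
# propose the fixed version, Dependabot-style. All data here is made up.

pinned = {
    "example-lib": (4, 17, 11),   # hypothetical vulnerable pin
    "other-lib": (1, 2, 0),       # hypothetical clean pin
}

# Hypothetical advisory feed: versions below `fixed_in` are affected.
advisories = {
    "example-lib": {"id": "ADVISORY-0001", "fixed_in": (4, 17, 12)},
}

def upgrade_proposals(pinned, advisories):
    """Return (name, current, fixed) for every vulnerable pin."""
    proposals = []
    for name, version in sorted(pinned.items()):
        adv = advisories.get(name)
        if adv and version < adv["fixed_in"]:   # element-wise tuple comparison
            proposals.append((name, version, adv["fixed_in"]))
    return proposals

for name, current, fixed in upgrade_proposals(pinned, advisories):
    # A real bot would open a pull request here, which in turn triggers CI.
    print(f"PR: bump {name} {current} -> {fixed}")
```

Opening the change as a pull request is what makes this workable: the PR automatically runs the integration suite, so “does the upgrade break my product?” is answered before anyone clicks merge.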
Daniel (26:06):
So thinking about that feedback: if we want to favor immediacy, if we want to favor that love the developer has for the code in that particular moment, why do it in the pull request? Why not do it in the IDE, as the developer types?
Larry (26:21):
Yeah, good point. I mean, when I started the program at Comcast, one of the big mistakes I made was thinking, you know, there’s no further left than the IDE, right? So do it in the IDE. The problem was — and we had a massive investment in this; we spent a million dollars on a tool geared towards IDE usage — we had a road show where we got literally 4,000 different developers into the room to hear the pitch for an hour about the IDE-integrated tool, and we got only 191 developers to actually sign up and start using it. And after a year of usage, it was literally only one team of about a dozen developers that was still consistently using the tool. The thing that was missing — and this might be different today, since this learning was from five years ago, but I still think it mostly holds —
Larry (27:16):
— was essentially the team’s social reinforcement. It’s that I have an expectation, there’s a requirement: my team has a definition of done, a quality standard that we have to meet at the end of every sprint. If that includes getting clean scans before you do your sprint demo, before you declare the sprint done, that reinforces the behavior. Once we shifted the focus away from IDE integration and into pull-request integration, we actually got more IDE usage, because then they wanted to check the code before it got into the pull request.
And now I actually see the pendulum swinging back the other way. The CI/CD tool of choice back then was really Jenkins — it was the dominant player — and it was fairly linear. Today, most CI/CD tools, including the newer versions of Jenkins, to be fair to them, spin up lots of containers in parallel. I actually shifted all of my development to a Chromebook for about a year — using it almost as a terminal — because I had an in-the-cloud CI/CD pipeline that would literally spin up a dozen or more test servers simultaneously to run all of my 20 or so test suites against my product before I shipped it. I couldn’t do that on my desktop anywhere near as fast as I could in the cloud. So I see the pendulum swinging back away from IDEs and towards this “in the pipeline” integration kind of thing. And I say “in the pipeline,” but I hedge a little bit on that phrase, because I think people picture a single CI/CD pipeline. If you integrate at the pull-request level, what happens is the pull request triggers a webhook to any tool listening on that webhook — that’s usually how the CI/CD pipeline gets kicked off. If you have most of the stuff running in your CI/CD tool, that’s great, but you can actually stand up a separate thing, maybe running on a completely different CI/CD platform, on the same webhook. It runs in parallel and reports back to the same place. That’s another reason why pull-request integration is actually superior to in-the-pipeline integration.
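The parallel-listener pattern Larry describes can be sketched as a handler for a pull-request webhook delivery. The payload shape loosely follows GitHub’s `pull_request` event, and the return value mimics what you would post to a commit-status API — treat both as assumptions; the scanner itself is a stub.

```python
# Sketch: a security scanner listening on the same PR webhook as the
# CI/CD pipeline, running in parallel and reporting its own status check.

def run_security_scan(head_sha):
    # Stub: a real IAST/SCA/SAST run would analyze the code at this commit.
    return []  # list of findings; empty means clean

def handle_webhook(event, payload):
    """React to a PR webhook delivery; return the status we'd report back."""
    if event != "pull_request":
        return None  # some other event on the hook; ignore it
    if payload.get("action") not in ("opened", "synchronize"):
        return None  # e.g. a label change; nothing new to scan
    sha = payload["pull_request"]["head"]["sha"]
    findings = run_security_scan(sha)
    return {
        "sha": sha,
        "state": "success" if not findings else "failure",
        "context": "security/scan",   # shows up as its own check on the PR
        "description": f"{len(findings)} finding(s)",
    }

status = handle_webhook(
    "pull_request",
    {"action": "opened", "pull_request": {"head": {"sha": "abc123"}}},
)
print(status["state"])  # → success (the stub scanner found nothing)
```

Because each listener reports under its own `context`, the security check lands next to the unit-test and style checks on the same pull request, without living inside the main pipeline.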
Daniel (29:55):
I love that view of having 20 different analysis tools running in parallel — and obviously you can’t do that on your machine, right? I was just coming from the assumption that you’re using one single tool, so why wait until the pull request? But in your scenario it makes a lot of sense, and I think that’s very insightful.
You alluded earlier to the role of data and data analysis in making decisions and understanding the impact of your decisions. How do you think about data? You’ve also written papers for RSA and similar venues. Can you walk us through that data-driven thinking you applied to agile and security?
Larry (30:39):
Yeah. One of the things I’m most proud of in my career is that I published the largest-ever study correlating agile and lean practices with the performance of the development team — productivity, predictability, quality (including security), and responsiveness, the four dimensions. I measured those outcomes, and there were four others I wasn’t able to measure in that initial round of studies but which I later published on. There were a number of things that were really wonderful about that work. First of all, having quantitative evidence to challenge the folklore and the consulting advice that’s out there — or to validate it, confirm it.
I have a talk that I open with “my gal, the Truth — she ain’t always kind.” I do it as a shtick with a Dick Tracy hat on, and the message there is that with data, you can actually make the case — it’s not just your opinion.
It’s in the data. In those write-ups I actually disagreed with some of the agile recommendations — sometimes wildly. Co-location was the biggest one I published on: the agile movement basically said you had to be co-located, and I showed that was not the case — teams that were distributed but in the same time zone had the best performance, 25% better productivity than teams that were actually in the same room. So that sort of debunked that. And I did a similar thing — you hit on this earlier and I forgot to feed it back to you here.
At RSA in 2020 — the last in-person conference we’d been to pre-COVID — I published “The Impact of DevSecOps, Quantified,” and the big surprising thing there is that most security training is worthless, if not net negative. We were investing a lot of energy, time, and money on it at Comcast, and it was not effective. The one exception was immediate, in-context, just-in-time secure-coding training. If your tool reports that you have a SQL injection vulnerability and you click a button and it shows you a five-minute video right then — it’s the YouTube age, right — that developer learns how not to write SQL injection vulnerabilities right then and there. That was highly effective, and everything else essentially was not.
For the tool vendors out there just selling standalone security training: it doesn’t work, in my opinion, and I have the data to back that up. Secure Code Warrior is a partner of Contrast, and I did a podcast with Matias Madou from Secure Code Warrior not too long ago. They are in that just-in-time space — that’s their big advantage — and I think that’s really the place where training works.
Daniel (34:18):
No, I love what you’re saying. We actually take that approach at Hdiv: when we find a problem, we give some basic advice videos on how to fix it, and some of them come from Secure Code Warrior — we also work with them. So the data that you gathered: were those automatic metrics generated from results, surveys, or a combination of both? How did you gather the input?
Larry (34:52):
There’s a book called How to Measure Anything — I’m trying to remember the author now; I know him but can’t think of his name [it’s Douglas W. Hubbard — Ed.]. It’s basically about how you convert qualitative insight into quantitative insight. The very first thing I did at Comcast was gather that qualitative insight. It wasn’t a survey, though, because what I’ve found with most security surveys is that, way too often, you get the answer the people you’re surveying think you want to hear, which makes the whole thing useless. So I never did surveys or assessments of that sort; I used the coaching model. Essentially, you sat down with the development team, started with a list of just the six most important practices, and asked the team: how would you rate yourselves on adopting this practice?
And it’s a conversation, and it’s for their benefit — they’re doing a gap analysis on themselves so they can decide what to improve next. So they’re much more open about how well they’re actually doing; they don’t claim to be perfect, because the next question is, okay, what are you going to improve next? And of course they want the lowest-hanging fruit there, so they’re inclined to answer accurately. Also, teams frequently don’t understand the terminology in the questions security people ask — they’re like, “that question makes an assumption that doesn’t apply to serverless development, so how do I answer it?” With a survey there’s no chance for the development team to get clarity on how best to answer; with the coaching approach, you get that clarity. So yes, I had qualitative insight, but we were constantly aggregating it into quantitative insights.
And then I gathered data from the tools and correlated the two. That was my talk last year at RSA: essentially, the teams that adopt these practices have fewer downstream findings — in things like pen testing, incidents, and Qualys-style scanning — roughly 85% lower risk associated with adopting these DevSecOps practices earlier. More and more now, you can find rich insights by correlating the automated, purely quantitative information from multiple tools that touch different points in the lifecycle. I haven’t published anything there yet, but Contrast has published a lot, and I’m going to become more a part of that as time goes on.
Daniel (37:48):
Yeah. And I’m not 100% sure about this, but I suspect you also wrote a thin layer of code to integrate these insights, data, and metrics, so it all came straight to you — is that right? You built internal tools to consolidate all this?
Larry (38:08):
Yeah, one of the big products I built as part of the tooling at Comcast was essentially an engine to gather all the data into one spot, so we could do research and feed it back to the team in a single view. Is that what you mean?
Daniel (38:24):
I think I read somewhere that you actually wrote tooling and piping software to connect all these different elements and elevate the visibility of each of those tools individually, right? I think it’s very cool to get hands-on and write your own piping when you don’t find a good solution in the marketplace.
Larry (38:47):
Well, keep in mind, in order to employ pipeline engineers — and to keep them happy in that job and their skills sharp — they have to be writing code. So you almost have to make up projects.
We never really had to do that, but I think most security people will think of it that way: you have to make up projects, you have to decide to build instead of buy this time around, because you want to give them relevant experience. And it’s very hard to deliver a credible message to development teams from a security group of folks who have never written code, or aren’t writing code right now, or don’t have modern DevOps practices and modern pipelines. You have to dogfood all this stuff yourself before you have the credibility to convince them to do it themselves.
Daniel (39:43):
That’s a very strong message. And we fully subscribe to that. We always like to say that we were developers before turning into application security experts.
So I think we could talk for a long time, but I want to be respectful of your time. I know you are in a new project right now, so you probably have a lot of things to do. So that’s all from my side. I just wanted to thank you for taking the time to sit down with me, and I hope we talk again soon. And best of luck in your new project.
Larry (40:15):
I think we’re on this journey together and I really appreciate you having me on Daniel and look forward to working with you more.