Episode No: 108

Revolutionizing Code Quality: Overcoming Challenges in Software Development

Amartya Jha

CEO & Co-founder, CodeAnt

Listen & Subscribe

hyperengage.io/podcast


Episode Summary

In this episode of the Hyperengage podcast, hosts Taylor Kenerson and Adil Saleh interview Amartya Jha, CEO and co-founder of CodeAnt, a company that has developed an AI solution to detect and fix poor quality code. Jha shares the origin story of CodeAnt, born from his firsthand experience with the challenges of managing and rectifying bad code in software development, which led him to leave his job and create a solution with his co-founder. He details the process of identifying and addressing the widespread issue of code inefficiency, including the absence of proper documentation, duplication, and security vulnerabilities. Throughout the conversation, Jha explains how CodeAnt’s tool integrates directly into developers’ workflows, providing real-time feedback and fixes for coding issues. The discussion also covers CodeAnt’s journey with Y Combinator, highlighting the perseverance and adjustments required to secure acceptance into the program after initial rejections. Jha emphasizes the importance of building a product closely aligned with developers’ needs and how early customer engagement and feedback have been crucial to CodeAnt’s development and success.
Key Takeaways

0:19 - How Amartya discovered the problem of rewriting old code while leading infrastructure teams in India, where one rewrite took 1.5 years.
1:15 - The solution: co-founding CodeAnt to build a tool that actually fixes code issues.
3:44 - Conducting user interviews early on to identify the key pain points.
4:11 - Focusing on solving one major issue first to deliver early value.
5:51 - Securing two early Series A customers who paid to help build out CodeAnt.
6:19 - Learning how to pitch CodeAnt clearly at Y Combinator.
10:21 - Large customers now using CodeAnt's tools to automatically fix issues at scale.


Transcript

Taylor Kenerson (0:00:02) - Hello and welcome to the Hyperengage podcast. My name is Taylor. I'm here with my co-host Adil and with an amazing guest today, Amartya Jha. He's the co-founder of CodeAnt, which is an AI to detect and fix bad code. We are so excited to dive into the solution and the problem that he is solving. Thank you so much for joining us.

Amartya Jha (0:00:27) - Thank you. Thank you for having me here.

Taylor Kenerson (0:00:29) - So we have to dive in. CodeAnt is less than a year old. How did you discover that this problem existed, how did you hypothesize you were going to tackle it, and how are you actually solving it now?

Amartya Jha (0:00:48) - So I was leading platform and infrastructure teams back in India. One of the biggest pain points I saw was that the code we were writing was not good enough. We actually had to redo a lot of it. Why? Because the code we had written was not documented, was not proper. We had a lot of anti-patterns, a lot of duplication, a lot of security vulnerabilities. We needed a system that could actually go and fix this. We didn't have tools that actually fixed it; we had a lot of tools that just told you where the problem was. But as a developer, I always felt there needed to be a tool that could actually fix it, because it's repetitive grunt work. That's when I thought, okay, it's high time. And it came from a real pain point: in my previous job, I had to redo a piece of software that had been written over the previous eight years, and the next 1.5 years were just redoing that. It came from that pain, the feeling that nobody should ever be in that position. So I left my job, I met my co-founder, and we both shared the same pain. We were like, okay, let's build something that we want.

Taylor Kenerson (0:02:10) - And how did you navigate from there? You found this pain, you had the pain yourself, and you found a co-founder who resonated with it. So how did you go from zero to one? Did you just start coding right away? Did you develop a plan? What did the beginning look like?

Amartya Jha (0:02:29) - The very beginning was just understanding the pain point you have and finding a similar set of users who have the same problems. For us it was very easy, because we were building a dev tool and we had been developers all our lives. It was just reaching out to your friends and peers, asking them, bro, is this something that ever burdened you, and understanding the pain there. That was one. And the second was narrowing it: what is the one thing I can solve today that will give them 80% of the value and put them on the right path to making their code good? For me it was finding that one particular thing so we could start with it. So the first month, month and a half, was entirely user interviews. For me that was just talking to my friends and understanding the one pain point they also had.

Adil Saleh (0:03:27) - Amazing. I was pretty curious when I first looked at CodeAnt, because we also work with developers. I'm not a technical guy, but from a marketing standpoint, from a user standpoint, we work with developers and tell them, okay, this is something we want to build. We never know what the code quality is; we just care about the front end.
So how does it work for people like us? Because for a lot of folks who are not technical, it is so hard to decide whether to incorporate a tool like CodeAnt or not, and to see how it's going to make an impact at a very early stage. And then a second question I was more interested in: is it specific to certain languages, or does it cover all of them? I know there are complex problems with complex logic in backend coding. From a full-stack standpoint, how does it fit all sorts of engineering, all kinds of backend coding and infrastructure?

Amartya Jha (0:04:32) - Awesome. So there are two questions there. The first one is: if you're not from a technical background, how will you know the difference between a UI that looks good and a backend that actually performs? It's always a case of understanding how good your experience with a particular backend is. For example: how well does it load, what is the latency, what does the CPU and memory profiling look like, how much are you burning on your cloud cost? These things tell you how well a particular system is actually working under the hood. If you face an issue in any one of them, it tells you the underlying thing is not performing, and then you dig deeper and figure out where you can make changes. If the latency is pretty high, you find where in the call stack the time is going, you find that particular piece of code, and you refactor it. In larger organizations, finding and doing this is a continuous effort, and it always feels like, if we had written the code better the first time, we would be in a better place. So that was one thing.

Second, you asked where we are in the journey with CodeAnt AI, what we support, which languages we cater to. If you look at the entire spectrum of languages, you'll see that if you just support Python, JavaScript, and Java, you cover more than 60-65% of the languages out there in terms of the companies and enterprises using them. Right now we have availability in the four to five languages I just mentioned, plus some AI features that are language agnostic, so they have no language barrier.

Adil Saleh (0:06:36) - Okay, and one quick question on this. Since you are leveraging LLMs to make this solution work for programmers, how do you see it unfolding over time as multiple LLMs come up? Are you comparing different LLMs, or have you settled on one scalable solution with one LLM for the first two years? Or are you using all of them and picking the best of each to provide the best solution?

Amartya Jha (0:07:07) - The thing is, if you just use any LLM on the market, you won't get the desired output. Because what is the desired output here? If something is wrong, you should actually correct it without breaking any existing code logic. The good thing about an LLM is that it has context; it can understand the code base. The bad thing is that it can pretty easily hallucinate. You can pick any LLM and guardrail it as much as you want, and you will still see hallucinations. So how can we stop that? We took a very different approach here.
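Editor's note: a minimal, hypothetical sketch of probing the backend-health signals Amartya listed a moment ago, request latency and memory use, with only Python's standard library. The URL is a placeholder and none of this is CodeAnt's code.

```python
# Hypothetical illustration: measuring two of the backend-health signals
# mentioned above (request latency and peak memory). URL is a placeholder.
import time
import tracemalloc
import urllib.request

URL = "https://example.com/"  # placeholder; substitute your own endpoint


def average_latency_ms(url: str, samples: int = 5) -> float:
    """Average wall-clock latency of `samples` GET requests, in milliseconds."""
    total = 0.0
    for _ in range(samples):
        start = time.perf_counter()
        with urllib.request.urlopen(url) as response:
            response.read()
        total += time.perf_counter() - start
    return total / samples * 1000


def peak_memory_bytes(fn) -> int:
    """Peak memory (bytes) allocated while running `fn`, via tracemalloc."""
    tracemalloc.start()
    try:
        fn()
        return tracemalloc.get_traced_memory()[1]
    finally:
        tracemalloc.stop()


if __name__ == "__main__":
    print(f"average latency: {average_latency_ms(URL):.1f} ms")
    # Profile a deliberately wasteful allocation as a stand-in workload.
    print(f"peak memory: {peak_memory_bytes(lambda: list(range(1_000_000)))} bytes")
```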
Amartya Jha (continued) - What we did was this: every language is built on ASTs, abstract syntax trees. You could say they are the barest representation a particular language has. We studied how the AST is structured for each language and then wrote our own AST parsers and rule-based engines, which are pretty fundamental to that particular language. Then we use the information we have written. For example, in Python alone we have written more than 1,500 rule-based engines to auto-fix bad code. Once you have that kind of data, what do you do? You fine-tune any performant model out there on it. We have different clients; some have their own models, some use open-source models. You just have to fine-tune on this particular data set, and you'll get the best out there. So to answer your question: no model off the shelf will solve the problem. You have to do a lot of work, and not just grunt work; you have to do a lot of deep work to get to the desired outcome.

Adil Saleh (0:08:59) - Yeah, you have to make sure it's tailored for those cases, and you need to condition it that way. Okay. So, thinking about bad practices: with a lot of teams I've worked with, when we hit problems I'd ask why this happened, why what we did a year back wasn't scalable, why we can't fix it, why it's technical debt today. And they'd say it was just bad coding practices. So are you also trying to detect and fix those? Because a lot of early-stage companies can't afford experienced software engineers with the right skill set, capabilities, and background, and they tend to face this a lot.

Taylor Kenerson (0:09:49) - And to add to this: how do you prevent it from happening in the first place, so that as a new company you actually adopt best practices from the start?

Amartya Jha (0:09:58) - Awesome. So there are two aspects to this. The first is: how do I find all the bad pieces in my code base today and actually fix them? The second is: how do I ensure that no bad code will ever be pushed again? We focus on both areas, and we start from the entire developer journey. First, a developer is writing some piece of code in their editor. It can be PyCharm, a JetBrains IDE, Neovim, whatever. We sit directly there and automatically prompt the developer whenever they're writing bad code, so they know they have to fix it, and we directly suggest an auto-fix as well. Then, it can happen that a developer didn't follow what we suggested and pushed something anyway. For that we have a CI dashboard where organizations can see the entire health of their repositories. As a tech lead or engineering manager, you can see, across all your repositories, how many security vulnerabilities there are, how many complex functions, how many bugs and anti-patterns, how many functions are undocumented, and you can bulk auto-fix them. We have a client who scanned more than 1.5 million lines of code and bulk-fixed 3,000 to 4,000 anti-patterns in two days. And then there's the last piece: making sure no bad code ever gets pushed into the code base again.
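Editor's note: a toy sketch of the AST-based, rule-driven auto-fixing Amartya describes, written against Python's standard ast module (3.9+ for ast.unparse). The single rule shown, rewriting mutable default arguments, is a hypothetical example for illustration, not one of CodeAnt's actual rules.

```python
# Hypothetical single-rule engine: parse code into an AST, detect a classic
# Python anti-pattern (mutable default arguments), and auto-fix it by
# rewriting the tree. Requires Python 3.9+ for ast.unparse.
import ast


class MutableDefaultFixer(ast.NodeTransformer):
    """Rewrite `def f(x=[])` into the `x=None` sentinel idiom."""

    MUTABLE = (ast.List, ast.Dict, ast.Set)

    def visit_FunctionDef(self, node: ast.FunctionDef) -> ast.FunctionDef:
        self.generic_visit(node)  # also fix nested functions
        defaults = node.args.defaults
        # Defaults align with the *last* len(defaults) positional args.
        args_with_defaults = node.args.args[-len(defaults):] if defaults else []
        guards = []
        for i, (arg, default) in enumerate(zip(args_with_defaults, defaults)):
            if isinstance(default, self.MUTABLE):
                # Replace the mutable default with None ...
                defaults[i] = ast.Constant(value=None)
                # ... and prepend `if x is None: x = <original literal>`.
                guard = f"if {arg.arg} is None: {arg.arg} = {ast.unparse(default)}"
                guards.append(ast.parse(guard).body[0])
        node.body = guards + node.body
        return node


bad_code = (
    "def add_item(item, bucket=[]):\n"
    "    bucket.append(item)\n"
    "    return bucket\n"
)
tree = MutableDefaultFixer().visit(ast.parse(bad_code))
ast.fix_missing_locations(tree)
print(ast.unparse(tree))  # prints the fixed function
```

A production engine would combine hundreds of such rules per language, and the (bad code, fixed code) pairs they produce are also the kind of data set Amartya describes fine-tuning a model on.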
Amartya Jha (continued) - The best way to achieve that today is to sit either on the git hooks, where you simply don't allow the push, or on the PRs. We sit in both places. I'll talk more about the PRs. Whenever a person creates a pull request, we scan it for 1,000-plus security vulnerabilities, anti-patterns, and dead and duplicate code. We also write the pull request description for them, because developers are lazy; they don't like writing descriptions and changelogs. That way, whenever a PR is created, the reviewer knows exactly what the pain points in the pull request are, and the developer feels a sense of commitment to it, because fixing it is a small change they can do right away. Doing this makes sure that, one, you tackle the problem from a high level and see its entire depth, and two, you make it so repeatable that with every change you are fixing the code base.

Taylor Kenerson (0:12:44) - It's really key being able to see at a macro level what needs to be fixed. Especially, just like you said, for the technical person sitting on top of the engineers and developers, being able to see at a high level: okay, what is the overall health of my repository, and how can I actually take action? Like that one client of yours: 1.5 million lines of code fixed in two days. You know as well as anyone that would probably have taken far longer. So it's incredible how much time is saved and how efficient you can be by implementing something like CodeAnt. Now, touching a little on YC. I know you're part of the winter batch, so can you tell us about your decision to go into YC and how YC has been helping CodeAnt so far? In terms of projecting where you need to go with GTM, customer segmentation, how you're targeting customers, what your whole strategy looks like?

Amartya Jha (0:13:41) - Cool. So it was my fourth application to YC. It was not like I chose YC; I was trying to get into YC. I got rejected three times with the...

Taylor Kenerson (0:13:53) - Same idea, or a new idea each time?

Amartya Jha (0:13:57) - Yeah, a new idea each time, and a new co-founder the fourth time. So it was more of a desperate effort to get into YC, because I really wanted in; it has the elite folks who are building stuff. And our interview was also pretty different. You won't believe it: we actually got rejected, and one week later we got accepted.

Taylor Kenerson (0:14:25) - Wait, walk us through that. Has that ever happened before? How did that even work?

Amartya Jha (0:14:33) - We were interviewed by our YC group partners, Tom Blomfield and Bosnian. I'll tell you the entire process. We got a selection email, and I was back in India. I told all my YC seniors: we got the selection email for the first time in four years, let's crack it, how can I prepare for the interview? Every one of them gave us five different tips. The tips were like: be very confident; know your product end to end; say this, say that. So we got a lot of advice. What we didn't do was figure out what our actual pitch was. We just took their advice. Somebody gave us a list of hard questions, so we kept that, and we just memorized everything for the next 48 hours. That's all we did.
And when we went to the interview, the interview is very short, like a ten-minute interview. Tom and Tyler were there, and they started asking very basic questions: what are you trying to do, what are you trying to build, how is it going to take off? And our answers were pretty much unrelated; they were more about explaining why we were better. That, I felt, was not right. Instead of explaining the entire product, we just explained one particular piece of the problem, which they felt was not repeatable. The very next day, I got an email from Tom stating that what we were pitching was not a product; that particular piece of the problem was not repeatable. The one good thing that happened that day was that Tom wrote exactly what was missing. That's the best thing about YC. And the good thing for us was that what was missing was just the metric; we just had to show him that people were actually using us a lot. And we did have the metric: we had around 400-500 individual users at the time who were using us like crazy. I remember it was 12:00 a.m. in India, and I told my co-founder: wait, we'll shoot a video for Tom, show him the entire tool, show him the metrics, and let's see what happens. Let's write him a proper email, line by line, answering every question he has. And Tom, being a gracious person, looked at the email, and the very next day he replied in one line: let's have one more call. So we got another ten-minute interview, this time with Michael Seibel. And these guys were extremely well prepared for the interview; they knew everything about what you're doing. In this interview we didn't follow any advice. We just listened to the questions and answered them to the best of our abilities. Within eight and a half minutes, we knew we could get in. The next morning, I got a mail from Tom: would love to chat. We called Tom, and we learned we had gotten into YC. So yeah, the experience was different. We got rejected, then I mailed Tom, and we...

Taylor Kenerson (0:17:36) - Got it. That's unbelievable.

Adil Saleh (0:17:39) - And that on the fourth time, right? This was the fourth. Amazing, amazing. It's a story of being persistent and always knowing the reasons why the doors got shut in your face. It's a harsh reality. With a lot of companies that get accepted, we always talk about it like: oh, that's really nice, he got accepted, what did he do successfully, all of that. We hear all the same stories, to be very honest. We've spoken to more than 30 companies, and all of them had succeeded. A lot of them might have been rejected too, but they didn't share it. I really appreciate that you did; it's big of you. I think this episode holds more learning for people listening than most of those fabricated stories. And I love that you were mindful of what you should have done once you got rejected, like the usage numbers they wanted. You had everything all along, despite being overloaded with advice, and you realized it in the end. They did too, because they are smart people. Michael and Tom, these folks are so smart; they speak to hundreds of startups every single month.
And they know what people really mean; they see you through their eyes, not just the technology and the commercial side of it, but you as co-founders, as people, how committed you are, which they must have seen in you. So congratulations on it.

Taylor Kenerson (0:19:09) - And I think that's a huge point too; it goes back to the authenticity piece. You got advice from so many people, right? But you had something there, and you knew in your heart what you were solving. When you just take someone else's advice, you become a robot; you're trying to be like everyone else, when your real UVP is being yourself and sharing, just like you said, what you actually have, and going with the flow. That's actually kind of beautiful. I think more companies need to stop looking at everyone else and double down on themselves. Look at others, obviously, to get an understanding, but don't just take what worked for someone else and run with that playbook. There's a huge lesson in that.

Adil Saleh (0:19:53) - Absolutely. And one interesting thing I found with CodeAnt: individuals, whether they're building something of their own or working at some company writing their own code, can use CodeAnt too. That's the best part. A lot of products we come across are more like data analytics; they need companies that are B2B, they need the ops team to integrate the product, and they need a VP or a head, those kinds of people, to champion it. And it is so hard for them to get real user experience, which is super critical in the first year or two. So talk about some of the B2B customers, the larger teams using CodeAnt. What different experiences have you had, and how are you finding your way through the wins and losses, all these combinations of experiences with B2B customers?

Amartya Jha (0:20:48) - So we are building a developer tool, and the first thing you have to do is be very close to developers; you can't build this tool in isolation. So the first thing we did was launch a free version that absolutely solves the exact pain point developers out there have, and then offer multiple plans for people to get onboarded. In the very first month we onboarded two Series A startups, both of them with $20 million in funding; combined, they had around 100 engineers. We actually built the product with them; you can call it co-building. From that firsthand experience we learned exactly what we would have to build to serve any company. Today we are serving two unicorns, and we know how to scale the product for their needs, what we have to build for a team of more than 200-300 developers, and how to add value there. If you talk about the product journey, the underlying core remains the same; the only thing that changes is how you deliver that particular piece of value. For example, for a developer, getting everything in their own editor is the best thing they could want.
But for a VP of engineering or an engineering manager, the biggest key is seeing the entire health of the code across the organization, being able to manage good practices and, as you said, make sure bad practices are blocked, and control what gets checked in. For them the biggest priority is: I don't want bad code to be pushed in; how can I prevent it? Because you can't tell a developer every single time, don't do it. These teams have documentation ten pages long with every nitty-gritty detail, but nobody has time to read it. You have to enforce it in such a way that developers don't feel stopped or slowed down by the enforcement. It should come into their workflow and integrate so seamlessly that they think: okay, it's telling me something useful, and I can actually apply it to make my code better. Right?

Adil Saleh (0:23:05) - And it should let these VPs of engineering not just impose it but formalize it across operations, make it part of the ecosystem, part of the tech stack, so that engineers get nudged when they need to be. How does that workflow go?

Amartya Jha (0:23:29) - So when engineers are using the extensions, they get prompted whenever they write bad code, and whenever they try to push bad code they get prompted. Whenever they raise a pull request, they're tagged and told: hey, these are the issues you can actually fix, common issues, nit comments you have forgotten, or security vulnerabilities you haven't thought about. Because these are things engineers genuinely don't know about. Honestly, I was a developer for a very long time, and I've led teams. One thing I felt was that no engineer really knows about security vulnerabilities. We don't care until someone tells us; then it's, okay, wow, we missed it, let's add it. So some basic things we know, some things we don't, and there's some grunt work no engineer wants to do. Whenever a junior engineer raises a pull request, I might be writing 30 nit comments: you can improve this, you can do this, you can do that. And the engineer is thinking: oh shoot, my code actually works, why don't you just merge it? These are small things we can handle later.

Adil Saleh (0:24:36) - Yeah. Since I'm not that technical, I'm thinking from a different angle. Say an engineer is doing all this repetitive, tedious work; a lot of them get fed up. When we were building our first product, I met a lot of folks who got fed up within the first six months; they found it very tedious. So are you also taking initiatives to educate your customers on how this actually helps their wellness, their mental wellness, that side of things?

Amartya Jha (0:25:15) - So education is the most important piece here, and that's where we spend a lot of time. Just fixing their code base won't help in any manner. And trust me, no engineer will just accept a random suggestion that their code is bad; we all have a lot of pride. You have to tell them in a way they understand: we are not checking their code base, we are helping their code base. That is one. Second, we have to tell them what the actual impact of this particular piece is.
If I'm suggesting that you've missed error handling here, or the source configuration here, I want to tell you what the actual impact is in a real-world scenario, what can happen. Show them that, so they can relate to the problem and see: okay, yes, if I don't do this, this will be the problem. That education has to come with every single pull request we make, plus every single time they're prompted in their editor: this is the bad thing. They can see why it's bad and decide for themselves whether they need the fix or not.

Taylor Kenerson (0:26:22) - Very interesting. One last question before we wrap, Amartya. You said that when you first went to build CodeAnt, you onboarded two Series A startups to co-develop the application with you. How did you convince two Series A startups to work with you to develop something from scratch, to be that design partner, that tester, that user, and keep those lines of communication completely transparent, while you were still trying to build your company and provide value to them?

Amartya Jha (0:26:56) - So one good thing was that we got two design partners, and both of them were paying, even when the product was not ready. Why was that? Because the early customers you get for a product can't be people who don't really have the pain point. It needs to be a desperate customer, someone who needs it today, right now; if they don't get the product right now, they're bleeding. You need to find people like that. We were lucky. The first customer I got wanted the product in production within the next seven working days. Why? Because he had his code-cleaning week coming up at the end of the year, when he had to clean the entire code base from the past year. His whole team was ready, but they knew that if they did it manually, they wouldn't finish in seven working days. They needed a tool for it. Our incumbent, our competitor, is Sonar, which is at $400 million in ARR; they wanted Sonar with the intelligence of GPT, and they couldn't find one. So they were hunting in every WhatsApp group for folks like that, and somebody recommended us. In seven days we went from not having the product to getting the compliance checks done, deploying in their environment, building the product, and giving them the basic set of features to go live. Today we have their customer success story on our website: they scanned 1.5 million lines, documented 10,000 functions, which is like seven years of their code base, and auto-fixed more than 3,000 bugs, or anti-patterns, you could call them. They had a great seven days, and they're still actively using it. So coming back to your point: your first few customers can't be people who merely happen to have the problem. It has to be someone who desperately has the problem right now. Even if your solution is half-baked, or not even built yet, they should be excited by the idea and think: okay, if this person can deliver this, it will be a ton of value for me.

Taylor Kenerson (0:29:22) - I love that. We're going to end there, because that is a huge nugget. Thank you so much for sharing. We so appreciate you. Thank you for your time. I'm so excited to get this episode out and to have people listen.
There are so many valuable insights here. Just thank you so much for sharing.

Adil Saleh (0:29:37) - Absolutely. Amartya, it was really nice meeting you. Quite inspiring, amazing, good job. You left us inspired; there's one more thing we're going to do better after talking to you today.

Taylor Kenerson (0:29:50) - Thank you, sir. Have a good day. We'll talk soon. Bye.
