9. 9. 2024
11 min read
The Impact of Proprietary AI Technology
Another Startup Huddle summary blog is here! This time, we're diving into the AI world with Zach Rattner, co-founder of Yembo.ai. Zach shares his journey of building an AI-powered inspection platform, tackling challenges, seizing opportunities, and the importance of clear communication with stakeholders. Keep reading to get the scoop!
Silvia Majernikova
Social Media Marketing Manager
Zach Rattner journeyed from corporate life to startup founder, leveraging his computer engineering background to co-found Yembo.ai, a leader in AI-powered virtual surveys. Last year, he added an impressive milestone to his career by publishing "Grow Up Fast: Lessons from an AI Startup," where he shares candid and insightful lessons from his journey in building a startup in the fast-evolving world of AI.
Why do you believe AI products are experiencing such rapid growth today?
In short, people have caught on to what's possible now. When I started building Yembo some years ago, there was an inciting incident that made it possible. There was this academic benchmark called ImageNet, which you can think of as a thousand-way multiple-choice test. If I hold up a picture and ask you what it is, it's very easy for a human to say, oh, that's a cat sitting on a sofa, or that's a dog, or that's a barn. The ImageNet competition challenged algorithms to do the same. Companies and universities would submit their algorithms and compete on data that humans had labeled: people would go through the pictures and write down the answers, but the answers were withheld. You'd be shown the picture, and your code had to say what it was. In 2015, Microsoft submitted a solution that won, and it crossed a threshold: humans had been benchmarked at about 95% accuracy on this task. Take a typical college-educated person, show them 100 examples, and they'd get about 95 of them right.
So, for the first time in history, the best object recognition algorithms were better than people at identifying items in images. It blew my mind. This would change everything, but people had yet to realize it. If you looked at news articles, nobody really seemed to be talking about it that much. The fundamental technologies and algorithms kept evolving, leading us to where we are today; there's more innovation happening in the space now than there was then. But the public has caught on. When these discoveries are made, it's no longer just a bunch of academics in a corner, shielded from mainstream society, talking about it. The time to adoption has shrunk dramatically: something interesting happens, it gets deployed, and people see it quickly.
Fundamental discoveries in the science realm, the papers that come out, don't have very good user interfaces. In that paper I mentioned, where Microsoft won, you can read it; they report their error rates and show all these plots, but as a layperson, you can't do much with it. The interesting part about many of these AI technologies is that they benefit from parallel computing. It's not something you run on a standard computer; high-end graphics processors are powerful, and almost necessary in a lot of use cases. So it's different from just reading the paper and doing something. In many cases, you have to build physical hardware. You have to think through what the AI can do and how you can bridge that gap. If I publish a paper and say, 'Look at this new capability,' it only moves the needle on workflow adoption if it gets implemented practically.
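The ImageNet scoring Zach describes boils down to comparing a model's ranked guesses against human-written labels and computing accuracy. A minimal sketch of that idea in Python (the predictions and labels below are made-up stand-ins, not real benchmark data):

```python
def top_k_accuracy(ranked_predictions, true_labels, k=1):
    """Fraction of examples whose true label appears in the model's top-k guesses.

    ranked_predictions: list of lists, each sorted best-guess-first.
    true_labels: the human-written answer for each example.
    """
    hits = sum(
        truth in guesses[:k]
        for guesses, truth in zip(ranked_predictions, true_labels)
    )
    return hits / len(true_labels)

# Toy example: three images, each with the model's ranked guesses.
preds = [
    ["cat", "dog", "fox"],      # correct at rank 1
    ["barn", "house", "shed"],  # correct at rank 2
    ["sofa", "chair", "bed"],   # wrong entirely
]
labels = ["cat", "house", "dog"]

print(top_k_accuracy(preds, labels, k=1))  # 1/3: only "cat" is right at rank 1
print(top_k_accuracy(preds, labels, k=5))  # 2/3: "house" counts within the top 5
```

The "95% human accuracy" figure Zach cites is exactly this kind of number: the same scoring function applied to human answers instead of model outputs.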
Do you believe the current AI boom is another bubble, or is it fundamentally different from past trends like Web3 and blockchain?
Identifying a bubble requires looking at real traction. It's relatively easy to get people tweeting and talking about you. There's a lot of hype, but what separates fluff from things that move the needle is whether the technology is actually being deployed and embedded in workflows. Many people use ChatGPT once or twice, tweet about how cool it was, and never return to it. As an engineer, I feel that technical differentiation in the product is one big standout. A lot of these companies are just API layers on top of OpenAI's APIs; I don't see a ton of innovation there. But getting AI technology deployed and embedded in a workflow, that's where it will have real transformative effects, because you can do things like give people superpowers.
You can make something easier to use, easier to understand, higher fidelity, and less effortful. That's where a lot of this innovation will come from. When the algorithm becomes possible, there's a transformative spark.
There was a lot of speculation from people buying tokens as if they were buying lottery tickets, but it didn't translate to real-world usage. Many of these companies would do an ICO and generate some hype, people would buy up the tokens, and then the project often didn't go forward. Not all were outright scams; many were well-intentioned, but they just couldn't get traction for one reason or another. In this case, though, it feels different, because it's happening all around you. I told people four or five years ago, you probably use AI and don't even know it, but now everybody knows it. It's not a hidden thing anymore, and it's getting harder and harder to avoid.
How can companies cut through the noise of hype cycles and ensure they work with the right technologies and firms?
Well, I don't think a hype cycle is necessarily bad. Mobile computing in the 2010s was a hype cycle. The internet in the nineties was a hype cycle. Yes, there was a lot of buzz, but we're not worse off for those inventions. It's easy to get caught up in things in the interim, but a real benefit has come from these algorithms being possible now. So how do you cut through all the noise, and how do you work with the right firms? I have a biased view as an engineer, but technical innovation should be the key, and you can't control what you don't understand. Companies that build their whole business model around wrapping someone else's technology can't go that last mile. If you cannot cook up your own core technology as a company, that will be a competitive disadvantage. It's relatively easy to put something together; making something last is more complicated.
Why does it seem so easy to launch new products in the AI space, with companies emerging and gaining significant valuations in just a matter of weeks?
There's a lot of interest in the space, so it's an exciting way to get people interested in your product. But there's also a lot of tooling available. Many of the companies out there, often startups themselves, offer APIs and services that make it easy to embed AI into your workflow. And I don't think that's necessarily bad. I would rather live in a world with too many products to choose from than one where one or two powerful players monopolize it. It does mean there's a lot to sift through.
You can even ask ChatGPT to summarize the best ChatGPT plugins. It's good to experiment, it's good to try things, and it's good that people can put something together. If you have an idea and a couple of free weekends, you can build a working, viable thing and have people use it. It's invigorating waking up in the morning and trying it all; I've probably signed up for 15 or 20 different things this week. It's a sign of a vibrant ecosystem. AI makes many powerful things possible, and at the end of the day, I'm an optimist about it all. These things can be used for good, and with the right people around, I think they will be.
What do you see as the biggest challenge for someone trying to start a company in the AI space despite the overall positive outlook on innovation?
Well, lots of things are hard. I would say understanding your target market and what their real needs are. There are user studies and things like that, but there are perceived needs and actual needs. It comes down to understanding what your AI is going to augment. There's some workflow, some process, some way people do things today without AI, and you need to understand how AI will improve it. If a certain element is getting disrupted, think through how to make that beneficial to everybody. In our world, for example, we had a lot of pushback in our early days when people didn't understand that this was a tool to help the sales agents. People were nervous: is this going to take my job? Now that we have some deployments under our belts, we can demonstrate it. We have data and case studies showing that this doesn't take away your job; it frees you up to win more business, grow your company, and be more efficient. It doesn't replace you. The AI isn't building rapport with the customer; it's doing the back-office bits. So in the early days, consider who will be affected and how you can make it great for all stakeholders; that's the right time to do it. If you haven't thought about it and you've already shipped, it's too late. There's a lot of urgency to go build, and people sometimes forget to be thoughtful about the stakeholders, which can derail things.
How would you approach validating multiple ideas as a founder in the AI space, and when would you know that you've achieved product-market fit with one of them?
It's great to have ideas. At a startup, you're almost always execution-constrained: you have more ideas than you have the bandwidth to pursue. If you have five ideas, get it down to the one you'll focus on. When you do that, think through the metrics. We always like to work backward when starting a new product: if this thing works well, what would I like to see happen next? Often it's not just the numbers themselves, like x daily active users or y dollars of revenue; it's more about what that adoption would look like and whether you can get your first champion. People always talk about big scale. They want to go from idea to a million users, and sure, that's great, but step one is "Can I win over one person? Can I have my first customer who loves the product?" and then build out from there.
On product-market fit, I see a lot of folks trying to go too fast and too wide. The day your MVP goes live is probably not the best day to blow a bunch of money on Google Ads trying to get a million people to use it. Try to get your first users instead: understand your target market well and find 10 to 15 people. You can do it in multiple ways, but find that target group, see if the product resonates, and go in with an open mindset. Don't assume it will be 100% perfect on day one; the iPhone took a couple of years to get copy and paste. And when people come back with critical feedback, that's almost always a stronger signal of buy-in than hypothetical praise.
At first, when people emailed me negative feedback after we launched something, I took it hard. But I realized it's a signal of buy-in: they're trying to make it work. After you launch a product, leave time in your schedule; don't just move on to cranking out more features. If a customer has played around with your product for a few days and wants a two-hour call with you, take the time. It's a really good way to see whether it's going to work out.
Regarding feedback, what are the best methods for gathering customer insights beyond traditional user interviews?
Feedback comes in all kinds of forms. User interviews certainly work; they're my favorite, because you can pick up on subtle things like body cues or people checking their phones while talking to you. They're telling you this isn't super important, even if they don't say it in words. But there's more you can do. At the end of an interaction, ask how the experience was. Google Maps and Amazon do a good job with this: they never assume the customer got what they needed; they still ask for feedback. So build that into the product, ask for feedback at critical junctures, and make sure you're asking a question for each workflow you want to monitor. You can also use analytics: if people tell you they love a feature but spend three seconds on the page, maybe they don't love it that much. We should instrument our products well enough that if something goes wrong, we aren't depending on the customer calling to complain. Blend all these sources, and use user interviews as one component of a robust feedback strategy.
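The mismatch Zach mentions, stated feedback versus actual engagement, can be caught with even crude instrumentation. A minimal sketch in Python (the session fields, rating scale, and thresholds here are invented for illustration, not Yembo's actual telemetry):

```python
from dataclasses import dataclass

@dataclass
class FeatureSession:
    user_id: str
    rating: int            # 1-5 answer to the in-product "how was that?" prompt
    seconds_on_page: float # time actually spent in the feature

def flag_mismatches(sessions, min_engaged_seconds=30.0, high_rating=4):
    """Return users who rated the feature highly but barely used it."""
    return [
        s.user_id
        for s in sessions
        if s.rating >= high_rating and s.seconds_on_page < min_engaged_seconds
    ]

sessions = [
    FeatureSession("alice", rating=5, seconds_on_page=3.0),    # says "love it", 3 seconds
    FeatureSession("bob",   rating=5, seconds_on_page=240.0),  # genuinely engaged
    FeatureSession("carol", rating=2, seconds_on_page=5.0),    # honest dislike
]
print(flag_mismatches(sessions))  # ['alice'] (worth a follow-up interview)
```

Flagged users like these are exactly the ones worth pulling into a user interview, where the subtler signals Zach describes can surface.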
If these handpicked highlights have sparked your curiosity, you won't want to miss the full conversation with Zach Rattner on our YouTube channel or your favorite platform.