People & AI - Navigating Privacy in AI with Patricia Thaine

In this episode of People & AI, join host Karthik Ramakrishnan and his guest, Patricia Thaine, CEO and co-founder of Private AI, as they unravel the complexities of AI in a privacy-centric world. Patricia illuminates the risks of creating embeddings from personal information and shares essential advice on selling technical products to large enterprises. The duo discusses the nuances of deploying AI models, the intricacies of data privacy regulations, and the evolving business landscape in the post-GPT era. With a rich background in linguistics and computer science, Patricia offers unique insights into the importance of privacy in innovation, revealing how Private AI's technology is pioneering data anonymization. Don't miss this insightful conversation filled with expert analysis, personal anecdotes, and a shared admiration for the intersection of AI and privacy.
March 29, 2024
5 min read

Listen on

Apple Podcasts: https://lnkd.in/ga4t4WuZ

Spotify: https://lnkd.in/gBzmKsDE

YouTube: https://lnkd.in/gbDP34SU

Transcript

Karthik Ramakrishnan [00:00:05]

Welcome to another episode of the People & AI podcast, where we explore the forefront of artificial intelligence. I'm your host, Karthik Ramakrishnan, CEO and co-founder of Armilla AI. Join us as we delve into AI's latest advances, the ethical considerations, the risks, but more importantly, the exciting capabilities this technology can provide, particularly in the context of the enterprise. And I'm very excited about today's episode. Patricia Thaine is a good friend of mine whom I've known for a number of years now, and we had a chance to work together recently as well. Patricia is the CEO and co-founder of Private AI and has spent her career at the intersection of linguistics and computer science. She studied homomorphic encryption and NLP at the University of Toronto and worked on computational methods for deciphering lost languages. She has also published privacy-preserving methods for natural language processing. Patricia is an official member of the Forbes Technology Council and was named a Technology Pioneer by the World Economic Forum in 2020. I hope that did justice to all the various things; I know there's a lot more here than I was able to cover. But Patricia, in your own words, can you tell us a little bit about your journey leading up to the founding of Private AI?

Patricia Thaine [00:01:20]

Sure. Thanks so much for that kind intro, Karthik. So, the inspiration behind Private AI really stemmed from an understanding of the need for privacy in innovation, especially as AI expands. I started working on acoustic forensics in my PhD, and it was very difficult to get data because of the privacy concerns around it. Acoustic forensics lets you understand who's speaking in a recording, what kind of educational background they have, et cetera, et cetera; basically, gathering information about the speaker. The purpose is often to improve automatic speech recognition, but it can be used for many, many other, far more nefarious purposes as well. So on one side of the coin you've got: you shouldn't use this data because it can be problematic, and you shouldn't capture this data because it can be problematic. And on the flip side you've got: you can't get this data, and you want to innovate. How do you reconcile the two? Make it so that privacy requirements are not a barrier to innovation, but in fact enable you to get access to data that you otherwise would never have had a chance to play with.

Karthik Ramakrishnan [00:02:29]

Yeah, I recall very specifically, there was a very large telecom operator that we were trying to work with during my tenure at Element AI. One of their biggest challenges was they couldn't give us any of the data. We were basically trying to help out their customer service department through the call logs, and a lot of the conversation in those logs involves confidential PII, so we couldn't do anything with the data. And I guess this was pre-Private AI, because if Private AI had existed, I think that would not have been an issue in 2016 or 2018. So that's fantastic, and obviously you're solving a very key problem, particularly when we think about regulations like the GDPR and the Canadian data privacy regulations. Could you speak a little bit about how Private AI's technology actually helps in those two contexts? One, how do you make that data actually available, and what types of data do you deal with? And two, what does the process of redaction look like in the context of the GDPR and these regulations?

Patricia Thaine [00:03:37]

Sounds good. I'm going to nerd out with you here, so bear with me and stop me if it's too nerdy. So, first of all, when thinking about what technology to build with Private AI, we were trying to think of a technology that could be generalizable, that would be really easy to integrate, and that would help companies comply with data protection regulations. And when looking into what was available, it was really clear that even the very basics of data protection regulations couldn't be adhered to, because the technology just wasn't there. So if you look at the GDPR, it became this very aspirational regulation, and the technology is still being built towards being able to comply with it. What these regulations tend to require is things like data minimization: you have to minimize the amount of personal information you collect to the absolute minimum you need for the task you're going to be performing. There are requests to be forgotten and access-to-information requests: you need to know what kind of information you're collecting about a particular individual, you need to know where it's stored, you need to know which tasks it's being used for, and you need consent for each of those tasks. And what the GDPR says is that if you anonymize data, you no longer need to comply with the GDPR for that data. Now, one way we help is very much on that data minimization front. In the situation you described with the telco, you need to be able to remove things like credit card information, or any healthcare information that was mentioned, because somebody will talk about what happened over the weekend: maybe they broke an ankle, maybe they have cystic fibrosis, who knows? It's stuff that should not be shared with a third party. You definitely need to remove the names, and maybe you need to remove more personal information like dates of birth. And this really messy data is unstructured. You've got, of course, the transcript, but in that transcript you've got the errors from the ASR transcription system, you've got the disfluencies of human language, and you might have multilingual conversations that mix English with Spanish or Français. There is so much to take into account when it comes to unstructured data, and that's really where you need AI in order to understand the data you have in order to minimize it. And then you can take it one step further and anonymize it. But I want to urge a lot of caution when you use the word anonymization. Oftentimes people think anonymization means removing names and Social Security numbers and you're good. But they won't consider quasi-identifiers: things like religion, approximate location, or political affiliation. When you combine these together, they increase the ability to recognize somebody exponentially. There are some use cases in which anonymization is very useful. There are other use cases in which redaction is really what you need; for example, if you have really sensitive information like credit card numbers and you don't want fraud happening with that credit card number. So ultimately, to answer your question, the way we help is we have the most accurate system in the world for identifying personal information in this really messy unstructured data: text, audio, images, documents, across 52 different languages and growing.
And we make it available for developers to integrate in a very easy way into their software pipeline, their product, anywhere they want in their environment.
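
For readers who want to picture that integration, here is a minimal sketch of calling a PII-redaction service from a pipeline. The endpoint path, request fields, and placeholder format are illustrative assumptions for this sketch, not Private AI's actual API:

```python
import requests

# Hypothetical endpoint for a redaction container running in your own environment.
REDACT_URL = "http://localhost:8080/v1/redact"  # illustrative path, not the real API

def redact(text: str) -> str:
    """Send raw text to the PII-detection service; get back a de-identified copy."""
    response = requests.post(REDACT_URL, json={"text": text}, timeout=10)
    response.raise_for_status()
    # Assumes the service returns text with entities replaced by typed placeholders.
    return response.json()["redacted_text"]

line = "Hi, this is Maria Lopez, card number 4111 1111 1111 1111, calling about my bill."
print(redact(line))
# e.g. "Hi, this is [NAME], card number [CREDIT_CARD], calling about my bill."
```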

Karthik Ramakrishnan [00:07:03]

Very cool. And how do you deal with training versus inference? Nothing's easy, but it might be easier to pre-redact or anonymize training data before you put it in. Inference, though, is real time. So how do you see those two situations, and where do you play? And if it's real time, I'm really curious: how do you keep up with the scale that might be required in terms of API calls?

Patricia Thaine [00:07:31]

Yeah, so we do play a lot in helping sanitize training data. We also play in the sharing of information, period, regardless of whether it's for AI or not. When it comes to training, there are a few things that folks don't know a lot of the time. One, if you are fine-tuning on data or training a model on data, it is going to, quote unquote, memorize that information, and it can be spewed out in production. So you have to be really cautious not only about the personal information, but also about the confidential information that your company might have. The original level of access control you had for that data, because of that personal or confidential information, should be the level of access control you apply to a model trained on it. Another thing that a lot of people don't know is that when you're creating embeddings, and embeddings are becoming much more popular now, if you create them from personal or confidential information, you can reverse the embedding and recover that information. So you need to be concerned about it there too. So the way that we help is by stripping out that personal information when you don't need it. And the cool thing about unstructured data is that a lot of the value comes from around the personal information. The sentiment in a particular call: how well did this customer service agent do? What sentiment is associated with a particular product? What topic is being talked about? Can you summarize this email? Et cetera, et cetera. There's so much you can do, and the personal information is just toxic and actually doesn't allow you to generalize. And another cool finding we have is that removing as much personal information as possible, including quasi-identifiers like religion, location, and origin, actually reduces the bias of the output.
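
To make the embedding point concrete, here is a minimal sketch of the pattern Patricia describes: de-identify text before it is ever embedded, so that an embedding-inversion attack cannot recover personal information from the vectors. The `redact` helper is the hypothetical one sketched earlier, and the embedding model name is just an example:

```python
# Only de-identified text reaches the embedding model and the vector store,
# so inverting a stored vector cannot recover names or addresses.
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # any embedding model works here

raw_ticket = "John Smith at 42 Elm St says the router keeps dropping Wi-Fi."
safe_ticket = redact(raw_ticket)  # hypothetical helper from the sketch above
# -> "[NAME] at [ADDRESS] says the router keeps dropping Wi-Fi."

vector = model.encode(safe_ticket)
```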

Karthik Ramakrishnan [00:09:19]

Interesting. And it's more of a clarification, because you've touched on this now: inference time and hallucinations, right? In the world of generative AI, this is a real problem, where models have been known to reveal private information they shouldn't have. So if they had used Private AI in that situation in the training, you're saying that information would not have been part of the embeddings. If the output shouldn't reveal it, the input shouldn't contain it. And so if they used you, then we're really getting rid of that problem of the model hallucinating it out.

Patricia Thaine [00:10:03]

If they don't see the information, they can't memorize it. And the kinds of things that have come out are, for example, there was this company in South Korea called Scatter Lab that had trained on billions of conversations between users, and then in production, this was several years ago, it was spewing out names and, I believe, addresses of some users to other users. That would not have happened if the model had not seen that information originally. And there are papers by, for example, Carlini et al. from a few years ago already showing that GPT-2 memorized addresses from data online. You have papers about character language models memorizing the most likely sequence of numbers when one is hidden inside the Penn Treebank dataset, like the paper by Carlini et al. called The Secret Sharer. So there was some inkling in academia already about this problem, and a lot of that has yet to pass over to industry.

Karthik Ramakrishnan [00:11:05]

No, that makes a ton of sense. And I know you reference his papers quite a lot, so it looks like a lot of the work you do builds on the research that he's done, which is fantastic. Let's switch gears a little bit. We have a good idea of what you're trying to achieve with Private AI; let's look at the other side. When you go to a client, and I know your clients at this point are typically large enterprises in regulated industries, there are significant ramifications if privacy is not adhered to. So in that context, let's talk about some of the challenges you face. It's not just about the technology. I would love for you to put on a different hat and imagine you're talking to another startup founder who's trying to sell to a B2B enterprise. What would your advice be, having tried to sell something pretty complex into a B2B environment?

Patricia Thaine [00:12:07]

So the easiest sales are the ones where they've tried to solve the problem themselves and recognize how difficult it is. The most difficult sales are when they have tried to solve the problem themselves, recognize how difficult it is, but the person who worked on the solution internally is still working at the company, and their job is on the line if it doesn't work out. Ideally their job is not on the line and they can be moved to another project, because there's a lot to do on core products in an organization. So I can give particular advice on how to sell a technical product to engineers and product owners, and it's to be very detailed about the accuracy of the models. Be very detailed about the performance of the models for latency and throughput. Be very detailed about why this is a hard problem and why you have an edge at solving this hard problem. Don't hide behind "this is actually secret information that we're not sharing." These are developers; they're not going to take kindly to that. Be upfront with them, tell them why this is difficult and why you've solved the difficulties they faced, and they will appreciate you. Then be very meticulous about communicating what you do with regards to security. We deploy within our customers' environments, and what that means is that we have to constantly run security scans of the container we deploy there. We have to communicate what our security practices are in general, and we have to be really on top of our game there. Another piece, of course, is general discovery: try to understand the motivation. What problem are they solving? What have they done before? Who owns this project? Who owns the budget? And certain enterprises might need more information about ROI. This ROI can be very tricky, because they might be working on a new project with this data. To give you some idea, 80% to 90% of the data that companies have out there is unstructured data, and that is what AI is unlocking. So what can they do with it? That is a question every organization is still in the process of answering for itself, but they might rely on vendors to help them answer it. So if you have examples of other organizations' ROI, without naming names, and if you have information about best practices for figuring out that ROI, however much you can handhold the champion within the organization to promote your company for that budget will be time well spent.

Karthik Ramakrishnan [00:15:05]

Excellent. That was a good review, and a lot of this is the general enterprise sales process, right? How do you walk a client through the process and understand what the steps are and where their pain points are. Excellent. Now, can we add a layer: how does this get even more complex when you're trying to sell an AI solution?

Patricia Thaine [00:15:29]

Good question. Okay. We do get a lot of questions around accuracy, and one thing we've been doing is working with Armilla to provide a certification of how accurate our system actually is. Having that third-party report lets us showcase that this isn't just us saying it; this is a third party, backed by reinsurance, saying it, and there is that guarantee around the accuracy. That makes a big difference to the time for POCs, because our customers used to spend a lot of time testing the system. By shortening that, our time to contract is a lot faster. It's also something they can show their higher-ups to demonstrate this has been thoroughly vetted. One other way customers might vet us is by calling each other up and saying, hey, do you know this company? Have you tried their system? I hear you're a customer. However, when it comes to privacy technology, I do find that people are even more sensitive about sharing publicly that they are a customer, because it's similar to security software: you don't boast about your security posture, because it might make you a little more vulnerable if people know where the cracks are. So having anything that helps speed up that POC makes a big difference.

Karthik Ramakrishnan [00:17:07]

That's excellent, and congratulations, certainly, on getting an independent verification. It's no different from, and I'm sure you face this in B2B sales, having to do your cybersecurity SOC 2 compliance, which proves that you are secure and that you run these security checks on a regular basis to keep your platform up and running. Similarly, a guarantee could definitely help. But something you just said points to a big problem in your situation, I would suppose: they can't talk about you, but you want them to talk about you, but you also don't want them to talk about you. So it's a bit of a tricky situation from the security context. But you're doing really well, and you've given us lots of tips on what works; obviously the advice you've shared is the advice you're following, and it has worked for you. From an enterprise standpoint, though, on-prem deployments of models are challenging, right? Cloud, of course, is easier: you have one multi-tenant solution, and that's where the world wants to go. But when you have these on-prem deployments, how do you deal with the complexity, and what are the complexities there to begin with?

Patricia Thaine [00:18:29]

Yeah, so because we deploy as a container and run as a REST API, it's almost like they're using the cloud. There's an initial bump: you need to be able to deploy this internally so that your organization can access it. The complexity there is, do you have a container repository in the first place? A lot of companies do. Then they can deploy it, and their entire organization already has access, so it's very easy to just get going. And if a team in one part of the organization finds out about you, they can go in and use that API without too much hassle. The main thing is being able to clearly communicate the benchmarks. If the throughput and latency performance isn't matching what you expect and what you've shown them, you need to go in and dig into how it's being deployed, and then make recommendations. The complexity is really: what kind of hardware are they running? What kind of boundaries are you working with within that hardware? Are they trying to run on their local laptops, for example? Are they trying to run just on CPU, or on particular GPUs? Having that comprehensiveness of detail really helps in moving things along and pinpointing whenever anything might be going wrong. We sometimes get asked, can you deploy your model on-prem and have somebody tune it with our own labels? One reason we hesitate to do that is because, with the current models, we are able to say, okay, this is a configuration problem, or this is a problem with our output. If it's a configuration problem, we can dive in and help the client very quickly. If it's a problem with our output, we train up a new model and send it to them; it's essentially like a bug fix. If folks start putting in their own data, you no longer have the ability to distinguish whether it's a configuration problem, a problem with your model, or a problem with their training data. And the amount of time we spend ensuring that our training data is extremely accurate, that's the make or break, in addition to some other secret sauce. A big piece of the make or break is the quality, breadth, and depth of the data we collect, and we spend a lot of time training the people who annotate our data in multiple languages, so we would need to expect the same from anybody else annotating the data. So in sum: have the documentation necessary when it comes to benchmarking, have the ability to diagnose problems efficiently and don't get in the way of that ability, and have a system you can deploy widely across multiple users, so that you can learn from the users as they use your system and everybody else can benefit from your learnings.
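
Since Patricia emphasizes communicating latency and throughput benchmarks for self-hosted deployments, here is a minimal sketch of how a customer might sanity-check a locally deployed container. The endpoint is the same hypothetical one as above; a real benchmark would also vary payload sizes and client concurrency:

```python
import statistics
import time

import requests

REDACT_URL = "http://localhost:8080/v1/redact"  # hypothetical self-hosted endpoint

def benchmark(samples: list[str], runs: int = 50) -> None:
    """Measure single-client request latency against the deployed container."""
    latencies = []
    for i in range(runs):
        start = time.perf_counter()
        r = requests.post(REDACT_URL, json={"text": samples[i % len(samples)]}, timeout=30)
        r.raise_for_status()
        latencies.append(time.perf_counter() - start)
    latencies.sort()
    print(f"median latency: {statistics.median(latencies) * 1000:.1f} ms")
    print(f"p95 latency:    {latencies[int(0.95 * len(latencies))] * 1000:.1f} ms")
    print(f"throughput:     {len(latencies) / sum(latencies):.1f} req/s (one client)")

benchmark(["Call from Jane Doe, DOB 1990-01-01, about claim 12345."])
```

If the numbers fall short of the vendor's published figures, the discussion above suggests looking first at the deployment itself: CPU-only versus GPU, container resource limits, and so on.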

Karthik Ramakrishnan [00:21:46]

That's amazing. Each one of these could be blown up into its own set of sub-points, and we could spend a lot of time on that, but this is a fantastic quick safari of the things one should consider. Where do you see the future of privacy going, number one? And two, with the new AI regulations coming through, do you see a bigger onus on, or change in, the privacy requirements around data? What is the impact there? Put on the hat of an enterprise, or even a startup trying to sell to the enterprise: what should they be thinking about as to where this industry is going? Do you have a view on that?

Patricia Thaine [00:22:27]

So the thing is, a lot of the baseline privacy requirements apply to what's going on in AI, period. The big shift is more of a perception shift, an understanding of why privacy is so important. For example, the GDPR applies to any EU citizen regardless of where they are in the world. So if you are web scraping and you have an EU citizen's address, and they did not give you consent to train your model with that address, you're not compliant with the GDPR. That is, I think, something that isn't well understood, and I don't think the courts have dealt with it yet. So a lot of the future of privacy is going to depend on what happens in the courts as folks train on scraped data, and as folks start fine-tuning models on personal information without fully understanding the consequences, because it's also very difficult to pinpoint that this person is an EU citizen, or this person is a citizen of Brazil so we need to comply with the legislation there, et cetera, et cetera. The future of privacy is very much about people paying much more attention, and, luckily, about there being tools that let you deal with it much more easily as a developer. We always believed that developers are the ones who are going to be integrating privacy into their software. When we started, it was too early for that, but we are starting to see the signs that developers are ready. So we're very much focused on allowing developers to take these modular components we're building for compliance and put them very easily into their pipeline, so that they have very fine-tuned control over what kind of data they are collecting and a fine-tuned understanding of what is happening before it hits their systems. Previously, what you'd have is a complete mess on the other end: data in a data lake, or companies with no idea what kind of personal information lives in their environments. You've got a lot of issues with that, because, one, you don't know what kind of risk you're incurring, and two, you need organized data in order to use it effectively. What these data protection regulations did is basically force an overhaul of internal systems within organizations to put order into complete disorder. And now it's time for developers to move that order from the back end to the front of the processing: directly on device, directly within the products they build, and anywhere there's a data pipeline ingesting information.

Karthik Ramakrishnan [00:25:15]

That's interesting. Now, switching gears a little bit: a lot has happened in the last 16 months, where AI has suddenly become the topic du jour. I mean, every ten-year-old is talking about AI now, so obviously something's changed, and we know what that was: OpenAI coming out with ChatGPT. I think it brought AI to life in a practical, consumable format, showing what this technology can do. My mom now knows what AI can do; she finally figured out what I've been doing for a living for the last ten years. But the question is, how has that changed the business dynamic for you, from the pre-GPT era to the last 16 months of the generative AI era?

Patricia Thaine [00:26:02]

A lot less outbound, a lot more inbound.

Karthik Ramakrishnan [00:26:05]

Amazing.

Patricia Thaine [00:26:08]

Which is wonderful; that's what companies want. But it's also a lot less education, because people are starting to understand that they need it. There's also a lot more understanding of the need for accuracy, and that is growing. There's still a lot more that can be done with regards to people's understanding of what personal information is in the first place. But overall, there has been a massive shift in the education level of the people working on innovation within organizations, across the board, regardless of vertical.

Karthik Ramakrishnan [00:26:49]

Amazing. Wow. And have you seen new verticals coming in now, testing with GPT, that you did not see before? Could you mention what they would be?

Patricia Thaine [00:26:58]

I think everyone's testing every single one; I don't think there's anything surprising there. There's a lot more going on in legal and HR than before, but it's not surprising that they are looking into it. And there's always been a focus on privacy in healthcare, and always a focus on privacy in insurance, so the level of education there, I'd say, was already very high. The level of education within banks was already fairly high as well. The surprising thing is that more traditional industries like legal, which are slower to adopt technologies, are doing so at a faster pace.

Karthik Ramakrishnan [00:27:41]

Okay. Yeah. And in the last batch of YC companies, I think over 50% were AI companies, and this cohort might be even higher. So, and I don't know how to say this without sounding negative, it feels like we should ask: how much of this is hype versus how much is here to stay? I fear we almost want to let a little air out, because we might be in overhyped territory. When technology doesn't deliver what people want quickly enough, the reaction is always more extreme on the other side, right? So, thoughts on that?

Patricia Thaine [00:28:29]

Yeah, to a certain extent. I think that hype happens when you try to apply a technology to everything, when it shouldn't necessarily be applied to everything. Take blockchain, for example. It brought distributed databases to places that needed distributed databases, because ransomware attacks were rampant, for example, but it didn't need to be applied to everything. As time goes on, there's more of an understanding in the community of what it's good for and what it isn't good for. I think we're going to see that with large language models and AI in general. There's hype around "let's try this on everything" without necessarily looking at the fundamentals and seeing where those fundamentals are actually going to make a big difference. And then what's going to happen is an understanding of what it's actually good for, maybe a bit of a burst, but then growth of the actually efficient uses of the technology. And I guess "invest with caution" is the main thing to learn there; the previous hype cycles can teach us what to look for in a technology. One big thing is that whatever you fundamentally need to build your product should not be entirely reliant on somebody else's company. So one thing I'd ask, if I were investing in such companies, is: what's your data strategy? Number one, how is that going to be a good moat? Because it's still always going to be the data moat; I don't see that going away anytime soon, especially in industries that have highly specialized data. And then, what kind of technologies are you reliant upon? Do you absolutely need to use GPT-4, or can you diversify the LLM providers you're using? With that diversification comes the ability to switch over if prices go crazy, if a company goes bankrupt, et cetera, et cetera. And what kind of fundamental value are you bringing on top of this technology that isn't easily reproducible, of course.
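
As a concrete illustration of the provider diversification Patricia recommends, here is a minimal sketch of hiding the LLM behind a small interface so vendors can be swapped without touching application code. The class names and the local backend are illustrative assumptions:

```python
# Keep LLM vendors swappable behind one small interface.
from typing import Protocol

class Completion(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIBackend:
    """Thin wrapper over the OpenAI chat API (openai>=1.0 style client)."""
    def __init__(self, model: str = "gpt-4") -> None:
        from openai import OpenAI  # imported lazily so other backends need no key
        self.client, self.model = OpenAI(), model

    def complete(self, prompt: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model, messages=[{"role": "user", "content": prompt}]
        )
        return resp.choices[0].message.content

class LocalBackend:
    """Stand-in for a self-hosted open-source model; swap in if prices or terms change."""
    def complete(self, prompt: str) -> str:
        raise NotImplementedError("wire up vLLM, llama.cpp, or similar here")

def summarize_ticket(ticket: str, llm: Completion) -> str:
    # Application code depends only on the interface, never on a vendor.
    return llm.complete(f"Summarize this support ticket:\n{ticket}")
```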

Karthik Ramakrishnan [00:30:50]

Yeah, I think that's fantastic. You put on the investor hat very quickly, and I think you're right; there is a smattering of such companies. Let me ask you this as an opinion question, rather than stating my own: there's a slew of companies now built on top of OpenAI, like the ChatGPT agents. Would you consider them a bit weaker because they're not building their own core tech, but building on top of an existing platform? Is that what you're saying? I completely agree on the data moat, particularly when you're building for specificity. If you're going to train on Internet data, you're trying to solve a very different problem versus something like insurance claims management, where you're doing something very specific and data moats will matter. But technology moats, right? These days models are mostly open source, and with the generative AI models there are two camps: the closed-source models and the open-source models. Do you think building on top of one of those is a disadvantage?

Patricia Thaine [00:31:54]

So the nice thing about building on top of open-source models is that you can still make a lot of improvements to the accuracy of the models themselves; there's a lot of tweaking you can still do, and you can also make a lot of improvements in terms of deployment efficiency. Efficiency in itself can be a really good moat to have as well. You can't do that with closed-source models. But yeah, let's take your example of building a chat agent on top of GPT. I think it really depends on what kind of data you're capturing. Are you building something where your UI is exceptional? Do you have some sort of network effect coming into play that's going to give you that moat? In a lot of these cases, it might just be some folks who want to make a few hundred thousand dollars and then call it a day, and that is what it is. But I don't know if I'd call it a venture-scalable business without some other component to it.

Karthik Ramakrishnan [00:32:57]

Yeah, I guess the long-term sustainability of the business model may not be what we think it is. That's going to be interesting; I'm really looking forward to seeing how this plays out. I take the stance that, to your point, if you can figure out what your competitive advantage is, even if you're using ChatGPT, whether it's a data advantage, a UX or UI advantage, or a problem set that you have some unique insight into that no one else does, there are advantages to that. But then there's also going to be a lot of the me-too apps of the App Store world, of the mobile world.

Patricia Thaine [00:33:34]

Or even a services advantage. It doesn't have to be a SaaS business for it to be a profitable, very good business.

Karthik Ramakrishnan [00:33:43]

Yeah, and in that context, risk management for the enterprise is a whole industry that's waiting to blow up right now. It barely exists; there's a handful of companies, including us, playing in it. But I expect there are going to be hundreds of companies, just as in cybersecurity you have all of these companies that do cyber assurance, security services, and attestation certifications, and that help you red-team and lock down your systems, et cetera, et cetera. You're going to need the same, a cottage industry of companies doing that, in the AI space. And to your point, services are not a bad thing; if you do it well, there's a whole opportunity there too. Very cool. Rapid fire, since I think we are on time. It was a fascinating conversation, but rapid fire: your prediction for the next breakthrough in AI next year?

Patricia Thaine [00:34:37]

Oh, yikes. The next breakthrough or the next public awareness of the breakthrough?

Karthik Ramakrishnan [00:34:44]

Actually, let's do both. Thank you. That's two questions in one.

Patricia Thaine [00:34:47]

All right, the next breakthrough. That's a tough one. I think time series data is one that is really due for a breakthrough soon. And then public breakthrough, probably self-driving cars. I know people have been talking about it for a while, but I'm mildly optimistic that next year might be the self-driving car year. Let's check back in a year and see if I was totally wrong.

Karthik Ramakrishnan [00:35:21]

Yeah, I know Tesla's already around. It's not full self-driving, but there was a beta version of it, which I use, and it does well some of the time, like, most of the time, sorry. And some of the time it's a bit, you know, "take control."

Patricia Thaine [00:35:34]

Yeah, it should. Let me take a step back. Let's avoid the self-driving aspect of it, but let's emphasize routes that are pretty straightforward, from city A to city B, for example.

Karthik Ramakrishnan [00:35:49]

Point A to point B with a smooth traffic environment. But I think what you're saying is that it has to go beyond Tesla.

Patricia Thaine [00:36:00]

It has to go beyond Tesla. I don't think there's enough of a public awareness yet of what this can mean. I think Waabi in Toronto is doing a fantastic job at working on this problem.

Karthik Ramakrishnan [00:36:13]

Yeah. I mean, Raquel, who better than Raquel to work on this? Fantastic. Okay, that's two questions. I'll do a final third one. If you weren't doing private AI, what would you be doing?

Patricia Thaine [00:36:25]

Another company.

Karthik Ramakrishnan [00:36:27]

Okay, so an entrepreneur; that's what you'd always be. Excellent. Great. Patricia, thank you so much. Great conversation and lots of great advice. We talked prior to this about how we can share our experiences with this new generation, this explosion of AI startups coming through, and what advice we can offer, having been in this space and having done this for the last two years. So I think this was on point. Thank you so much. And look, we are big fans of what you do at Private AI. I think it's an essential backbone for the adoption of AI, for being able to build more solutions and products around it within the enterprise context. So more power to you; it's great work, and I'm looking forward to seeing all your success. Maybe we'll check back in in a year and check on that prediction too.

Patricia Thaine [00:37:13]

Yeah. Really big fan of what Armilla is doing and what you're doing as well. So mutual.