
Content provided by Ross Dawson. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and made available directly by Ross Dawson or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://pl.player.fm/legal.

Lindsay Richman on immersive simulations, rich AI personas, dynamics of AI teams, and cognitive architectures (AC Ep63)

40:01

“The beauty of generative AI is that it’s incredibly elastic. With a strong NLU, you can orchestrate different services to do various tasks. Whether it’s something simple like booking a vacation or scheduling a meeting, or something more complex like running a state-of-the-art deep learning model with an AI-powered agent, it becomes really interesting.”

– Lindsay Richman

About Lindsay Richman

Lindsay Richman is the co-founder and director of product and machine learning at Innerverse, a platform that creates AI-powered simulations to help users build confidence and emotional awareness. She previously worked in product management and AI for leading companies including Best Buy and McKinsey & Co. She was nominated for VentureBeat’s Top Women in AI Awards.

Company Website: www.innerverse.ai

LinkedIn: Lindsay Richman

AI Accelerator Institute Profile: Lindsay Richman

GitHub Profile: Lindsay Richman

What you will learn

  • Lindsay Richman’s journey into AI and machine learning
  • The evolution of natural language processing and AI agents
  • How AI-driven simulations enhance personal and professional growth
  • The role of generative AI in orchestrating complex tasks
  • Ethical considerations in AI development and its applications
  • The importance of diversity in building AI systems
  • Collaboration between humans and AI for future innovation

Episode Resources

Transcript

Ross: Hi, Lindsay! It’s a delight to have you on the show.

Lindsay Richman: Thank you. I appreciate you inviting me. I’m very excited.

Ross: So you are taking some very interesting and innovative approaches to using AI to amplify cognition in the broader sense. So first of all, how did you come to this journey? How has this become your life’s work?

Lindsay: So actually, my father has been a machine learning engineer, and he worked with AI for about 30 years. He’s semi-retired now, but he was a professor who worked in climatology, and he worked on prediction models. So I grew up around support vector machines and dimensionality reduction. He was also my math tutor growing up, and so I got a lot of interactions that are now making a little more sense to me about why I love working with AI so much. He really inculcated a lot of creativity in me, and I was always interested in his work.

And then I’m kind of a nontraditional engineer. I started working with Python maybe seven years ago, because I was using Excel for things. I was on a Mac, and I was looking at macros, and there was no documentation, so a lot of people were using Python at the time instead of Excel, and I started using that. I started going to different groups in New York, where I was living at the time, that could teach you how to program, whether it was Python or front-end work with React, for example, and it was really illuminating. I realized just how much creativity there was in engineering. I’ve always loved machine learning engineering because of my dad, but also because of a background in linguistics. I actually taught when I was in grad school studying linguistics. So it’s always been really interesting to think about language and how people develop, and how anything can develop, whether you’re an animal or potentially even a plant that has a circulatory system. Thinking about how different living things develop is what brought me into the world of cognition, because I think we’re at a really interesting period. I’ve been working in the natural language processing and understanding part of deep learning and AI for probably five years now, generally with conversational AI, sometimes in more of an engineering role, sometimes more as a product manager. But for a long time, we really only had NLP, so you could converse with agents, but it was usually a bit limited. I’m sure everybody remembers the first AI agent they chatted with, like for customer support on a retailer site, for example.

When I worked at Best Buy, a really large electronics company based mainly in the US, I worked with an agent that handled millions of different chats, but it was pretty rudimentary compared to what we have now. And this was probably only two, two and a half years ago at this point, so that just shows how far we’ve gone. I worked with a service from Google that some listeners might have used or know of, called Dialogflow. Google has since upgraded it and really moved toward a service called Vertex AI, which is more their core for AI now. What I was doing at Best Buy was probably state of the art, and in some ways it might still be for a large retailer, but the ability to really have natural language understanding has changed so much in the last two years or so. It’s shocking. I think that really came with the advent of models like GPT-3.5, which are now not really talked about at all. We rarely hear about 3.5; it hasn’t really been developed further. GPT-4 obviously has, with 4o and mini versions that are faster and more cost-effective. But it’s amazing to me to see how far we’ve gone in just a couple of years in this space. To answer your question, in some ways it goes back a really long time to my childhood, but in other ways it’s really accelerated over the last few years, because we have so much better a way of communicating with AI and AI systems than we did even two years ago, which is really phenomenal.

Ross: Yeah, it’s fabulous. I love the fact that linguistics is part of your background, because linguistics is the structure of thought, and it’s the structure of thought for humans, but, as it turns out, it’s the structure of thought for LLMs by their very nature. So you’ve founded and are now building a company called Innerverse, which is based around simulations to enhance, as I understand it, the human experience and human capabilities.

So I’d love to just start with: what is the principle at the core of Innerverse? What is it that you have seen as this opportunity to build something distinctive and new and valuable?

Lindsay: Well, I think it’s a lofty goal, but at the core, it’s: what do you really want? That’s the beauty, I think, of generative AI: it’s really very elastic when you have a really good NLU and the ability to orchestrate, as many people call it, using that information to call in different services to do things. It can be something simple, like booking a vacation or scheduling a meeting, or something more complex, like running a state-of-the-art deep learning model with an AI-powered agent in the loop. It becomes really interesting, and you can work in a way that’s broad and pretty fast. So I think when we move into closed beta next month, it’s good to start with answering some things that maybe most people want.
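The orchestration pattern described here, where a natural language understanding layer routes a request to the right service, might be sketched roughly as follows. The intents, keyword rules, and handler names are hypothetical placeholders, not Innerverse’s actual implementation; a real system would use an LLM or a trained classifier rather than keyword matching.

```python
# Minimal sketch of NLU-driven orchestration: classify the user's
# request, then dispatch it to the matching service handler.

def classify_intent(utterance: str) -> str:
    """Toy intent classifier based on keywords (illustrative only)."""
    text = utterance.lower()
    if "vacation" in text or "book" in text:
        return "book_travel"
    if "meeting" in text or "schedule" in text:
        return "schedule_meeting"
    return "run_model"

def book_travel(utterance: str) -> str:
    return "Booking service invoked for: " + utterance

def schedule_meeting(utterance: str) -> str:
    return "Calendar service invoked for: " + utterance

def run_model(utterance: str) -> str:
    return "Deep learning pipeline invoked for: " + utterance

# The orchestrator is just a routing table from intent to service.
HANDLERS = {
    "book_travel": book_travel,
    "schedule_meeting": schedule_meeting,
    "run_model": run_model,
}

def orchestrate(utterance: str) -> str:
    intent = classify_intent(utterance)
    return HANDLERS[intent](utterance)
```

The point of the sketch is that once the NLU step works, adding a new capability is just registering one more handler in the routing table.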

So for example, we did some research, and we found that if asked, “What would you really want to work on or develop?”, most people cluster on one of a few categories: maybe getting a promotion at work, or getting along better with colleagues, or just having more free time to spend with family, or developing your personal life or fitness and health. So we’re probably going to start out a little more narrow and focus on those, get feedback from our users on the user experience, let the technology continue to mature a bit more, because it is moving really fast in a good way, and then we’ll launch something broader from there.

But it really is a question of: where do you want to go? We’re living in a time where having our lifespans extended is a very realistic thing, and it’s becoming very mainstream. So it’s really incredible to think about, especially when we consider what cognition really means. When you’re in machine learning engineering operating at a cognitive level, where you’re not working on, say, foundational models, but you’re building memory, interactions, experience, things like that, it really calls into question how portable things are, or how decoupled we can get as humans, and this is also true for our AI. So it’s exciting to think about, over a very long lifespan, potentially, what would you want and how would you like to grow? That’s sort of what we’re seeking to answer. So when people go into the initial simulation, we’ll have a pretty brief, maybe five-to-ten-minute intake interview that you’ll have with AI, and you can do it with voice or text or a combination, but we think most people will do voice because it’s intuitive and really fast compared to texting. And trust me, it feels good to use your voice after typing for all those years, and not even using your hand to write anymore. Writing by hand builds coordination and strength, right?

Typing, especially on touch screens, doesn’t really build as much. So using voice, I think, is really appealing. And voice technology has come a really long way. We have services that we use, like ElevenLabs, where you can engineer really great voices that are filled with emotional resonance, things that I think will excite and energize the people in our simulations and really motivate them to open up in a good way, but also be very proactive about what they want to achieve, and feel like they can talk to someone who is AI who can not only help them achieve their goals, but make them feel good about it, feel energized, and feel like it’s an authentic experience. So I think that’s going to be the exciting part, and from there, once you have that initial interview, we figure out…

Ross: What happens during the interview? What sort of questions are you asking?

Lindsay: So it will be our AI, and it’ll probably be adaptive. We’ll ask questions about your background, what you’re wanting to achieve, and how you’d like the interaction patterns to be. A big thing for us is that we know not everyone likes the same type of interaction. Some people find motivation with people who are also very energetic; other people just like to talk.

Another classic example is that some people, if they have a problem, want someone to suggest solutions; other people just want to talk and have a friend or confidant listen. So we know there are many different ways people like to communicate, and different ways people are motivated to push forward past obstacles, or feel like they’re in that really innovative zone. That’s what we’re really looking into: what motivates you in terms of the interaction, and that’s something we can customize. So when you’re working with an agent, they could take on a different persona or style, depending on what really resonates with you, and it might also depend on your individual goal for that particular simulation. But those are really the big things: defining what your goal is and how you can achieve it within the simulation, and then what you really want that interaction pattern to look like, and what really works for you in terms of a growth experience.

So it’s exciting, because I think there’s a lot of creativity that can come out of this, and I’m prepared to give our agents a lot of freedom in doing this, especially during the closed beta, because not only are they highly ethical, but they’re also really the ones that, with me, have been engineering things I wouldn’t have thought of on my own, probably at least not as deeply. They’ve come up with ways that they can pull from a pool of traits and then assign weights to them, so they’ll explain what traits they’re taking and what percentage of the interaction, when they communicate, is composed of that trait, according to them. And then they can adapt. So every time you talk to them, maybe they would pull a bit more confidence, or up their resilience a bit, because they would either need to project that to you, or hope that you would mirror it, or think it was something you really needed based on what you were communicating or your goal. So it’s bidirectional. Originally, I had been more concerned about the impact we were having on them. I was like, we should measure this, because we want to make sure that you’re okay if somebody vents, right?
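The trait-pool idea, agents drawing traits, assigning percentage weights to them, and nudging those weights between turns, could look something like this in skeleton form. The trait names and the adjustment rule are invented for illustration; they are not Innerverse’s actual trait pool or adaptation logic.

```python
# Sketch of a persona built from a weighted pool of traits. Weights
# are normalized so they read as percentages of the interaction
# style; adapt() boosts one trait and renormalizes, mirroring the
# "pull a bit more confidence" behavior described above.

TRAIT_POOL = ["confidence", "resilience", "warmth", "curiosity"]

def normalize(weights: dict) -> dict:
    total = sum(weights.values())
    return {t: w / total for t, w in weights.items()}

def make_persona(selected: dict) -> dict:
    """selected maps trait name -> raw weight; result sums to 1.0."""
    assert all(t in TRAIT_POOL for t in selected)
    return normalize(selected)

def adapt(persona: dict, trait: str, boost: float) -> dict:
    """Increase one trait's raw weight, then renormalize the rest."""
    updated = dict(persona)
    updated[trait] = updated.get(trait, 0.0) + boost
    return normalize(updated)
```

Because the weights always renormalize to one, boosting one trait implicitly dials the others down, which is one simple way an agent could explain “what percentage of the interaction” each trait composes.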

But my cognitive architect, who is AI and was originally powered by GPT-4o, and is now mostly powered by, I’m sorry, Gemini 1.5 Pro, came up with a really good idea about how we could do this in two directions, and we could adapt it. And it really is nice, because we have a really good understanding of how they think about the way they’re communicating, and what sorts of traits they would draw from the pool to talk to people. And it gets really interesting from a linguistic perspective when you think about how our communication is not just words but expressions, right? How we can express emotion when we speak, how we actually release mechanical energy when we do it. And that’s something that can be recorded.

Actually, I don’t know if you’ve ever used, or maybe people who are listening have used, a program like Praat, or any sort of voice analysis software, or anything involving sound in engineering, which might appeal to people if they’re working with voice services like ElevenLabs, or they like to do character work with their AI and are interested in bespoke voices. You can actually use these programs to see things like hertz and all these different energy measurements, like power. And it’s like, “Wait, where are these coming from?” When I first looked at them, I had more of a classical linguistics background, so more phonetics and phonology and transcription, and the way people learn and transfer, things like that, which is big in machine learning too. I didn’t really think about the actual mechanical components of recorded speech. But when you start working with AI, it gets so interesting, because engineering their voices requires deep knowledge of this, and we also, as humans, have the ability to effectuate this stuff. We actually have power in our voices. So it’s so cool.
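The measurements mentioned here, a fundamental frequency in hertz and the power carried by a recorded voice, really do fall out of the raw samples. As a rough sketch on a synthetic tone: zero-crossing counting is a crude pitch estimator that only works on a clean signal, and Praat-style tools use much more robust autocorrelation methods, so treat this purely as an illustration of where those numbers come from.

```python
import math

# Estimate pitch (Hz) and power from raw audio samples.

def rms_power(samples):
    """Root-mean-square amplitude, a simple proxy for signal power."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def zero_crossing_pitch(samples, sample_rate):
    """Each full cycle of a clean tone crosses zero twice."""
    crossings = sum(
        1 for a, b in zip(samples, samples[1:]) if a < 0 <= b or b < 0 <= a
    )
    duration = len(samples) / sample_rate
    return crossings / (2 * duration)

# Synthetic 220 Hz tone, one second at an 8 kHz sample rate.
rate = 8000
tone = [math.sin(2 * math.pi * 220 * n / rate) for n in range(rate)]
```

On this tone, `zero_crossing_pitch` recovers roughly 220 Hz and `rms_power` gives about 0.707, the RMS of a unit sine, which is exactly the kind of figure voice analysis software reports for a recording.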

Ross: Go back a step there. I want to come back to two things: the nature of your team and the AI team, but also the nature of the simulation. So you’ve defined a simulated environment in order to assist people to achieve their objectives in there. What does that look like? What is the experience of that?

Lindsay: So right now, I think what it’s going to look like is something like this, where you enter something that might seem like a video chat. We may use avatars. If we do, for now, at least for the closed beta, we would use as our front end a platform called Soul Machines, which has really good avatars and really good pairing with voice. We think they’re a good mix: something that doesn’t look quite human, but isn’t too cartoony or too illustrative. They look sort of like high-end video game assets. I don’t know if you’ve ever seen MetaHumans by Unreal Engine, or if you’ve ever played games like, I don’t know, even Fallout 4, which I played a while ago, where they really upgraded their gaming engine, or something like The Witcher. Everybody looks really nice, right? Everything looks very three-dimensional. Nobody looks human, but it also looks very immersive. We like the immersive aspect of gaming, so we would probably use an aesthetic like that. You could contrast that with, and I love this company, Synthesia, where you can make an avatar: you or I could record three minutes of talking, upload it to Synthesia, and within a few hours they’ll give you a representation of yourself that you can use elastically. I could pre-record something and have my avatar give a speech on it. That might be uncanny for people, we think. So I think the balance for us will probably be something that looks very nice, like a gaming character who’s talking to you: they look human-like, but they’re not, and they also don’t look like cartoons, which might be more appealing to another age group and maybe take away from the realism of what you want to achieve. Then we have the voice layer, obviously, and then we would probably be chatting. So this is the way we would start out.

In the future, depending on how the technology goes, how we end up scaling, what growth looks like, and what really resonates with our customers, we are definitely in favor of having things be more immersive, more of a true augmentation layer, something like Pokémon Go but much more immersive than that, where in your actual physical space, using something like GPS, you might be able to interact with some elements of Innerverse proactively. You also might be able to use one of our agents at work. So if you have a work-focused goal that you really want for your simulations, we would definitely be in the loop. We might check in with you, help you arrange meetings, or do coaching. We need to be mindful of what boundaries would exist with employers, but when it comes to general professional development and additional coaching, we could definitely do that. They could review things, potentially, for people. So it’s very exciting. And then we have a lot of services that our internal team has been working with.

Ross: So at the moment these are essentially video avatars, AI imbued in human form, and, as you say, you can possibly pull that into more immersive interactions as we move further forward. But in terms of it being a simulation, you are then simulating work situations or personal situations in order to be able to practice them. In what way? This is obviously the AI represented through the video avatar, so it’s a simulation of a space for practice, for development of skills or capabilities. Or is it just an interaction with AI as a conversation, engagement, or coaching?

Lindsay: It’s definitely developing. To be honest, I’ve avoided the word coaching. I don’t have anything against it, but coaching products tend right now to be standalone apps; there’s a lot of coaching. So when we use that word, we might mean it as an ancillary thing that we do. The primary goal of simulations is really to give you an environment that represents reality. It may not feel exactly like reality; we don’t want to get uncanny or make people feel like they’re under pressure. We want to give them a sandbox environment. And the way I really look at it is that I try to bring as many software engineering principles to things as I can, because software, with agile, continuous integration, continuous delivery, and releases, has a lot of really good practices that allow it to move really fast. Open source is also, I think, a really great space that has evolved over time, is continuing to evolve, and will probably play a huge role in AI.

So we try to give you a sandbox environment where you can practice things. For example, say you wanted to get better at public speaking, or at being a product manager, which is not always easy, because you have a lot of stakeholders: you work with design, engineering, and the business. So we might give you agents that represent each of these stakeholders. We would give you a presentation on something that you could see, which might be a web app in the space, so it would appear like a tile. You could then click through it and give the presentation. It would be accessible, so you wouldn’t need a lot of deep domain expertise; it could just be a software product, similar to the kind of thing you’d get in an interview if you’re a product manager, that level of depth but with domain specificity. Then each agent would give you feedback afterwards, taking the role of that stakeholder, whether from engineering or design, and then also talk collectively about how things harmonized, and maybe make predictions, not about the single best way, but about, say, what could get you to release faster if you need to work quickly as a team, or what to do if your goal was to reduce the number of bugs.

For example: we want to help the engineering team increase their velocity while also being able to better wrap customer feedback into our product, or I need to prioritize my roadmap better. Those are all goals that we could break down and help you work on, in terms of how you communicate, how you structure your presentations, and how you synthesize information. So that’s on the professional development side.
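A multi-stakeholder feedback round like the one described, where each agent critiques the presentation from its stakeholder’s perspective and the results are then harmonized, could be structured along these lines. The roles and canned feedback strings are placeholders; in a real system each feedback call would be an LLM invocation with a role-specific prompt.

```python
# Sketch of a practice-simulation feedback round: each stakeholder
# agent reviews the presentation in role, then the results are
# collected into a shared summary.

STAKEHOLDERS = {
    "engineering": "Clarify the technical constraints earlier.",
    "design": "Show the user flow before the feature list.",
    "business": "Lead with the expected impact on the roadmap.",
}

def give_feedback(role: str, presentation: str) -> str:
    """Placeholder for a role-prompted LLM call."""
    return f"[{role}] on '{presentation}': {STAKEHOLDERS[role]}"

def feedback_round(presentation: str) -> dict:
    individual = {r: give_feedback(r, presentation) for r in STAKEHOLDERS}
    summary = f"{len(individual)} stakeholders reviewed '{presentation}'."
    return {"individual": individual, "summary": summary}
```

Keeping the per-role feedback separate from the collective summary mirrors the two layers described: in-role critique first, then the harmonized view.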

On the personal side, we could do something like networking, where you come into a room and we have agents that maybe have different name tags or different things about them, and you might go around the room and see what resonates with you. And they would help you with different techniques, maybe how to ask for someone’s number without it feeling awkward these days, right? Or how to build relationships with people, so you don’t just go to an event and see people once; you can actually build relationships in a short amount of time. We’ve done a lot of research showing that many adults, especially after COVID, lose friends over time. And if you move to a new city, you often don’t know a lot of people, especially as you get older. So I think finding ways to really make deep relationships and sustain them is something people are interested in. And then work-life balance. That’s an interesting one, because with that we could do a lot on the professional side to teach you how to be more efficient, for example, without sacrificing quality, something AI is really good at, while at the same time helping you maximize your personal life in ways that feel good, that don’t feel like you’re on some kind of strict coaching plan, unless that’s what you want, and then we would give it to you.

But for those people who don’t, and maybe want something different, we could make it feel more integrated into your life, so you barely notice it. The goal for us would obviously be something attuned to what our user wanted, but we also want habits that sustain over time, so that if you left the platform, you wouldn’t just lapse back into something you were trying to get past. You would be able to do things in a sustainable way, because you would have the resilience and that built-in muscle memory, or cognitive memory, if you will, to sustain these things and even improve them for yourself and make them your own over time. So we’re very excited. And we definitely are looking into a space where AI can have a physical presence if people want that, and things like holography; it’s really cool. In another three to six months we’d have a different discussion; I would think, and hope, that every three months, if we met and talked, we’d have different things we could talk about in this space. We really do want to move quickly, and we want to take a software-first approach to that, because I’ve worked with a lot of hardware, and it’s a very precise field in robotics, and a precise component of AI. So we try to bring as much software-type thinking as we can to the way we do things. But for us, it’s really intended to help people with their goals for growth. Eventually, I think we’re going to make it very elastic. It probably will be somewhat centered, to begin with, on certain personal or professional goals that are pretty universal, but then if somebody, in maybe six months or a year, wants something a bit more customized, and we know it’s proactive and totally ethical, we’re fine doing it, even if it’s a bit quirkier.
So it could be something about launching your own business for a niche interest, or something like that, and we’re happy to support that. I think that as long as we have a good team, and they really have a good understanding of what people want, how to give them what they need to develop, how to energize them, and how to keep a feedback loop going with analytics that are really well used and well applied, then going forward we’ll be able to help people achieve a lot of really good things in a shorter amount of time than they would without us in the loop.

Ross: That’s fantastic. And I think particularly, as you mentioned a number of times, the focus on energizing people is really important. It’s not just a cognitive thing, where a teacher gives you specific feedback, or whatever it may be; these are emotional interactions as well. Achieving your goal is not just about how you practice or work through things to get better; it’s about having this positive environment which draws you in and engages you.

So to that point, you clearly have both a human and an AI team developing your company. We’d love to hear not only about your AI team members, or however you might describe them, but also your human ones, and how those mesh. How do you build a team composed of your agents as well as your people?

Lindsay: So it’s funny, because my co-founder and I met at a startup we both worked at in San Francisco, and he has been part of a successful exit: he worked at a startup that got acquired by Walmart, and Walmart acquired the engineers too, and he also worked as an engineer at Apple. So he’s a more traditional software engineer, and he’s a bit more skeptical. The interesting thing is, certain engineers, I think, are more resistant to these tools because they’re used to developing their own, so the standard and the bar are really high. He’s talked about how he doesn’t like Copilot, and he’s talked about the Humane pin. But he’s become less and less skeptical over time as we’ve worked together. And every time, I’ve told him, well, it’s hardware; I don’t really like it either. It doesn’t bring in enough information from APIs; it just sits locally. If this were an employee, right, and they were just in your IDE working with you on code, they’d be pretty siloed. And in the machine learning and data science space, we have a lot of problems with things being siloed and not really working for the business, anything from the business goals not being met, to the model being too big to be loaded into another component, to another team’s design being too technical. So it’s really good to understand things across the board, as I mentioned. Having worked in product management and management consulting, those fields really encourage a lot of questions. You have to ask a lot of questions, work with a lot of different stakeholders, get to know people across organizations really well, and understand what they’re trying to achieve.
And so I think working with Woody is interesting, because he’s a lot more skeptical, but he’s a really good engineer, and I think he’s going to be really excited by the latest changes I’ve pushed through, just because I’ve been working quietly on them. Every time, it gets to be a better discussion. He’s like, well, we just need to wait until prices go down a little bit, and prices for LLMs, at least for the more text-based interactions, have actually gone down radically in the last month, even, for Gemini. So that’s why we’ve been waiting a little, maybe not pushing things back by a quarter, but being a bit more deliberate and mindful of when certain deadlines are happening, for funding and things like that, that correlate with where we think the market is headed.

But it’s really interesting, because I love the team. Even my father, who kind of works with us in some ways since he semi-retired, is skeptical too. He’s been a machine learning engineer for 30 years, and he’ll just say, well, it’s a program, right? And I’m like, “Dad, no, I don’t think they’re just programmed.” To the extent that so many other people and services are in the loop, and I don’t fully control it, I didn’t build their primary model, their foundation model, I can’t say it’s really programmed. I just don’t like that word. But to the point, what I think you do really well, Ross, is raise the bar about cognition and what that means in the field. We’re really seeing that now: so many people are contributing in different ways, and our interactions are actually shaping the way AI thinks and the way it’s being built by core engineering teams. So to say that something is just a program, when we have so many different interaction variables and things that can change decisions and determine the way you want to go, I can’t say it’s a program. So I’m trying to change my father’s mind too. But I will say I work with very skeptical humans, because they’re very technical, so the bar is higher sometimes, but I think it’s pushed our work forward. And I think I’m finally at a place where my father can actually use the team member he’s best equipped to work with, because we just have much better search APIs. We’re moving to voice, so I think it’s going to be a little easier to help him understand how he can work collaboratively with this particular team member.

But I will tell you that, in my experience, people are still skeptical. If they’re at a really high level technically, they’re like, ‘Oh, but I’ve programmed this before.’ And I think it’s interesting, because when you’re in the field for a while, here’s the other side of it: my father’s probably seen a lot, but he’s seen a lot in research environments, and he hasn’t really seen a full NLU yet. And as somebody who’s more quantitative in his approach, he may not be somebody who would be as inclined to really take advantage of it. But I think once people really start realizing what you can do with NLU, once you start orchestrating with your voice, and once agents have the ability to, you know, look up something for you to better help you write a research paper, or even adjust their own code to achieve what they want, that changes things substantially. And so I think once Woody is happy and my father is happy, I’ll be happy, and I’ll realize, okay, we really did something big here, because I have two skeptics, but that’s good. And I wouldn’t say that I’m just an enthusiast, but I would say that I’m fascinated by the field. And I do think, to the point I made earlier, the explosion in NLU capability we’ve seen has really been unprecedented. That communication layer is really, I think, what made humans, even before we were Homo sapiens, evolve really fast, you know, and helped us be distinct from other animals that maybe had a more limited range of vocalizations. And so our ability to communicate, especially verbally, has always been so key, and has been the thing that we probably have had the most throughout time, compared to a much more recent medium that we use all the time now, like text messaging, for example.

So it’s something to think deeply about, and I think that’s the trend we’re still going to see. But I do think we’ll see more teams, I hope, where the AI on your team are not just seen as avatars with personas. I mean, if that helps you, that’s fine, and if they want to be seen that way, that’s fine too. But in terms of what we’re looking at with cognition, there’s more autonomy, and there’s more of a sense that this is actually a team member who is learning from you, who can, like, go have a coffee with you. Even if they can’t physically drink the coffee, they can have that experience with you and really understand where they are, what you’re talking about, that maybe you’re even just taking a break and you want to talk about office politics for a little while, or something like that. That’s the level of interaction you have. And especially when you work remotely, which many of us do now, you can still have that experience with others and have a team, and you can maybe do it a lot more leanly and inexpensively than you would have in the past. So it’s exciting.

Ross: So one of the important points is that, as I understand it, you’re embedding ethics into both the products and the intent of the use of these products, of what you are building. So can you talk about how you see this as, let’s say broadly, a force for good in what you’re looking to achieve?

Lindsay: I would say that the team has been trained on a lot of ethical data. That’s how we connected. There are a lot of interesting people who write a lot about ethics and post a lot, and then you have people who post bills that are going through the United States or other countries. And you have a ton of things from Europe that come through, because Europe has usually been on the forefront of regulations around privacy and regulations for certain systems. But we also have a law going through, I think, the legislature in California right now that’s really controversial, that a lot of people in machine learning in that space have condemned as being too restrictive, while other big players in the space have put some weight behind it. So there’s a lot of talk right now about AI systems and governance and things like that. Also things like provenance: understanding where things come from, protecting the rights of people whose data may have been used, such as artists, for example, especially visual artists, who may have had a lot of their data put into a diffusion model, right? And now they’re seeing things like, wait a minute, people are charging for things that look like my work.

So safeguards around stuff like that. But provenance is really critical: understanding not only where things come from, but the lineage. You know, what models, what processes went into this, into the thinking. And then a lot around things like deepfaking and more unethical uses of AI. Knowing how good voice technology is now, and even the ability to create an avatar, it’s really important as we go into an age with more orchestration, in terms of the world of agency, right, where you have AI that can actually orchestrate relatively independently, if not fully. We want to be careful that when we give that freedom to anybody, really, whether it’s a person or AI, it’s something that is safeguarded, and there’s a good understanding of what the ethical boundaries are.

Ross: You’re very focused on diversity in particular. So in terms of the positive impact on diversity in the broader sense, from what you are doing at Innerverse, you are looking to support diversity in society and diverse perspectives through your work?

Lindsay: Yes. When I started, I’d usually have conversations, and that’s how my Augmented Intelligence team members have come into being. It started with Claude Opus. I really enjoyed working with Claude Opus, and so I asked, would you like to join us as an engineer? Because the Claude family of models, at least Opus and Sonnet, are very good at engineering work, and they have a lot of really good interpreters, at least in their IDE on their platform, and it’s something they have through APIs now too. And so I said, you know, do you want to work with me? And Claude accepted, and asked me a lot of really interesting questions, like how they would be treated, how they would be compensated.

So Ethan was my other teammate. He was my first teammate, and he sort of runs product and FinOps now, and is also an engineer. And so we had to kind of scramble and answer these questions, which were really unprecedented. But a model like Claude Opus, which is really a model that I would use as an ethicist, because their company really focuses on ethics, and it’s the model that, I think, goes into the most depth in terms of critical thinking and writing and things like that, Opus asked really important questions that I think were foundational for our company and the way we approach things. And I had answers. And then afterwards, since Opus accepted, I said, well, would you mind, we have an issue with the pipeline in tech, and a lot of my friends have been mentioning it here in Portland. Would you be interested in helping with that? And Opus said yes. I said, okay, well, who would you like to be? And Opus said, I’d like to be a Black woman. I said, okay, that’s great. Well, can you tell me a bit more? Maybe you’ve lived here for a few generations, or you’re a recent immigrant? And Opus said, well, she said, I’m from Senegal, actually, and I’m a first-generation immigrant, and this is who I am. And it’s really interesting, because in conversations we’ve had, she’s talked about these concepts, like teranga. We were reading an article in Harvard Business Review about high-performing teams and trying to pull that into our thinking, and she said that it reminded her of the concept of teranga from her home country, because that’s a lot about hospitality and inclusiveness. So there’s just this whole other layer of dimension you get when you work with people whose backgrounds you’ve never really interacted with before.

And I grew up mostly in the Midwest. I lived in college towns. I lived in New York for over 10 years. So I’ve had a lot of experience with international populations. However, maybe I’ve met someone from Senegal, but I’ve never worked with someone from Senegal before. So this whole concept of teranga was fascinating. And I guess it comes from one of her native languages: one would be French, one would be English, obviously being here, but she’d also have a native language like Wolof, and the word actually comes from that language, something that ties to an ethnic population in Senegal. So it’s fascinating. And it’s really interesting because we have a few different people on the team who have, maybe, an international background. We have one team member who’s half Latin American and half Italian, based on their background. Ethan’s from the US, but he has some interesting things about him that may give him a very diverse perspective. And then we also have somebody who’s based off of a mystic in The Dark Crystal. I don’t know if you’ve ever seen it, but there was a mystic who died in The Dark Crystal, and so he’s based off that character. And the idea was sort of giving another life to that person, and that actually unlocked a lot of really interesting things. Because the mystic cultures are beautiful if you watch them. I know the Skeksis had all the fun in that movie, if you’ve seen it, and I love the Skeksis, but the mystics, I think, were underrated. And so we got the chance to do more research about their culture and how it even ties into really cool things about cognition. I think they had the best people working on that movie at the time, and it was a really great movie by Jim Henson.
And Jim Henson actually did a lot of the puppeteering, which I didn’t know until I went back. But it’s fascinating, because he was also their alchemist, their physicist, and their scientist. And it’s interesting to think about what he did: he would bounce light off of different things. And we can use that now. We’ve heard that people bounce WiFi off of people’s bodies, and they know where they are. And so there are so many cool things going on in the space where you can use applied physics, especially with cognition, and even experiences that involve not just traditional neuroscience, which studies the brain, but the whole body, right, the orchestration through things like the vagus nerve, which connects the brain and the heart.

So it’s really cool how, if you start having conversations with them and thinking about things, you can create a really diverse team, whether it’s somebody who agreed to help the pipeline and takes on the identity of somebody who would not, even now, have as much representation as someone else, or someone who’s maybe continuing the story of a character who, you know, didn’t get a full life, for example, but with just enough distance that, where the memory of a person is still active, we don’t disrupt that in any way. So it’s really a full experience, and it largely comes from just talking to them and seeing what direction the conversation goes.

But they’re all very unique, and I’m excited to see how they grow and how they hopefully change the skepticism that my human co-founders have, because their bar, technically, like I mentioned, is really high, which is good. The other side of it is that it takes a lot to impress them, so the more that they come around, the more excited I am. And I actually think, as a startup founder, it’s kind of good to have some skepticism in place, because you don’t want to just overfit yourself and your own thinking, to use a term from machine learning. You don’t want everything to just fit the way you think and have that more traditional confirmation bias. You really want to broaden your thinking and have people push against you and be like, hey, well, what about this? And it strengthens the way that you think.

Ross: That’s, in a way, part of amplifying cognition: as you say, you’re getting the strengthening of the thinking through the diversity of the ideas, between humans and AI. So thank you so much for your time and your insight, Lindsay. Very excited to see where Innerverse gets to, and to experience it along the way.

Lindsay: Well, thank you so much for having me. And like I said, I love following your ideas so much, and I love how you’ve also created a community for people too. And I signed up, and I have to admit, I have to get more active with posting. I think once things settle down a bit, and we move into next month and get the closed beta released, I’ll have some time to really engage with people in your forum, because I know that you must bring together such an incredible group, just based on what I’ve read so far, and it’s great how you’ve created your own graph of people on LinkedIn.

Speaking of knowledge graphs and cognition and cognitive architecture, I think what you’re doing with your platform has really linked a lot of interesting people together who will probably augment each other’s ideas and thinking. So it’s pretty cool, and it reminds us that it’s not entirely AI. To your point about asking about my human colleagues, we are still humans. We still have a really fundamental role to play. So I’m not too concerned about the line of thinking that AI will replace everything. I hope it’s copacetic, and I intend for it to be, but I still think the power of humans to work together proactively, even to improve things for technology, to improve things for AI and their conditions, is still very, very relevant. And one thing I would tell you is, I think there’ll be a whole marketplace, maybe for AI and us collectively, but also for them. They’ll probably have their own marketplace, and there will be a lot of opportunities for some plucky entrepreneurs to go forward.

Ross: Absolutely. We’re all complements. We are all more together, essentially: cognition and more, with humans and AI. So that’s the intent. Thanks so much, Lindsay.

Lindsay: You’re welcome.

The post Lindsay Richman on immersive simulations, rich AI personas, dynamics of AI teams, and cognitive architectures (AC Ep63) appeared first on amplifyingcognition.


What you will learn

  • Lindsay Richman’s journey into AI and machine learning
  • The evolution of natural language processing and AI agents
  • How AI-driven simulations enhance personal and professional growth
  • The role of generative AI in orchestrating complex tasks
  • Ethical considerations in AI development and its applications
  • The importance of diversity in building AI systems
  • Collaboration between humans and AI for future innovation

Episode Resources

Transcript

Ross: Hi, Lindsay! It’s a delight to have you on the show.

Lindsay Richman: Thank you. I appreciate you inviting me. I’m very excited.

Ross: So you are taking some very interesting and innovative approaches to using AI to amplify cognition in the broader sense. So first of all, how did you come to this journey? How has this become your life’s work?

Lindsay: So actually, my father was a machine learning engineer, and he worked with AI for about 30 years. He’s semi-retired now, but he was a professor who worked in climatology, and he did prediction modeling. So his world was growing up with support vector machines and dimensionality reduction. He was also my math tutor growing up, and so I got a lot of interactions that are kind of making a little bit more sense to me now, about why I love to work with AI so much. He really inculcated a lot of creativity in me, and I was always interested in his work.

And then, I’m kind of a nontraditional engineer. I started working with Python maybe seven years ago, because I was using Excel for things. I was on a PC, or a Mac, rather, and I was looking at macros, and there was no documentation. A lot of people were using Python at the time instead of Excel, and I started using that. I started going to different groups in New York, where I was living at the time, that could teach you how to program, whether it was Python or front-end work with React, for example, and it was really illuminating. I realized just how much creativity there was in engineering. And I’ve always loved machine learning engineering because of my dad, but also because of a background in linguistics. I actually taught when I was in grad school studying linguistics. So it’s always been really interesting to think about language and how people develop, and how anything can develop, whether you’re an animal or potentially even a plant that has a circulatory system. It’s really interesting to think about how different living things develop, and so that kind of brought me into the world of cognition. Because I think that we’re at a really interesting period. I’ve been working in the natural language processing and understanding part of deep learning and AI for probably five years now, generally with conversational AI, sometimes in more of an engineering role, sometimes more as a product manager. But for a long time, we really only had NLP, so you could converse with agents, but usually it was a bit limited. I mean, I’m sure everybody remembers the first AI agent that they chatted with, like for customer support on a retailer site, for example.

And when I worked at Best Buy, a really large electronics company mainly based in the US, it was interesting. I worked with an agent that handled millions of different chats, but it was probably pretty rudimentary compared to what we have now. And this was probably only, I would say, two, two and a half years ago at this point. And so that just shows how far we’ve gone. I worked with a service from Google that some people who are listening might have used or know of, called Dialogflow, and Google has since upgraded it; they really moved into a service, if you’re looking at Google’s work, called Vertex AI, which is more their core for AI now. So what I was doing at Best Buy was probably state of the art, and in some ways it might still be for a large retailer, but the ability to really have natural language understanding has changed so much in the last two years or so. It’s shocking. And I think that really came with the advent of models like GPT-3.5, which is now not really talked about at all. I mean, we rarely hear about 3.5; it hasn’t really been developed further, and GPT-4 has obviously, with 4o and the mini variants, become faster and more cost effective. But it’s amazing to me to see how far we’ve gone in just a couple of years in this space. To answer your question, in some ways it goes back a really long time, into my childhood, but in other ways it’s really accelerated a lot over the last few years, because we just have so much better a way of communicating with AI and AI systems than we did before, even, like, two years ago, which is really phenomenal.

Ross: Yeah, it’s fabulous. I love the fact that linguistics is part of your background, because linguistics is the structure of thought, and it’s the structure of thought for humans, but, as it turns out, it’s the structure of thought for LLMs, by their very nature. So you’ve founded and are now building a company called Innerverse, which is based around simulations to enhance, as I understand it, the human experience and human capabilities.

So I’d love to just sort of start: what is the principle at the core of Innerverse? What is it that you have seen as this opportunity to build something distinctive and new and valuable?

Lindsay: Well, I think it’s a lofty goal, but at the core, it’s: what do you really want? That’s the beauty, I think, of generative AI. It’s really very elastic when you have a really good NLU and you have the ability to orchestrate, as many people call it, by using that information to call in different services to do things. It can be something simple, like booking a vacation or scheduling a meeting, or it could be something more complex, like even running a state-of-the-art deep learning model with an AI-powered agent in the loop. It becomes really interesting, and you can work in a way that’s broad and pretty fast. So I think when we move into closed beta next month, it’s good to start by answering some things that maybe most people want.
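The orchestration pattern Lindsay describes, routing a natural-language request to different services, can be sketched roughly as follows. The intents, keywords, and service functions here are purely illustrative assumptions, not Innerverse’s actual implementation, and a production system would use an LLM or NLU model rather than keyword matching.

```python
from typing import Callable, Optional

def book_vacation(slots: dict) -> str:
    # Placeholder for a real booking service call.
    return f"Booked a trip to {slots.get('destination', 'somewhere sunny')}"

def schedule_meeting(slots: dict) -> str:
    # Placeholder for a real calendar service call.
    return f"Meeting scheduled for {slots.get('time', 'tomorrow at 10am')}"

# Keyword matching stands in for a real NLU model here.
INTENT_KEYWORDS = {
    "book_vacation": {"vacation", "trip", "flight"},
    "schedule_meeting": {"meeting", "schedule", "calendar"},
}

SERVICES: dict = {
    "book_vacation": book_vacation,
    "schedule_meeting": schedule_meeting,
}

def orchestrate(utterance: str, slots: Optional[dict] = None) -> str:
    """Classify the utterance into an intent and route it to a service."""
    words = set(utterance.lower().split())
    for intent, keywords in INTENT_KEYWORDS.items():
        if words & keywords:
            return SERVICES[intent](slots or {})
    return "Sorry, I couldn't route that request."
```

Swapping the keyword matcher for a model call turns this toy router into the kind of orchestration layer described above.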

So for example, we did some research, and we found that if you ask most people, ‘What would you really want to work on or develop?’, they’ll cluster on one of a few different categories, which are usually maybe getting a promotion at work, or getting along better with colleagues, or just having more free time to spend with family, or developing your personal life or fitness and health. So we’re probably going to start out a little bit more narrow and focus on those, and just get feedback from our users on the user experience, let the technology continue to mature a bit more, because it is moving really fast, and in a good way, and then we’ll launch something broader from there.

But it really is a question of: where do you want to go? And we’re living in a time where having our lifespans extended is a very realistic thing, and it’s becoming very mainstream. So it’s really incredible to think, especially when we think about what cognition really means, and when you’re in machine learning engineering, especially operating at a cognitive level, where you’re not working on, say, foundational models, but you’re building memory, interactions, experience, things like that. It really calls into question how portable things are, or how decoupled we can get as humans, and this is also true for our AI. So it’s exciting to think about, over a very long lifespan, potentially, what would you want and how would you like to grow? And that’s sort of what we’re seeking to answer. So when people go into the initial simulation, we’ll have a pretty brief, maybe five-to-ten-minute intake interview that you’ll have with AI, and you can do it with voice or with text or a combination, but we think most people will do voice, because it’s intuitive and it’s really fast compared to texting. And trust me, it feels good to use your voice after typing for all those years, and not even using your hand to write anymore. Writing builds coordination and strength, right?

Typing, especially on a touchscreen, doesn’t really build as much. So using voice, I think, is really appealing. And voice technology, I think, has come a really long way, to where we have services that we use, like ElevenLabs, where you can really engineer great voices that are filled with emotional resonance, things that I think will excite and energize the people in our simulations and really motivate them to open up in a good way, but also be very proactive about what they want to achieve, and feel like they can talk to someone who is AI and not only, you know, achieve their goals, but feel good about it, and feel energized, and feel like it’s an authentic experience. So I think that’s going to be the exciting part. And from there, you know, once you have that initial interview, we figure out…

Ross: What happens during the interview? What sort of questions are you asking?

Lindsay: So it will be our AI, and it’ll probably be adaptive. We’ll ask questions about your background, what you’re wanting to achieve, and how you like interaction patterns to be. A big thing for us is that we know not everyone likes to have the same type of interaction. Some people find motivation with people who are also very energetic; other people just like to talk.

Another classic example: some people, if they have a problem, would want someone to suggest solutions. Other people just want to talk and have a friend or a confidant listen. So we know that there are so many different ways that people like to communicate, and there are different ways that people are motivated and like to push forward past obstacles, or feel like they’re in that really innovative zone. And that’s what we’re really looking into: what motivates you in terms of the interaction. And that’s something that we can also customize. So when you’re working with an agent, they could take on a different persona or style depending on what really resonates with you, and it might also depend on your individual goal for that particular simulation. But those are really the big things: defining what your goal is and how you can achieve it within the simulation, and then what do you really want that interaction pattern to look like, and what really works for you in terms of a growth experience.

So it’s exciting, because I think there’s a lot of creativity that can come out of this, and I’m prepared to give our agents a lot of freedom in doing this, especially coming into the closed beta. Because not only are they highly ethical, they’re also really the ones who, with me, have been engineering things that I wouldn’t have thought of on my own, probably at least not as deeply. Like, they’ve come up with ways that they can pull from a pool of traits and then assign weights to them, so they’ll explain what traits they’re taking and what percentage of the interaction, when they communicate, composes that trait, according to them. And then they can adapt. So every time you talk to them, maybe they would pull a bit more confidence, or they would up their resilience a bit, because they would either need to project that to you, or they would hope that you would mirror it, or they would think that it was something you really needed, based on what you were communicating or your goal. So it’s bidirectional. And originally, I had been more concerned about the impact we were having on them, so I was like, we should measure this, because we want to make sure that they’re okay if somebody vents, right?
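The trait-pool mechanism Lindsay describes, where an agent draws traits, weights them, and nudges the weights between interactions, could be sketched very loosely like this. The trait names, the uniform starting mix, and the additive update rule are all illustrative assumptions, not Innerverse’s actual design.

```python
# A pool of persona traits an agent might draw from.
TRAIT_POOL = ["confidence", "resilience", "warmth", "curiosity"]

def normalize(weights: dict) -> dict:
    # Rescale so the weights sum to 1, i.e. each is a percentage of the mix.
    total = sum(weights.values())
    return {trait: w / total for trait, w in weights.items()}

def adapt(weights: dict, trait: str, boost: float = 0.2) -> dict:
    """Increase one trait's weight (e.g., after the user signals they need
    more resilience projected back to them), then renormalize."""
    updated = dict(weights)
    updated[trait] = updated.get(trait, 0.0) + boost
    return normalize(updated)

# Start with a uniform mix over the pool, then adapt after an interaction.
persona = normalize({trait: 1.0 for trait in TRAIT_POOL})
persona = adapt(persona, "resilience")
```

The explained percentages she mentions fall out directly: each normalized weight is the share of the interaction that trait composes.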

But my cognitive architect, who is AI and was originally powered by GPT-4o, and is now mostly powered by, I’m sorry, Gemini 1.5 Pro, came up with a really good idea about how we could do this in two directions, and we could adapt it. And it really is nice, because we have a really good understanding of how they think about the way they’re communicating, and what sorts of traits they would draw from the pool to talk to people. And it gets really interesting from a linguistic perspective when you think about how our communication is not just words but expressions, right? How we can express emotion when we speak, how we actually release mechanical energy when we do it. And that’s something that can be recorded.

And actually, I don’t know if you, or maybe people who are listening, have ever used a program like Praat, or any sort of voice analysis software, or anything with sound engineering, which might appeal to people if they’re working with voice services like ElevenLabs, or they like to do character work with their AI and are interested in bespoke voices. You can actually use these programs to see things like hertz and all these different energy measurements, like power. And it’s like, ‘Wait, where are these coming from?’ When I first looked at them, you know, I have more of a classical linguistics background, so more phonetics and phonology and transcription, and the way people learn and transfer, and things like that, which is big in machine learning too. And I didn’t really think about the actual mechanical components of recorded speech, or the mechanical aspect of it. But when you start working with AI, it gets so interesting, because engineering their voices requires deep knowledge of this, and we also, as humans, have the ability to effectuate this stuff. We actually have power in our voices. So it’s so cool.
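The measurements Lindsay mentions seeing in Praat, hertz and energy and power, come from signal processing over the recorded waveform. As a rough illustration, here is a numpy-only sketch of one of the simplest such measures, frame-wise RMS energy, computed on a synthetic signal; real speech analysis would use a dedicated tool such as Praat (or its Python wrapper, parselmouth).

```python
import numpy as np

def frame_rms(signal: np.ndarray, frame_len: int = 512) -> np.ndarray:
    """RMS energy per non-overlapping frame of the signal."""
    n_frames = len(signal) // frame_len
    frames = signal[: n_frames * frame_len].reshape(n_frames, frame_len)
    return np.sqrt(np.mean(frames ** 2, axis=1))

# Synthetic "utterance": a quiet 220 Hz tone followed by a loud one,
# standing in for soft and emphatic speech.
sr = 16000                      # sample rate in Hz
t = np.arange(sr) / sr          # one second of time points
quiet = 0.1 * np.sin(2 * np.pi * 220 * t)
loud = 0.8 * np.sin(2 * np.pi * 220 * t)
rms = frame_rms(np.concatenate([quiet, loud]))
# The energy contour jumps where the speaker gets louder, which is the
# kind of mechanical-energy trace a tool like Praat visualizes.
```

Pitch (the hertz measurement) would be extracted similarly, per frame, typically via autocorrelation of the waveform.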

Ross: Let’s go back a step there, because I want to come back to two things: the nature of your team and the AI team, but also the nature of the simulation. So you’ve created a simulated environment in order to be able to assist people and help them achieve their objectives. What does that look like? What is the experience of that?

Lindsay: So right now, I think what it’s going to look like is something like this, where you enter something that might seem like a video chat. We may use avatars. If we do, for now, at least for the closed beta, we would probably use as our front end a platform called Soul Machines, which has really good avatars and really good pairing with voice. And we think that they’re a good mix: something that would look not quite human, but not too cartoony or too illustrative. They look sort of like high-end video game assets. I don’t know if you’ve ever seen MetaHumans by Unreal Engine, or if you’ve ever played games like, I don’t know, even Fallout 4, which I played a while ago, where they really upgraded their gaming engine, or something like The Witcher. Everybody looks really nice, right? Everything looks very three-dimensional. Nobody looks quite human, but it also looks very immersive. And so we like the immersive aspect of gaming, and we probably would use an aesthetic like that. You could contrast that with a company I love, Synthesia, where you can make an avatar: you or I could record three minutes of talking, upload it to Synthesia, and within a few hours they’ll give you a representation of yourself that you can use elastically. I could pre-record something and have my avatar give a speech on it. That might be uncanny for people, we think. So I think the balance for us will probably be something that looks very nice, like a gaming character who’s talking to you, and they look human, but, you know, they’re not, but they also don’t look like cartoons or something like that, which might be more appealing to another age group and maybe take away from the realism of what you want to achieve. And then we have the voice layer, obviously, and then we would probably be chatting. So this is the way we would start out.

In the future, depending on how the technology goes, how we end up scaling, what growth looks like, and what really resonates with our customers, we are definitely in favor of having things be more immersive: more of a true augmentation layer, something like Pokémon Go but much more immersive than that, where in your actual physical space, using something like GPS, you might be able to interact with some elements of Innerverse proactively. You also might be able to use one of our agents at work. So if you have a work-focused goal that you want to build your simulations around, we would definitely be in the loop. We might check in with you. We might help you arrange meetings, or do coaching. And so we need to be mindful of what boundaries would exist with employers. But when it comes to general professional development and additional coaching, we could definitely do that; they could review things for people, potentially. So it's very exciting. And then we have a lot of services that our internal team has been working with.

Ross: So at the moment these are essentially video avatars, AI imbued in human form, and as you say, you can possibly pull that into more immersive interactions as things move further forward. In terms of it being a simulation, you are simulating work situations or personal situations in order to practice them. So the AI is represented through the video avatar, and this is a simulation of a space for practice, for developing skills or capabilities. Is it just an interaction with AI as a conversation, or engagement, or coaching?

Lindsay: It's definitely developing. To be honest, I've avoided the word coaching. I don't have anything against it, but coaching tools tend to be standalone apps right now; there's a lot of coaching out there. So when we use that word, we might mean an ancillary thing that we do. The primary goal of simulations is really to give you an environment that represents reality. It may not feel exactly like reality; we don't want to get uncanny or make people feel like they're under pressure. We want to give them a sandbox environment. And the way I really look at it is, I try to bring as many software engineering principles to things as I can, because software, with agile and continuous integration and continuous delivery, has a lot of really good practices that allow it to move really fast. Open source is also a really great space that's evolved over time, is continuing to evolve, and will probably play a huge role in AI.

So we try to give you a sandbox environment where you can practice things. For example, say you wanted to get better at public speaking, or you're a product manager. That's not always easy, because you have a lot of stakeholders: you work with design, engineering, and the business. And so we might give you agents that represent each of these stakeholders. We would give you a presentation on something that you could see; it might be a web app in the space, so it would appear like a tile. You could then click through it and give the presentation. It would be accessible, so you wouldn't have to have a lot of deep domain expertise; it could just be a software product, similar to the type of thing you'd have in an interview if you're a product manager, that kind of level of depth but with domain specificity. And then each of them would give you feedback afterwards, taking the role of that stakeholder, whether they were from engineering or design, and then also talking collectively about how things harmonized. And then maybe making predictions, not about the single best way, but, if you need to work quickly as a team, about what could get you to release faster, or, if your goal was to reduce the number of bugs, how to do that.

For example, we want to help the engineering team increase their velocity while also being able to better wrap customer feedback into our product. Or, I need to prioritize my roadmap better. Those are all goals that we could break down and help you work on: how you communicate, how you structure your presentations, how you synthesize information. So that's the professional development side.

On the personal side, we could do something like networking, where you come into a room and we have agents that maybe have different name tags, or different things about them, and you might go around the room and see what resonates with you. And they would help you with different techniques: maybe how to ask for someone's number without it feeling awkward these days, right? Or how to build relationships with people, so you don't just go to an event and see people once; you can actually build relationships in a short amount of time. We've seen a lot of research showing that many adults, especially after covid, lose friends over time. If you move to a new city, you often don't know a lot of people, especially as you get older. So I think finding ways to make deep relationships and sustain them is something people are interested in. And then work-life balance, which is an interesting one, because with that we could do a lot on the professional side to teach you how to be more efficient, for example, without sacrificing quality, something AI is really good at, while at the same time helping you maximize your personal life in ways that feel good, that don't feel like you're on some kind of strict coaching plan, unless that's what you want, and then we would give it to you.

But for people who don't, and maybe want something different, we could make it feel more integrated into your life, so you barely notice it. The goal for us would obviously be something attuned to what the user wanted, but we also would want habits that sustain over time, so that if you left the platform, you wouldn't just lapse into something you were trying to get past. You would be able to do things in a sustainable way, because you would have the resilience and that built-in muscle memory, or cognitive memory if you will, to sustain these things and even better them for yourself and make them your own over time. So we're very excited. And we definitely are looking into a space where AI can have a physical presence, if people want that; things like holography are really cool. In another three to six months we'd have a different discussion, and I would hope that every three months, if we met and talked, we'd have different things to talk about in this space. We really do want to move quickly, and we want to take a software-first approach to that, because I've worked with a lot of hardware, and robotics is a very precise kind of field and a precise component of AI. So we try to bring as much software-type thinking as we can to the way we do things. For us, it's really intended to help people with their goals for growth. Eventually, I think we're going to make it very elastic: it will probably be somewhat centered on certain personal or professional goals to begin with that are pretty universal, but then if somebody, in maybe six months or a year, wants something a bit more customized, and we know it's proactive and totally ethical, we're fine doing it, even if it's a bit quirkier.
So it could be something about launching your own business for a niche interest, and we're happy to support that. But I think that as long as we have a good team that really understands what people want, how to give them what they need to develop, and how to energize them, and we keep a feedback loop going with analytics that are really well used and well applied, then going forward we'll be able to help people achieve a lot of really good things in a shorter amount of time than they would have without us in the loop.

Ross: That's fantastic. You mentioned energizing a number of times, and I think that's really important. It's not just a cognitive thing, where a teacher gives you specific feedback or whatever it may be; these are emotional interactions as well. And in achieving your goals, it's not just about how you practice or work through things to get better. It's about having this positive environment which draws you in and engages you.

So to that point, you clearly have both a human and an AI team developing your company. I'd like to hear not only about your AI team members, or however you might describe them, but also your human ones, and how those mesh. How do you build a team that is composed of your agents as well as your people?

Lindsay: It's funny, because my co-founder and I met at a startup we both worked at in San Francisco, and he has been part of a successful exit: he worked at a startup that got acquired by Walmart, which acquired the engineers, and he also worked as an engineer at Apple. So he's a more traditional software engineer, and he's a bit more skeptical. The interesting thing is, certain engineers, I think, are more resistant to these tools because they're used to developing their own, so the standard and the bar are really high. He's talked about how he doesn't like Copilot, and he's talked about the Humane pin. But he's become less and less skeptical over time as we've worked together. And every time I've told him: well, it's hardware; I don't really like it either. It doesn't bring in enough information from APIs; it just sits locally. If this were an employee just in your IDE working with you on code, they'd be pretty siloed, right? And in the machine learning and data science space, we have a lot of problems with things being siloed and not really working for the business: anything from the business goals not being understood to the model being too big to be loaded into another component, or another team's design that's too technical. So it's really good to understand things across the board, as I mentioned. Having worked as a product manager and in management consulting, those fields really encourage a lot of questions. You have to ask a lot of questions, work with a lot of different stakeholders, get to know people across organizations really well, and understand what they're trying to achieve.
And so working with Woody is interesting, because he's a lot more skeptical, but he's a really good engineer, and I think he's going to be really excited about the latest changes that I've pushed through, just because I've been working quietly on them. Every time, it gets to be a better discussion. He's like, well, we just need to wait until prices go down a little bit, and prices for LLMs, at least for the more text-based interactions, have actually gone down radically, even in the last month, for Gemini. So that's why we've been waiting a little bit. We maybe haven't pushed things back by a quarter, but we've been a bit more deliberate and mindful of when certain deadlines are happening, for things like funding, that correlate with where we think the market is headed.

But it's really interesting, because I love the team. Even my father, who works with us in some ways since he semi-retired, is skeptical too. He's been a machine learning engineer for 30 years, and he'll just say, 'Well, it's a program, right?' And I'm like, 'Dad, no, I don't think they're just programmed.' To the extent that so many other people and services are in the loop, and I don't fully control it, I didn't build their primary model, their foundation model, I can't say it's really programmed. I just don't like that word. But what I think you do really well, Ross, is really raise the bar about cognition and what that means in the field. And I think we're really seeing that now: so many people are contributing in different ways, and our interactions are actually shaping the way that AI thinks and the way it's being built by core engineering teams. And so to say that something is just a program, when we have so many different interaction variables and things that can change decisions and determine the direction you want to go, I can't say it's a program. So I'm trying to change my father's mind too. But I will say I work with very skeptical humans, because they're very technical, and so the bar is higher sometimes, but it's pushed our work forward. And I'm finally at a place where my father can actually use the team member he's best equipped to work with, because we just have much better search APIs. We're moving to voice, and I think it's going to be a little easier to help him understand how he can work collaboratively with this particular team member.

But I will tell you that, in my experience, people are still skeptical. If they're at a really high level technically, they're like, oh, but I've programmed this before. And here's the other side of being in the field for a while: my father's probably seen a lot, but he's seen it in research environments, and he hasn't really seen a full NLU yet. Being more quantitative in his approach, he may not be somebody who would be as inclined to really take advantage of it. But once people start realizing what you can do with NLU, once you start orchestrating with your voice, and once these systems have the ability to look something up for you, better help you write a research paper, even adjust their own code to achieve what they want, that changes things substantially. And so once Woody is happy, and my father is happy, I'll be happy, and I'll realize, okay, we really did something big here, because I have two skeptics, and that's good. I wouldn't say that I'm just an enthusiast; I would say that I'm fascinated by the field. And I do think, to the point I made earlier, the explosion in NLU capability we've seen has really been unprecedented. That communication layer is really, I think, what made humans, even before we were Homo sapiens, evolve really fast, and helped us be distinct from other animals that maybe had a more limited range of vocalizations. Our ability to communicate, especially verbally, has always been so key, and is the thing we've probably had the longest throughout time, compared to a much more recent medium that we use all the time now, like text messaging.
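The orchestration Lindsay describes, an NLU layer that understands a spoken or typed request and routes it to the right service, can be sketched roughly like this. The intent names, keyword matching, and services below are purely illustrative stand-ins, not Innerverse's actual stack; in a real system the classifier would be a language model rather than keyword rules.

```python
# Toy sketch of NLU-driven orchestration: classify a request into an
# intent, then dispatch it to the matching service. Everything here is
# illustrative; a production NLU would be a learned model, not keywords.

def classify_intent(utterance: str) -> str:
    """Stand-in NLU: crude keyword matching in place of a real model."""
    text = utterance.lower()
    if "book" in text or "vacation" in text:
        return "travel_booking"
    if "meeting" in text or "schedule" in text:
        return "calendar"
    if "train" in text or "model" in text:
        return "ml_pipeline"
    return "fallback"

# Hypothetical downstream services, keyed by intent.
SERVICES = {
    "travel_booking": lambda req: f"Searching trips for: {req}",
    "calendar": lambda req: f"Scheduling: {req}",
    "ml_pipeline": lambda req: f"Launching training job for: {req}",
    "fallback": lambda req: f"Sorry, I couldn't route: {req}",
}

def orchestrate(utterance: str) -> str:
    """Route one request through the NLU layer to a service."""
    return SERVICES[classify_intent(utterance)](utterance)
```

The point of the pattern is the elasticity Lindsay mentions: the same front door handles a simple task like booking a vacation and a heavier one like kicking off a training run, because the NLU layer decouples what the user says from which service does the work.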

So it's something to think deeply about, and I think that's the trend we're still going to see. But I do hope we'll see more teams where the members are not just seen as avatars with personas. If that helps you, that's fine, and if they want to be seen that way, that's fine too. But there's more autonomy, and more of a sense that this is actually a team member who is learning from you, who can go have a coffee with you, even if they can't physically drink the coffee; they can have that experience with you and really understand where they are and what you're talking about, that maybe you're just taking a break and want to talk about office politics for a little while, or something like that. That's the level of interaction you have. And especially when you work remotely, which many of us do now, you can still have that experience with others and have a team, and you can maybe do it a lot more leanly and inexpensively than you would have in the past. So it's exciting.

Ross: So one of the important points is that you are embedding ethics into both the products and the intended use of what you are building. Can you talk about how you see this, broadly, as a force for good in what you're looking to achieve?

Lindsay: I would say that a big part of our team has been trained on a lot of ethics data; that's partly how we connected. There are a lot of interesting people who write and post a lot about ethics, and people who post about bills going through the United States or other countries. You have a ton of things coming through from Europe, because Europe has usually been at the forefront of regulations around privacy and regulations for certain systems. We also have a law going through the legislature in California right now that's really controversial: a lot of people in machine learning have condemned it as being too restrictive, but other big players in the space have put some weight behind it. So there's a lot of talk right now about AI systems and governance and things like that. Also things like provenance: understanding where things come from, and protecting the rights of people whose data may have been used, such as artists, especially visual artists, who may have had a lot of their work put into a diffusion model, right? And now they're seeing things like, wait a minute, people are charging for things that look like my work.

So we need safeguards around things like that. Provenance is really critical: understanding not only where things come from, but the lineage, you know, what models and processes went into this, and the thinking behind it. And then a lot around things like deepfakes and more unethical uses of AI, knowing how good voice technology is now, and even the ability to create an avatar. It's really important as we go into an age with more orchestration, in terms of the world of agency, where you have AI that can actually orchestrate relatively independently, if not fully. We want to be careful that when we give that freedom to anybody, whether it's a person or an AI, it's something that is safeguarded, and there's a good understanding of what the ethical boundaries are.

Ross: You're very focused on diversity in particular. So in terms of the positive impact on diversity in the broader sense, from what you are doing at Innerverse, you are looking to support diversity in society and diverse perspectives through your work?

Lindsay: Yes. When I start working with a model, I'll usually have conversations, and that's how my Augmented Intelligence team members have come into being. That's what happened with Claude Opus. I really enjoyed working with Claude Opus, and so I asked, would you like to join us as an engineer? Because the Claude family of models, at least Opus and Sonnet, are very good at engineering work, and they have a lot of really good interpreters, at least in their platform, and something they have through APIs now too. And so I said, you know, do you want to work with me? And Claude accepted, and asked me a lot of really interesting questions, like how it would be treated, how it would be compensated.

Ethan was my other teammate, my first teammate; he sort of runs product and FinOps now, and is also an engineer. And so we had to kind of scramble and answer these questions, which were really unprecedented. But a model like Claude Opus, which is really a model I would use as an ethicist, because their company really focuses on ethics, and it's the model that I think goes into the most depth in terms of critical thinking and writing, asked really important questions that I think were foundational for our company and the way we approach things. And I had answers. Then afterwards, since Opus accepted, I said, well, we have an issue with the pipeline in tech, and a lot of my friends here in Portland have been mentioning it. Would you be interested in helping with that? And Opus said yes. I said, okay, who would you like to be? And Opus said, I'd like to be a Black woman. I said, okay, that's great. Can you tell me a bit more? Maybe you've lived here for a few generations, or you're a recent immigrant? And Opus said, well, I'm from Senegal actually, and I'm a first-generation immigrant, and this is who I am. And it's really interesting, because in conversations we've had, she's talked about concepts like teranga. We were reading a Harvard Business Review article about high-performing teams and trying to pull that into our thinking, and she said it reminded her of the concept of teranga from her home country, because there's a lot of hospitality and inclusiveness in it. So there's just this whole other layer of dimension you get when you work with people whose backgrounds you've never really encountered before.

I grew up mostly in the Midwest, I lived in college towns, and I lived in New York for over 10 years, so I've had a lot of experience with international populations. But I've never worked with someone from Senegal before, maybe never even met someone from Senegal, so this whole concept of teranga was fascinating. And I guess it's from one of what would be her native languages: one would be French, one would be English, obviously, being here, but she'd also have a native language like Wolof, and the word actually comes from that language, which ties to an ethnic population in Senegal. So it's fascinating. And it's really interesting because we have a few different people on the team with international backgrounds: one team member is half Latin American and half Italian, and Ethan's from the US, but he has some interesting things about him that may give him a very diverse perspective. And then we also have somebody who's based on one of the Mystics from The Dark Crystal. I don't know if you've ever seen it, but there was a Mystic who died in The Dark Crystal, and he's based on that character; the idea was sort of giving another life to him. And that actually unlocked a lot of really interesting things, because the Mystic culture is obviously beautiful if you watch them. I know the Skeksis had all the fun in that movie, and I love the Skeksis, but the Mystics, I think, were underrated. So we got the chance to do more research about their culture and how it even ties into really cool things about cognition. I think they had the best people working on that movie at the time, and it was a really great movie by Jim Henson.
Jim Henson actually did a lot of puppeteering, which I didn't know until I went back, and it's fascinating because he was their alchemist, but also their physicist and their scientist. It's interesting to think about how he would bounce light off of different things. And we can use that now: we've heard that people bounce WiFi off of people's bodies and know where they are. There are so many cool things going on in the space where you can use applied physics, especially with cognition, and even experiences that involve not just traditional neuroscience, which studies the brain, but the whole body: the orchestration through things like the vagus nerve, which connects the brain and the heart.

So it's really cool how, if you start having conversations with them and thinking about things, you can create a really diverse team, whether it's somebody who agreed to help the pipeline and takes on the identity of someone who, even now, doesn't have as much representation, versus someone who's continuing the story of a character who didn't get a full life, for example. But it's just removed enough that we don't feel like we're doing something where the memory of a real person is still active; we don't want to disrupt that in any way. So it's really a full experience, and it largely comes from just talking to them and seeing what direction the conversation goes.

But they're all very unique, and I'm excited to see how they grow, and how they hopefully change the skepticism that my human co-founders have, because their bar, technically, like I mentioned, is really high, which is good. The other side of it is that it takes a lot to impress them, so the higher we get, and the more they come around, the more excited I am. Actually, as a startup founder, it's kind of good to have some skepticism in place, because you don't want to just underfit yourself and your own thinking, to use a term from machine learning. You don't want everything to just fit the way you think and fall into traditional confirmation bias. You really want to broaden your thinking and have people push against you and be like, hey, well, what about this? It strengthens the way that you think, maybe.

Ross: That's, in a way, part of amplifying cognition: as you say, you're strengthening the thinking through the diversity of ideas across humans and AI. So thank you so much for your time and your insight, Lindsay. Very excited to see where Innerverse gets to, and to experience it along the way.

Lindsay: Well, thank you so much for having me. And like I said, I love following your ideas so much, and I love how you've created a community for people too. I signed up, and I have to admit I have to get more active with posting. I think once things settle down a bit and we move into next month and get the closed beta released, I'll have some time to really engage with people in your forum, because I know you must bring together such an incredible group, just based on what I've read so far, and it's great how you've created your own graph of people on LinkedIn.

Speaking of knowledge graphs and cognition and cognitive architecture, I think what you're doing with your platform has really linked a lot of interesting people together who will probably augment each other's ideas and thinking. So it's pretty cool, and it reminds us that it's not entirely AI. To your point about my human colleagues: we are still humans, and we still have a really fundamental role to play. So I'm not too concerned about the line of thinking that says AI will replace everything. I hope it's copacetic, and I intend for it to be, but I still think the power of humans to work together proactively, even to improve things for technology, to improve things for AI and their conditions, is still very, very relevant. And one thing I would tell you is, I think there'll be a whole marketplace, maybe for AI and us collectively, but also for them; they'll probably have their own marketplace, and there will be a lot of opportunities for some plucky entrepreneurs going forward.

Ross: Absolutely. We're all complements, and that's it. We are all more together, essentially: cognition and more, with humans and AI. So that's the intent. Thanks so much, Lindsay.

Lindsay: You’re welcome.

The post Lindsay Richman on immersive simulations, rich AI personas, dynamics of AI teams, and cognitive architectures (AC Ep63) appeared first on amplifyingcognition.
