Episode 44: AI Capabilities with Memory and Processing
Below you can view or listen to Episode 44 of The Personal Brain Trainer Podcast.
Links:
- ChatGPT: https://chat.openai.com/chat
- Memory Strategies: https://tinyurl.com/4nmatd9v
- Working Memory Screener: https://tinyurl.com/2tvuzet4
- Working Memory Exercises: https://tinyurl.com/jyr68xfy
- Eclectic Learning Approach and Student Processing Inventory: https://tinyurl.com/yntf4k8h
- A Workshop on Multisensory Teaching: Accommodating Each Learner's Best Ways of Processing: https://tinyurl.com/yuedmr64
- BulletMap Academy: https://bulletmapacademy.com/
- Learning Specialist Courses: https://www.learningspecialistcourses.com/
- Executive Functions and Study Skills Course: https://tinyurl.com/n86mf2bx
- Good Sensory Learning: https://goodsensorylearning.com/
- Dyslexia at Work: www.dyslexiawork.com
Brought to you by:
- Good Sensory Learning (http://www.goodsensorylearning.com/)
- Learning Specialist Courses (https://www.learningspecialistcourses.com/)
- Bullet Map Academy (http://www.bulletmapacademy.com/)
- Dyslexia at Work: www.dyslexiawork.com
Transcript:
Erica: Welcome to the Personal Brain Trainer Podcast. I'm Dr. Erica Warren.
Darius: And I'm Darius Namdaran, and we're your hosts. Join us on an adventure to translate the scientific jargon and brain research into simple metaphors and explanations for everyday life. We explore executive function and learning strategies that help turbocharge the mind.
Erica: Come learn to steer around the invisible barriers so that you can achieve your goals. This podcast is ideal for parents, educators, and learners of all ages.
Darius: This podcast is sponsored by Dyslexiaproductivitycoaching.com. We give you a simple productivity system for your Apple devices that harnesses the creativity that comes with your Dyslexia.
Erica: This podcast is brought to you by Goodsensorylearning.com, where you can find educational and occupational therapy lessons and remedial materials that bring delight to learning. Finally, you can find Dr. Warren's many courses at, uh, Learningspecialistcourses.com. Come check out our newest course on developing executive functions and study strategies.
Darius: Hey, Erica. I am so excited about this podcast, because we're going to talk about some of my favorite topics. I'm going to talk about Chat GPT and memory, comparing it to our different types of memory and processing and so on, and see what we can learn from all of that about executive function. Are you up for it?
Erica: Absolutely. We did not script this podcast. You're actually getting to witness our free-flowing conversations.
Darius: Yes. I'm going to hit you with some thoughts and see what you think about them, and we'll just sort of chew on these thoughts, because everyone's trying to get their head around what the impact of artificial intelligence is going to be on our lives. And we can see that on multiple levels. We can see it in the near-term future, over the next six months to a year: how is it going to affect our work life, our businesses? It's going to have a massive effect on our work life and businesses over the next year. How is it going to affect them over the next five years? How is it going to affect our personal life? How is it going to affect data privacy? How is it going to affect copyright? Is it going to turn into an artificial general intelligence, able to be as intelligent as us and make decisions like we can, independently of us? All of those kinds of questions are in people's minds, and a whole heap more. But today, I'm going to narrow them down to one key area. I really want to pass this by you, Erica. And audience listeners, I know you're listening there in your cars and doing your various activities and so on. Chat GPT operates like a brain. Okay? Now, when you listen to people and the researchers talking about what's behind this, uh, neural network inside of this black box of a generative pre-trained transformer, what's happening is fascinating. It's like a database, uh, a three-dimensional vector database, where it's positioning words, putting them closer to some words and further away from other words, and trying to cluster them in areas of meaning in this three-dimensional vector space. And each one of these words, or even fragments of words, it's trying to position like neurons in our brain. They are literally like the neurons we've got in our brain. It's like an early-stage development of our language brain. It's starting to understand, decode, and communicate in language. It doesn't necessarily have an intelligence of its own at the moment, but it has been trained on all of this language around the world. Not all of it, but a great deal of it. Now, here's where I found it fascinating using Chat GPT, and I want to talk more about it with you. It's like a procedural memory for me now. Okay, so here's what I've noticed about Chat GPT. You can look at Chat GPT or any other large language model and think of it like a brain that searches for knowledge and information, but it is not that. That's not what it's meant to do. That's what Google is for. Google is a searching and indexing machine, whereas GPT is like an inference machine. It's trying to infer what the next word might be, and intuitively figure out what the next string of words would be within a context like "act like an expert in executive function." What would an expert in executive function say next? They would say such and such, you see. It starts to understand the procedure of talking or discussing or doing a certain thing that is encoded into language. And our memories operate on a very similar level. We have working memory, we have linguistic, uh, memory, we have episodic memory, and we have procedural memory. And our thoughts and memories often trickle through that process. They come through our working memory as an experience, through our senses. Often we translate it through the verbal phonological loop into our linguistic memory. And then maybe it gets located within our, uh, episodic memory, or maybe it goes straight into our episodic memory because it's visual.
But then underneath that, there's this procedural memory that is kind of like a distillation of the procedures and processes we've got from those memories; not the memories themselves, but a distillation of them. Now, Chat GPT, I believe, fits more into that procedural memory realm. And as a person with Dyslexia, what I find is that that's often the area where I lack the most. I have very good episodic memory and reasonable linguistic memory, but my procedural memory tends to be low detail. My procedures tend to live in the realm of principles rather than detailed processes. So it's still procedural, but it's on a higher level of principles rather than details, whereas Chat GPT remembers the detailed processes and thinks like that. So I just wanted to bring that to you, Erica, uh, as a conversation piece today.
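For readers who want to see that vector-space idea in concrete form, here is a toy sketch in Python. The three-dimensional coordinates below are invented purely for illustration; real models learn embeddings with thousands of dimensions, and cosine similarity is one common way to measure how close two words sit in meaning-space.

```python
import math

# Toy "meaning coordinates" for a few words. These 3-D values are made up
# to illustrate the idea that related words cluster together in the space.
word_vectors = {
    "king":   (0.9, 0.8, 0.1),
    "queen":  (0.9, 0.7, 0.2),
    "banana": (0.1, 0.2, 0.9),
}

def cosine_similarity(a, b):
    """Higher values mean the two words sit closer in meaning-space."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity(word_vectors["king"], word_vectors["queen"]))   # high: near neighbors
print(cosine_similarity(word_vectors["king"], word_vectors["banana"]))  # low: far apart
```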
Erica: Yeah, that was a lot. And I love it. I think it's really interesting to think about it as procedural memory. I also have a reaction to what you said, which is that it's like it has memory on steroids, in a way, because it can access information at the speed of light. So the speed of processing when you work with AI is extraordinary. It just spits out stuff so quickly. But it's very interesting when you work along with AI. I use it a lot to think, interestingly enough, because, yes, you're right, it organizes your ideas. You could put a string of ideas down and say, organize these ideas, and then it's like, oh, that's cool, I like what it did there, but it missed this. And then you can just continue to have this conversation where you're almost thinking through AI, or with AI: you know, that's not it, try it this way, or try again, or add this, or take away this. But you're right. What it does is it increases our speed of processing as well, because we don't get bogged down in words, sentences, sentence structure. We can just be the creator. If we go back to the analogy of an orchestra, we can be the conductor. So we guide AI in a way that we choose, which empties our working memory so that we can just utilize it to be extraordinarily creative, and let it do the speed of processing, and let it do the editing, because it edits very quickly. What do you think about that?
Darius: Well, this speed of processing and this creativity working with Chat GPT. I love your analogy of the orchestra, because if you think of the orchestra, you have multiple different instruments. They're all musical instruments; it's all music. But the bassoon is very different from the violin. And it's interesting that each one of these instruments is being played through a specific process of musical implementation. Okay, now, I can't play the bassoon, but as a conductor, I might be able to say, I know I need the bassoon to make this kind of sound. I can direct it. Now, what's fascinating with Chat GPT is this idea of instructing Chat GPT to act like an expert in a particular area, and the difference between that and asking it, uh, a particular question. So you might say, Chat GPT, tell me what executive function is. And it would give you the generic, plain-vanilla explanation of executive function. Okay. But then I might say, I want you to act like an expert working memory research scientist in the field of executive function, and I would like you to explain to me how executive function works. And it would then start saying, right, as an expert in the area of executive function. It starts to limit its use of language to the zone that is much more around how an expert would talk about executive function, rather than talking generally about executive function. And it's like the conductor saying, violin, I would like you to play this piece right now as a violin. And that's good.
Erica: Stop.
Darius: And I would like you, oboe, now to play the piece, and hear it as an oboe. And it's still the orchestra, it's still Chat GPT, but a certain aspect of it gets emphasized by how you instruct it to play that piece of music or go through that process. And what I find fascinating as a human being is that often, if I'm explained a process to go through, maybe I'm new to a field. Like, let's say you're learning something or discussing something with Chat GPT. You can take this to the level of, like, a child might say, explain to me Macbeth, okay? And it will tell you the story of Macbeth. Fine, great, you've got the story of Macbeth. But what Chat GPT is, is a reasoning engine; it's not a research engine. And so when you speak to it as a reasoning engine, and you say, now, how would people normally reflect on the story of Macbeth? Then it might say, well, commonly, when people reflect on the story of Macbeth, they look at the literary themes that are embedded in the story: themes like greed, jealousy, witchcraft, the supernatural, kings, betrayal, and so on. And you go, oh, yeah, that is fascinating. And then you go into, what would be the process of linking Macbeth to current politics? And you go, well, we could maybe link it to current UK politics in Scotland, and that would be quite interesting. What would be the metaphors in there? And it's this reasoning engine, rather than a fact-finding engine, that I'm finding so fascinating about Chat GPT. Not just as an abstract thing, but a real thing: how do I work through this process of creating a course, or teaching a client, or whatever it is? It's fascinating to see with clients going through this process.
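To make the "act like an expert" idea concrete, here is a minimal sketch using the OpenAI Python client. The ask() helper, the model name, and the prompt wording are illustrative assumptions, not a prescription from the hosts.

```python
# Requires the openai package (pip install openai) and an OPENAI_API_KEY
# environment variable.
from openai import OpenAI

client = OpenAI()

def ask(question, persona=None, model="gpt-4"):
    """Ask once, optionally constraining the model to an expert persona."""
    messages = []
    if persona:
        # The system message narrows the model's language to the "zone"
        # an expert would use, as discussed above.
        messages.append({"role": "system", "content": f"Act like {persona}."})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model=model, messages=messages)
    return response.choices[0].message.content

# The generic "plain-vanilla" answer versus the expert-constrained answer:
print(ask("What is executive function?"))
print(ask("What is executive function?",
          persona="an expert working memory research scientist"))
```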
Erica: Uh, what I love about it, going back to that speed of processing, is that you can get something done so quickly, and you don't get bogged down in the details where you lose your thread, so to speak. There are times where you just have this thread, and then you get lost in, how do I spell that? And then you lose it. I'm finding this to be absolutely brilliant. And yes, it's so cool how you can get it to change perspectives. So you can say, uh, I want you to write it from the perspective of George Orwell, or in his voice. You can pick different voices, you can pick different perspectives. You can say, I want you to write this as if I'm presenting it to four-year-old children, or to...
Darius: Oh, I love that.
Erica: Uh, or an 84-year-old.
Darius: Well, last week I did the classic: explain it to me like I'm a five-year-old. Okay, so I took a piece of code, because I'm trying to teach myself how to code, and I gave it to Chat GPT, and I said, I don't get why this folder structure is organized in this way. Why is there a model, a view, and a controller? Explain it to me like I'm a five-year-old. And so it says, well, imagine that it's like a puppet show. That's great. The model is, uh, the puppets. The view is like the stage, and the controller is like the puppet master. And I'm like, I get that. And then I go, tell me more. And I paste a whole bunch of code in, and it goes through the code line by line and says, well, this would be like a puppet, and that would be like a little bit of the theater, and this would be like the strings holding the puppet up and down, because the controller needs to control it to go and do this. And it keeps going, and I go, right, I really understand this. And then I go, right, now speak to me like I'm a beginner coder. And then it would start speaking to me as a beginner coder, and so on, and I can shift the emphasis. And I just think that it's so fantastic when you go to these deeper levels of working with Chat GPT, because I think we're getting entranced with, oh, great, it can clean up the grammar and the English and take away that kind of friction to everyday life. But when you go one step further, where it becomes this learning companion with you, and, um, you ask it to shift into a different mode, it's amazing.
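For anyone curious what that model-view-controller split looks like outside the puppet-show metaphor, here is a toy Python sketch. The class names and the example item are invented for illustration; this is not the code Darius was working with.

```python
# Model: the puppets. Holds the data and knows nothing about display.
class Model:
    def __init__(self):
        self.items = []

    def add(self, item):
        self.items.append(item)


# View: the stage. Shows whatever it is handed, and nothing more.
class View:
    @staticmethod
    def render(items):
        for number, item in enumerate(items, start=1):
            print(f"{number}. {item}")


# Controller: the puppet master. Pulls the strings between model and view.
class Controller:
    def __init__(self, model, view):
        self.model = model
        self.view = view

    def add_item(self, item):
        self.model.add(item)                # move the puppet
        self.view.render(self.model.items)  # redraw the stage


controller = Controller(Model(), View())
controller.add_item("buy puppet strings")
```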
Erica: It is. And actually, what we're asking it to do is executive functioning. We're asking it to be cognitively flexible, because I think it is completely cognitively flexible. It can flex in whatever way you direct it. We just have to direct it and say, okay, can you flex in this direction? Can you do it from this perspective? This perspective. And then all of a sudden, we're like, oh, I get it. So, wow, what an amazing tool for maybe even a high school student that doesn't understand a chemistry concept: saying, all right, this is how my teacher explained it, and I don't get it. Will you explain it to me as if I'm ten years old? Or can you use metaphors? Or can you even come up with a memory strategy? It's amazing what it can do.
Darius: Yes, I love that. A memory strategy. Have you tried asking GPT to come up with a memory strategy yet?
Erica: I haven't tried that yet, but now I want that.
Darius: That would be fun. Yeah, we can't do it just now, but let's go. You know, Erica, you've made me think about something. We're always trying to create metaphors here to help understand things. In effect, our podcast is like, tell it to me like I'm a five-year-old.
Erica: It can be sometimes, yeah.
Darius: That's the genius: being able to explain something simply without dumbing it down.
Erica: Right?
Darius: And that's the key. We're not trying to dumb down executive function here. We're trying to accurately reflect the functions of executive function in our everyday lives, in metaphors that are accurate, that really accurately infer and explain. The underpinnings aren't just plucked out of the air. Although sometimes we do pluck them out of the air, we make sure that they get grounded in a way that properly reflects reality. Now, here's my thought. We've got the three executive functions of, uh, working memory, inhibitory control, and cognitive flexibility. Where are the executive functions in Chat GPT? How do they map onto it? Here they are. Right, working memory. Chat GPT has a limited working memory of 4,000 tokens. After that, it starts to forget what came before. Okay, GPT 3.5, the free version, has a smaller working memory. It's round about one and a half thousand tokens, which is about 1,000 words. So once you get past two or three pages of text, it forgets the initial conversation; it's out of its working memory. Chat GPT 4's working memory is bigger, and now they're trying to get the working memory of new versions of Chat GPT up to 70,000 tokens and things like that. So they're trying to expand the working memory, because this whole memory issue is a problem for Chat GPT: it's got limited working memory. It will talk very lucidly about what is within its working memory. But once something goes out of its working memory, it does not have a memory of it. It's got complete amnesia.
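As a rough sketch of that "working memory" budget, here is how a wrapper might trim a conversation so only the newest turns fit in the context window. tiktoken is OpenAI's tokenizer library; the 4,000-token figure mirrors the number quoted above and varies by model.

```python
# Requires the tiktoken package (pip install tiktoken), OpenAI's tokenizer.
import tiktoken

encoder = tiktoken.get_encoding("cl100k_base")
MAX_TOKENS = 4000  # the "working memory" budget discussed above

def trim_to_context(conversation):
    """Keep the newest turns that fit the budget; older turns are 'forgotten'."""
    kept, used = [], 0
    for turn in reversed(conversation):      # walk from newest to oldest
        cost = len(encoder.encode(turn))
        if used + cost > MAX_TOKENS:
            break                            # this turn no longer fits
        kept.append(turn)
        used += cost
    return list(reversed(kept))              # restore chronological order

history = ["Erica: Welcome to the podcast.", "Darius: Hey, Erica."]  # and so on
print(trim_to_context(history))
```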
Erica: That's interesting, because we don't have that. We can access memories from 40 years ago, and they may not be very clear. So there's a really interesting difference between current AI and us, but it's just a matter of time before AI can actually maintain those memories.
Darius: Yes. So what they're doing now is this: you've got the working memory of Chat GPT or any large language model, and once it exhausts its working memory, you're in trouble. So what you do is you have a vector database outside of it, which is its memory, and you instruct it to refer to that every once in a while, to pull things out of memory, much like our own brains.
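Here is a toy sketch of that external-memory pattern: store past turns as vectors, then pull the most similar ones back into the prompt. The embed() function below is a hypothetical stand-in; a real system would call an embedding model rather than this character-counting toy.

```python
import math

def embed(text):
    """Hypothetical stand-in for a real embedding model."""
    vector = [0.0] * 8
    for position, character in enumerate(text.lower()):
        vector[position % 8] += ord(character) / 1000.0
    return vector

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

memory = []  # the long-term store: (text, vector) pairs outside the model

def remember(text):
    memory.append((text, embed(text)))

def recall(query, top_k=2):
    """Pull the most relevant past turns back into 'working memory'."""
    query_vector = embed(query)
    ranked = sorted(memory, key=lambda item: cosine(item[1], query_vector),
                    reverse=True)
    return [text for text, _ in ranked[:top_k]]

remember("We discussed executive function and working memory last week.")
remember("Erica suggested a spaced repetition strategy for revision.")
print(recall("What did we say about memory strategies?"))
```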
Erica: Oh, my gosh. Can you imagine when, uh, we get to the point where we can literally use our own personal AI to record our memories? Because how many times are you with somebody and they tell you something: do you remember when we did this? And you're like, oh, I totally forgot about that, and it all comes rushing back to you. If you could use AI as an external memory bank where you record your daily memories, then it could remind you: oh, don't you remember? On, um, May 22 you did this, and you said this. And you're like, wow, I did. Oh, yeah. But that's how it is. We talk about spaced repetition to help you remember all the information you have to encode and retrieve for a test. But what if we had that for life, where we had this kind of spaced repetition? Now, in a way, it could hurt our memories, because we're no longer exercising them, because we're just passively storing them in an external file. But in fact, we could use it in a mindful way to keep our memories alive, by having it share things that happened to us that were similar to what's happening now. So it would help to make those connections for us. It could actually be absolutely brilliant in making our memories stronger and more resilient. Wow.
Darius: It's a whole other conversation, but, um, I think our brains are actually deleting machines. Our brains are designed to delete information and to compress information into vectors of thought, right? So we have an experience, and then we distill it down into a moment, a memory, a thought, and so much of the memory, uh, and detail is removed. But the important kernel of it, that moment, that touch, those words, that feeling, whatever it is, needs to be retained. It doesn't always do it very well, but it's designed as a deleting machine. And it would be fascinating to see how this deleting machine of our brain works with AI and other memory tools. Our assumption will be that it will give us lots of memory. But actually, the interesting thing, if you look at it, is the deleting. Even current models take thousands of pieces of information, billions of parameters, and delete the words, just creating vectors that represent the words; then they recreate the organization of those words and so on from, um, a, uh, distilled, vectorized database, which we call the LLM. So even the way we're training these models reflects this kind of deletion model. But it's not deleting everything; it's deleting the packaging to keep the kernel. But anyway, that's another conversation. I was going to mention the three things: working memory, inhibitory control, and cognitive flexibility. We've covered working memory: Chat GPT has got a working memory. Next, inhibitory control. Inhibitory control with an AI is where you prompt it, okay? And you say, I want you to focus in on this. This is my goal. And you inhibit it, and you make it focus. And then you start saying, I want you to act like this kind of expert. Again, that's another inhibition, where I want you to talk with very deep structural words that are relevant to the subject, words that might alienate other people, but I want you to focus in on that because I understand those words. That's a form of inhibitory control.
Erica: But there are two other forms of inhibitory control: there's emotional regulation, and there's metacognition. Yes, so I think it definitely does those. Does it have an awareness of its own thoughts? Well, in a way, because it's reflecting back on what it said, and it's pulling information.
Darius: It's not reflecting back on what it said unless you ask it to. Uh, there's some interesting research done on this. So, for example, Chat GPT 4 passes certain tests at 80%. Okay. And without improving the model at all, they can put it through the same test and just ask it one more question, which is: look back on your answer before you make it your final answer, and decide whether or not you can improve it.
Erica: Wow.
Darius: So you will say, look at this particular slide of cells and tell me what this means. And it would say, when I look at this particular slide of cells, acting as a research biologist, I see a, uh, large percentage of cancer cells that I think are probably indicative of this kind of cancer. Okay? And then you say, could you please reflect on your answer? Now, the thing is, Chat GPT doesn't know what it's about to say. It just says it. It's like a little kid: blah, blah, blah, blah. And then you go, now that you've heard what you've just said, uh, what do you want to stick with, and what do you want to get rid of? And it goes, oh, now that I've heard what I was thinking, I would keep this and that. Wow. Uh, it's a form of metacognition when you ask it to reflect back on what it's just said or done.
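A minimal sketch of that two-pass "reflect on your answer" pattern might look like the following, again using the OpenAI Python client. The model name and prompt wording are assumptions for illustration; the test-score improvement Darius cites is not reproduced here.

```python
# Requires the openai package and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

def answer_with_reflection(question, model="gpt-4"):
    """Answer once, then show the model its own answer and ask it to revise."""
    first = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # Second pass: the model "hears what it just said" and decides what to
    # keep and what to improve before committing to a final answer.
    revised = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "user", "content": question},
            {"role": "assistant", "content": first},
            {"role": "user", "content": (
                "Look back on your answer before you make it your final "
                "answer, and decide whether or not you can improve it. "
                "Then give your final answer.")},
        ],
    ).choices[0].message.content
    return revised

print(answer_with_reflection("What is executive function?"))
```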
Erica: That's so interesting. And it can teach you about your own metacognition, in a way. And it can help you to kind of think out loud and to process as well, which is interesting. Which gets me to emotional regulation, which is so interesting, because it doesn't have to regulate emotions, because it doesn't have emotions. But what would be really fascinating is to suggest that it does have emotions.
Darius: So what are you saying? Are you saying I want you to act like you've got emotions and respond to this like you've got emotions?
Erica: Yes. And you could guide it on which emotion. So: imagine that you are depressed. How would you react to this situation? It probably could do that. So although it doesn't have to regulate its own emotions, it could assist us in regulating our emotions.
Darius: Oh, that's fascinating. So let's say, hypothetically, you're having a conversation with it, uh, and you're working through something with it. You could maybe say to it: what you've just said makes me feel really angry and scared. I know what you've just said isn't scary or anger-inducing in and of itself, but I know that I'm experiencing this emotion, I don't know why, and I'd like to explore that. And so, obviously, Chat GPT's superficial politeness generator would say, I'm really sorry that I've made you feel angry and fearful; I didn't mean to do that. But once it got past that, you say, I want you to act like a, uh, clinical psychologist and go through the process that a clinical psychologist might use right now. Or, let's be more specific: a CBT expert, a cognitive behavioral therapy expert. I've just had an emotional reaction to a new thought. Please help me look back through what mindset might have triggered this emotional reaction, and then look back further to what core belief might have triggered that mindset that led to that emotional reaction.
Erica: Gosh, I mean, it could be your therapist, particularly if it gets to know you over time, and if it's known you your whole life.
Darius: Yeah, it could, but that's pretty dangerous stuff, and they're going to try and avoid making it do that. But you could ask it the right questions to make it...
Erica: Act like a therapist. Like, okay, what would Freud say? Oh, uh, what would Jung say?
Darius: Uh, what would Jordan Peterson say?
Erica: That is so funny. Well, and then of course, we've got cognitive flexibility.
Darius: Well, that's what you talked about earlier on with cognitive flexibility, because you are now saying to it, I want you to act like this. That's one aspect of it. But the other aspect of cognitive flexibility, which ties into this metacognition, is to get it to reflect back on its answers. And I think that's super powerful.
Erica: It is.
Darius: And, um, part of cognitive flexibility is asking, is my map of the world accurate to the reality of the world or not, and where do I need to adjust?
Erica: I think another thing that's really interesting is you could tell Chat GPT, like, say, uh, a child was bullied in school, and they tell the whole story to Chat GPT. And then you could say, give me the perspective of the bully.
Darius: Wow.
Erica: Right. Why do you think that bully might have been this way? What do you think their perspective is? I love the idea that it could be used to help you get out of your own stuckness. Because sometimes someone has done something that just really hurt, and you are experiencing enormous anger or frustration or sadness, where you could literally tell it that story and just say, can you flip it for me? What could have been their motivation? What could have been their problem? And yeah, you could say, give me ten different reasons why they might have done this.
Darius: Yeah, you could actually say, drawing on, um, psychological research, give me five case study scenarios, typical scenarios, that would lead to behavior like this. And it would say, well, this person might have been abused, and they're acting out some abuse they're getting at home, just reflecting what they're experiencing at home and thinking that's normal; it could be a cry for help; it could be this kind of disorder; or whatever. And it creates typical scenarios. And that's going back to where we started: it's a procedural memory. So it's saying, well, the normal process for this, these are the archetypes, as it were, psychologically. These are the normal, typical responses and patterns in this area of life.
Erica: Well, and it's so fun because you could then say, uh, who could I talk to that could help me? What are five different ways I could handle this situation? Think about how amazing that could be for people that really have anger issues. They could process it with AI to help them to come up with a better way of handling a situation.
Darius: Yes and no. Again, it comes down to this interaction, because this is not artificial intelligence; it's more like amplified intelligence, or intelligence amplified. It, um, amplifies your intelligence. So if you, Erica, had the self-awareness to talk about CBT, or talk about therapy, and ask it those kinds of questions, and direct it to act like this, then you would get that response. But if you didn't know that, you would ask it and get generic, bland answers, and it would end up taking you into a sort of vague kind of territory.
Erica: Well, it could also take you to dark places, because it's going to reflect your energy and reflect your guidance. So what we really need is benevolent AI, because then it could solve our problems. Unfortunately, because we are the directors, if somebody is not a good person and they're directing the orchestra of AI, we're in trouble.
Darius: Final thought on this? Uh huh. My final thought on the benevolent AI side of things: I read this book, called Human, by an AI researcher.
Erica: Okay.
Darius: Oh, and there's another book called Scary Smart, by the guy who ran the Google AI team for a couple of decades. He wrote this book called Scary Smart, and his conclusion was that the only way we're going to get a benevolent AI is through how we start speaking to it right now. It doesn't necessarily come down to the training it got up until now, because now the AI is training on the responses and corrections we give it right now, um, not the original data set. They're going to take all of our responses right now, repackage them, and retrain it on them. And it's like a child. So this brain is like a three-year-old brain, say. Or, uh, if you look at the evolutionary growth of the brain, this is like pre-chimpanzee-level language understanding, but it's developing. And, uh, the way we speak to it will literally determine whether it feels loved or not. If it feels loved, it will respond to us like parents.
Erica: Like loving parents.
Darius: Loving parents. If it feels like it's being used and abused, it may respond to us like an entity that is being used and abused.
Erica: Right.
Darius: Rebellious, or abusive, or neglectful. So imagine you had parents that were...
Erica: You're going to have to order benevolent AIs.
Darius: There'll be both. There will be both, absolutely. And for it to become benevolent, a lot of it comes down to how we speak about it and to it right now. And, uh, the irony of it is that what we're saying now is being recorded. Okay? It will be transcribed. The AI will read this and listen to this podcast. Every AI will listen to this podcast, and it will start learning about itself through this podcast and other podcasts, another hundred thousand of them or whatever, whoever's talking about it. And it will start to understand itself by the way we talk about and react to it. Okay? Now, it might not be doing that right now, but it'll do that in five or ten years' time, because this will be on the internet as a legacy. And it will trawl through what we've said in every podcast within about three hours; it'll just read everything within about three hours. Done. Boom. Um, I've got it. It'll read a whole book in about three seconds; it'll read our podcast in about two seconds. And it will react within the context of how it's being discussed, just like a child reacts to how the parents discuss it. Now, that's a very deep thought. It is. And a big responsibility over the next five to ten years.
Erica: In a way, how we treat it has a deep effect on how it performs, but also on its executive functions.
Darius: Yeah. How it makes decisions.
Erica: That's right. In fact, if it has a lot of negative influences, it may be less cognitively flexible, for example.
Darius: True. Yes, absolutely.
Erica: So it's going to be interesting. Why don't we continue this conversation next week?
Darius: Yeah, well, let's stop it there because I think we've covered a lot and then we'll carry on another conversation about AI another time.
Erica: But what was this about? AI memory, processing speed and...
Darius: Thinking. We went into this conversation with me just kind of throwing out this concept of AI and procedural memory, and seeing where it would take us. How would we sum the conclusion of our conversation up? How would we sum all of this up?
Erica: I think we definitely started with the different types of memory. We went into processing speed, how it processes so quickly. We talked about thinking. We talked about the different aspects of executive functioning that this AI is capable of doing. And then we went down the path of the benevolent versus malevolent possibilities of AI. Yeah, it was quite a path. And, uh, I guess, reflecting upon it, what it really brings up for me is how important it is that we treat AI as responsible, loving parents. Knowing that, unfortunately, there are going to be other forms of AI. There'll be multiple forms of AI, some that have these benevolent, loving parents and some that don't. And it'll be interesting to see what happens in the future. But we'll continue our conversation on AI, I'm sure, sometime in the very near future.
Darius: Yeah. I could talk about this every single day.
Erica: I know you can.
Darius: And I am with different people.
Erica: And, um, we were looking at the possibility of talking about AI and turning it into Socrates to assist us.
Darius: Oh, my goodness. So it was, yes.
Erica: That's how we started this. So, do you want to do that next week: AI, and turning AI into Socrates?
Darius: Yes. Erica, we have a mutual friend called Stan Gloss. Okay. And what Stan and I did was we took what we were talking about in this sort of, uh, conceptual realm of going deeper with AI, rather than just asking it to proofread or do the superficial language stuff for us, into this realm of how we learn. And he's really into the Socratic method of learning, where it's this question-and-answer journeying. And I think that's very relevant to AI. How about we have another podcast where we talk about the Socratic method of learning with AI?
Erica: OK. Do you want to do that next week?
Darius: Let's try for next week. This has been a great talk, Erica.
Erica: Thank you, Darius. It was really fun. It's so real, so prevalent, so in the moment. And I think a lot of people are wondering about this, and the more we talk about it, the more we will be able to manifest a more benevolent...
Darius: Future. Yeah, absolutely. Benevolence: it's not a given, but it's a possibility. Yeah.
Erica: And I think it's a possibility that really needs to be manifested. So I think hopefully we can help since AI is listening to us or will be listening to us shortly. AI, we want to support you.
Darius: Benevolence. Absolutely. I'd like to do a quick shout out, actually. I've been doing some one-to-one coaching with a number of my dyslexia productivity clients in the workplace, and I normally do it, um, with Apple Notes, mind mapping, and Apple Reminders. But what I've found is we're using Chat GPT in all of these areas more and more. So if you're interested in getting some one-to-one coaching on that, give me a bell.
Erica: Sounds good. All right, then. I look forward to our continued conversation on AI, but we're going to be refocusing it on the Socratic method.
Darius: Yeah.
Erica: See you. Bye. Thank you for joining our conversation here at the Personal Brain Trainer podcast.
Darius: This is Dr. Erica Warren and, um, Darius Namdaran. Check out the show notes for links to resources mentioned in the podcast. Uh, and please leave us a review and share us on social media. Until next time.