Episode 65: The New ChatGPT-4o: Impact on Processing

Below you can view or listen to Episode 65 of The Personal Brain Trainer Podcast.  

The New ChatGPT-4o: Impact on Processing

Welcome to the Executive Function Brain Trainer Podcast with hosts Dr. Erica Warren and Darius Namdaran. This episode dives into the new release of ChatGPT-4o and its significant potential to revolutionize support for individuals with dyslexia, ADHD, and executive function challenges. The discussion includes how AI's advanced multimodal capabilities can provide personalized, real-time feedback and scaffolding for various learning functions. They emphasize the importance of leveraging AI to make educational processes easier and more accessible, while also calling for investment and development in AI tools tailored to specific neurodiverse needs. The episode envisions a future where AI acts as an intelligent co-pilot, facilitating efficient and effective learning and problem-solving for all users.





      Brought to you by:


      Erica: Welcome to the Personal Brain Trainer podcast. I'm Dr. Erica Warren.

      Darius: And I'm Darius Namdaran. And we're your hosts. Join us on an adventure to translate scientific jargon and brain research into simple metaphors and explanations for everyday life. We explore executive function and learning strategies that help turbocharge the mind.

      Erica: Come learn to steer around the invisible barriers so that you can achieve your goals. This podcast is ideal for parents, educators, and learners of all ages. This podcast is brought to you by goodsensorylearning.com, where you can find educational and occupational therapy lessons and remedial materials that bring delight to learning. Finally, you can find Dr. Warren's many courses at learningspecialistcourses.com. Come check out our newest course on developing executive functions and study strategies.

      Darius: This podcast is sponsored by dyslexiaproductivitycoaching.com. We give you a simple productivity system for your Apple devices that harnesses the creativity that comes with your dyslexia. Hey Erica, what are we going to talk about today?

      Erica: Well, we had something on the docket that we just threw out the window because of the new release of ChatGPT-4o. And you are all over this, and I'm ready to go for the ride. So, yeah, I'm jumping up on stage.

      Darius: Let me go. Let me go. So it's this ChatGPT-4o. The o is for omni, and I believe that AI will solve dyslexia's challenges and amplify its advantages, or can solve its challenges and amplify its advantages, if it's trained and used in the right way. People don't really understand what just happened last week, because OpenAI have this kind of approach to releasing products, which is to under-promise and over-deliver. They've been doing that all the way along. They just slide something in there, and then it blows up and people realize, oh my goodness, I didn't realize how powerful this thing was. And if you contrast that to what Google did the day after, which was a very similar release, they did it much fancier and so on. But let me just share with you what I think has just happened. I think ChatGPT has just had their iPhone moment. Think about what Steve Jobs did when he announced the iPhone. If you've ever watched the BlackBerry movie, it's a fantastic movie on Netflix; I can't remember what it's called, but I'll put it in the show notes. Basically, when Steve Jobs held up the iPhone, it had no keyboard on it; it was basically an iPod that you could speak into and make phone calls from, with an app store. Nobody quite realized what he had just done. They thought it was a phone that had been announced with a weird keyboard. But what he had created was a platform of apps for other people to use and create, which completely piggybacked on the phone and made the phone immensely valuable to people's lives because of the apps that were on it. Now, ChatGPT has been trying to do that, but they haven't managed it yet, okay? So for the last year, they had the chat model, okay? Regular ChatGPT, the free version, was 3.5, which is rubbish compared to 4, okay? Only the paid people got 4, and 4 is where the real magic is. Okay?
3.5 is a little bit frustrating when all these GPT-4 people are talking about what you can do. You go on and you try 3.5, and you're like, oh, I don't quite know what they're going on about, because it doesn't quite do what I want it to do. Well, here's what the iPhone moment is for OpenAI: they made 4 free for everyone to use last week, okay? So everyone can use 4 for free. We all had to pay 20 quid a month for it; now everyone can use 4 for free. And 4 is now two or three times as fast, okay? But that's just the beginning. The next thing that they did was, over the last year, there's been lots of people creating their own


      Darius: specific GPTs, like a story GPT or a LinkedIn GPT, something that specifically tailors GPT-4 to do something specific.

      Erica: You know, a good example is Einstein. I remember seeing one that knows everything about Einstein, so that you can really ask questions and almost feel like you're getting all that content. So it's almost like an encyclopedia of specific content.

      Darius: Yeah. So when you go to the Einstein one, you know it's going to speak to you like Einstein. It's going to speak to you from Einstein's frame of reference, not from ChatGPT's general frame of reference. You could have the Erica Warren GPT, or the Darius Namdaran GPT, or the executive function podcast GPT, and it would have all of the transcripts of every single podcast that we had done. You could ask it a question; it would answer from within that, and it would show you references to the podcast episodes, with a link to the podcast, et cetera. So people have these GPTs already, behind the paywall of the GPT-4 subscription. But now it's going public to everyone, and everyone can use them, these apps, as it were, that are being built on top of ChatGPT. So when BlackBerry saw the iPhone, they went, oh, well, so it's a different form factor for the phone. But then, if you watch the movie, all the developers are going, yeah, but it's got an app store. And he goes, well, what's the big deal about an app store? Who cares about the app store? It's about the device, isn't it? No. We look back and we realize it was all about the app store. Your iPhone is all about the apps: Facebook, LinkedIn, Instagram, very clever apps. We've got the basic phone apps, and maps and so on, but it's all about the apps. Now, what do apps mean? Apps mean predictable, specific results. I want to go into the maps app. I want it to think like a map. I want the buttons to be map-like buttons. I want to get somewhere. That's why I open it up: my focus is to get somewhere. I open up another app, and that is about my flights. My focus is to get onto the right flight, or track a flight, or something. It's all about focus, okay? So you can have all this power, but people don't want all this power. They want a result. Now, ChatGPT has just made it so that you've got this super intelligent agent, but with these specialized GPTs, you'll get a result.
Now, what's this got to do with dyslexia or executive function or ADHD? There's so much. GPT-4o stands for GPT-4 Omni, and it's a completely different model from ChatGPT-4, okay? It has been trained on text, pictures, audio, and all sorts of multimedia. So it's a multimodal model, okay? And within the educational realm and the executive function realm, being multimodal means it's multi-processing. It processes on multiple levels simultaneously, just like we have an auditory phonological loop and a visuospatial working memory loop. We take in information and we process it in different ways: kinesthetically, visually, musically. GPT-4o can do all of that. If you ask it to sing back the response, it will sing the response back to you. If you ask it what emotions you're experiencing, and to respond back with the same kind of verbal emotion, it will speak back and mirror your emotions. If you switch on your camera phone and get it to look at something, it will understand what it's looking at and tell you. If you put it in selfie mode so it looks at your face, and you say, how am I feeling, what do you think? It will tell you what you're feeling, and maybe even more than you realize. If you open up your desktop screen and ask it to look at something on your desktop, you don't even need to take a photo; you can just say, hey, ChatGPT, look at my desktop. It will look at your desktop. If you have code on it, it will read the code and tell you what it means. If you have a document you're writing, it will talk you through whatever you're writing. You don't need to cut and paste it. It's there, sitting conversationally beside you. What's more, the speed of response is at


      Darius: the speed of a human being, if not faster. So, for example, at the moment, if you use the voice mode, it takes three to five seconds to send the message and come back, and you're sitting there waiting for a response. With the new version, which will come out within the next week or so, it will answer within about a third of a second. That's faster than a human being responds in a pause; it will be able to answer you straight away, with voice and tone and intelligence that is tonal. So what does this all mean for someone with executive function challenges, or dyslexia, or ADHD? It means you've got instant verbal and visual feedback. It means you've got someone to bounce ideas off verbally rather than through slow-moving text. This dynamic of having something respond back to you intelligently when you need it is often something that we need in order to process something actively, especially if you've got dyslexia or ADHD. You often need that sort of external bouncing back, like an echo sounder. You say something, you want to get some feedback, you adjust to it, you say something, you get some feedback. A lot of other people can just do it inside their own head, but often we need to do it outside and get that sort of bounce-back. And often the people around us are kind of like, I'm all talked out, or, you're going on about this thing again, I just don't have the space for it. Now you've got something that you can bounce ideas off and just process things with. It's not like googling something and finding information. It's a dynamic, interactive process.

      Erica: You're no longer alone.

      Darius: Yes.

      Erica: You can process with whomever, or whatever, you want.

      Darius: Yes.

      Erica: And get the advice from whatever perspective you choose.

      Darius: Absolutely. And you're now at the point where, instead of having a generalized intelligence, you could choose to discuss it with the executive function podcast. Okay, we've not created this yet, Erica, but we could. We could take all 50, 60, 100 episodes, whatever we've got by the time it gets released, put the transcripts in, give it instructions, ask it to speak in a certain way, and so on. So you open up ChatGPT, you decide to go into the executive function GPT, and you say, right, I want to talk in an executive function mode. If I was teaching this to my child, or if I was thinking about this for my workplace, and I'm thinking about a second monitor, or I'm thinking about working memory, would this be right? And it would come back and say, well, maybe, but actually what they said was such and such; maybe you're along the right lines. And you've got something to wrestle with and bounce ideas off. And so people have specialized it, and that's the beauty of what's going to happen. That's just one of the things that's going to happen.

      Erica: So you have at your fingertips the expertise of anybody you choose, as long as that information has been uploaded.

      Darius: Yes.

      Erica: Wow.

      Darius: And you don't need to create a special prompt saying, you're an Einstein expert and you've read all of Einstein's books. Someone else has done all of that for you, because they're obsessed with Einstein, or we've done it for you, because we're obsessed with executive functions. And so it's all set up, ready to go, just like an app for a map or your flight departures: all tuned to one specific goal.

      Erica: One of the problems with the old versions of ChatGPT is that sometimes it gave you incorrect information. So it sounds to me like this is a fix for that, because if you have, they call them apps, like the Einstein app, what do they call them?

      Darius: They're called plugins.

      Erica: Okay, so now that you have plugins, then you know that you're getting reliable information.

      Darius: Yes and no. Okay, so a lot of people are thinking about chat GPT like it's an encyclopedia. Okay?

      Erica: Right.

      Darius: But it's not. You need to treat AI like it's a person. So when I speak to you, Erica, I work on the basis that 99.5% of what you say is on point. But I also know that there's probably half a percent where you might talk confidently about something, but it's not quite right. Okay. With other people, I might know that when I speak to them, 80% of it is on point, and 20% of it is maybe a bit of hyperbole and exaggeration. And you get a gauge with different people as to how much you rely on it and how much you don't. Yes.


      Darius: And we've kind of been through that with Wikipedia, for example. There was a time when you'd say, oh, you don't trust Wikipedia, and now it's a very robust source of peer-reviewed knowledge. And so we are going to go to the executive function GPT, for example, and we're going to say, right, well, this will give me a lot more direction on this, but it's not necessarily 100% perfect. Often you go to people not for facts, but for results. Okay, so we're going to go to Google for facts, and we're going to go to AI for results. And the result isn't a fact; the result is some sort of outcome. Get me to a destination, get this email proofed for me, write this in a particular style, look at this photo, create a book cover for me that says these sorts of things, ideate some ideas, or give me some feedback on what you think of my landing page. And you take a screenshot of it, and it goes, well, that's quite interesting, but I don't see where the action button is. It's not on the page. Oh, gosh, I forgot about that. It's underneath the fold. It's more this kind of dialogue you would have with an intelligent entity.

      Erica: It's not really coaching or therapy, because I think that does require a human being. But the interesting thing it does do is create something that's maybe not as biased as a person can be. It's less biased. Of course, if it's programmed with biases, it can't get away from biases entirely.

      Darius: Now, as you know, I'm completely obsessed with dyslexia. That's my big focus: dyslexia, and solving dyslexia. I want to permanently solve dyslexia for the next 120 years, because for the last 120 years we've known about dyslexia, and it's been this constant puzzle for people: parents, educators, individuals with dyslexia. Why is it I can speak to this person, they're so eloquent, their vocabulary is great, they're really solving problems with creative ideas, and then I ask them to write a few bullet points on it, and they can hardly do it if they're a child? They can hardly do it if they're an adult. It takes two weeks for them to follow up with an email, because they just don't do the follow-up email in the same kind of way unless they're really well trained. So what is this gap? What is this disconnect between what you experience verbally and what you get in writing? Dyslexia has these gaps, and these gaps are normally processing gaps, because dyslexia isn't a reading difficulty; it's a processing difference that shows up as a reading difficulty. And it also shows up in 50 other skill sets, at least, in your life. Not all of them for every person, but for many people it shows up in lots of different processes, like the process of tying your shoelaces, or telling the time, or doing a stick shift when you're learning to drive a car.

      Erica: And of course, there are so many flavors of dyslexia, and no two dyslexics are quite the same. What we do have in common is that we process differently, and some of us share these similar differences.

      Darius: Yes. And you can always track it back to something to do with processing. I mean, even dysgraphia comes down to dys-processing, dyscalculia too; all these "dys"es come back to a processing theme. Now, for many people, processing is this vague kind of, I have no idea what that means. But basically, processing is how we do a process. It's as simple as that. How we do the process of decoding text, how we do the process of moving; you can break anything down to how we do a process. Now, why is this important? These AIs are expert processing machines. That's their domain of excellence. They're not knowledge machines, they're not counseling machines, they're not therapists. They're process machines. They love a good process.

      Erica: Interesting. Yeah. So processing is really how you make sense.

      Darius: Yes.

      Erica: How you make sense of the information. Tell us more. That's the way a lot of people will say to me, oh, you have dyslexia, so you see things backwards? No, I see things the same way, but the way I make sense of it


      Erica: is different. And it's interesting: there is some research showing that when information crosses the corpus callosum, the neurons fire in slightly different orders, which is why we sometimes mis-order things.

      Darius: And the corpus callosum is the joining bit between the right and the left brain. Yeah.

      Erica: Yes. It joins the two hemispheres, and information travels across it. So it's interesting. But I agree with you, and I move more and more away from these "dys"es as I get older, because it's really just a matter of the concept of neurodiversity. We are all very neurodiverse, and many people, even people that don't have dyslexia, can have some dyslexic qualities. And what it comes down to is, I mean, how beautiful is that? As similar as we are, we're all completely different when you start to zoom in on our capabilities and abilities. And it would be lovely to live in a world where we can celebrate those differences, because if we were all the same, it wouldn't give us the capacity to grow. And we have this extraordinary capacity to grow. But I love the whole concept of AI really assisting with our processing, and that it really releases the inner creativity. So you have to have that capacity.

      Darius: I think of it as, we've used the car analogy before. Okay, so you've got a manual car and an automatic car. The manual car, going through the stick shift, is like having dyslexia: you process more manually. With automatic thinking, you just go into drive, and you concentrate on the driving. 95% of the activity is exactly the same. You've got to understand the road, the road conditions, the rules, the dynamics, and 5% of it is that gear shifting and so on. For one person, it's hardly anything; it's just drive. For the other person, it's 5%. But if you get that 5% wrong at certain moments, you can stall your car, or do something that isn't quite right, that interrupts and interferes with the other 95%. Are you in agreement with me on that?

      Erica: You're onto something. What it really does is it upgrades the old stick shift.

      Darius: It does. That's it.

      Erica: And it gives you a way to be an automatic.

      Darius: It does. That's it exactly.

      Erica: Because it does the shifting for you.

      Darius: Yes. Yes and no. Okay, so there are some things. So, yes. Now, this is where the excitement is; this is what gets me so excited. Right. One of the biggest problems with dyslexia: 10% of the world has got dyslexia, okay? That's just dyslexia. That's not dysgraphia, dyscalculia, ADHD, whatever, just dyslexia. At least 10%, and 8% of those don't know it. Only 2% get identified in school, if they're lucky, and those are normally the really extreme cases. So the moderate to mild dyslexia, which still gets in the way of day-to-day life if it's not identified and given the right skills, gets in the way big time. Okay? And we hear all the great stories about people with dyslexia having superpowers and so on, and people with dyslexia achieve amazing things, but there's also a ton of people in jail, unemployed, and on minimum wage that have got dyslexia. Way too many, proportionally, because they've not been given that skill set. Now, there's a gap here. How do they get the skills that they need? Because the existing educational system is not set up for them. It's set up for the middle 80% of people; the 10% on this side and the 10% on that side don't get it. And dyslexics are sometimes treated like a waste byproduct out of schools: unfortunately, we can't help you. The key for someone with dyslexia, in childhood, is getting one-to-one feedback, okay? And we all know that it's blooming expensive to give your child the tutoring and the one-to-one feedback. It costs you an extra 10 to 20 thousand pounds to educate your child because of the extra tutoring they need. And not everyone can do that. But if an AI could come into your car, okay, let's just stay in our analogy: you switch on the phone, and you say, hey, car GPT, watch my stick shift and my gear changes, and just talk me through it, would you?
You start moving into first gear and you're driving along, and the AI says, oh, I can see you're traveling at 20 miles an hour. You're in second gear right now. Are you going to think about going up into third gear? And you go,


      Darius: oh, third gear. Is it time for third gear? Well, it will be if you go a bit faster. And you're like, oh, okay, well, I'll go a bit faster. Right. Are you remembering to put your clutch in? Oh, yeah, I'm putting my clutch in. Are you remembering that trick I showed you, to go sideways with your hand so it goes into third gear rather than first gear? Oh, gosh, yes, yes, yes, I'll do that. You know, it can do that now.

      Erica: So it can provide enormous scaffolding if you need it.

      Darius: Yes, yes.

      Erica: And basically, it can be tailored to offer the scaffolding that you need. So if that's not an issue, but writing a five-paragraph essay is, it can walk you through the steps, and you can be like, wait a minute, I just finished the introduction. Can you look at it? Am I missing anything? Now, where do I go from here? Oh, right, perhaps I should create my topic sentences. So it will kind of guide you through the process. Whatever you struggle with, it will provide that scaffolding.

      Darius: Yes. Whatever process you struggle with, it will provide you with that scaffolding and that kind of feedback: I will demonstrate it for you. I will step back and talk you through it. I will step back and just watch you, and be here when you need me. I will step completely back and let you do it on your own, and be here just as a safety net, just like a mentor would. This is the capability that GPT-4o just released last week, because it can speak to you, it can watch and observe what you're doing and understand it, and it can hear the sounds going on in the environment. It could hear a truck going by, or the revs of the engine. It understands the text. There are just so many senses that it's now making sense from and adding into a process. So it can give you the feedback in real time that you need to repeat a process confidently and consistently, which is what you need with dyslexia. You need feedback so you can practice it over and over again until you learn that process.

      Erica: What we really need are people out there teaching others how to use this resource. I know a lot of people that are like, I've never tried that. I'm a little bit scared of it. It's intimidating. You can have amazing tools, but if people don't know about them, or they fear them, then they're not really helpful. Right.

      Darius: Well, this is the future that I see with GPT, okay? And this is the future I want to see with GPT. Let's say you take the 50 different skills that often get affected by dyslexia, okay? They often get affected by ADHD too, but for different reasons. So, for example, the skill of learning to read, or the skill of answering a question accurately and understanding it, or the skill of presenting a talk, or the skill of time management and estimating time. There are so many different little skills that someone with dyslexia can do, but they don't necessarily do them the same way that the school teaches you to do them.

      Erica: You know, what's interesting is those are like little micro skills. And I'm wondering, because you can also use AI for larger skills: for example, working memory, looking at executive functioning, being able to sustain attention. Right. Those are kind of more macro skills versus the very specific micro skills. And I think it can work in both ways, because it can help to focus your attention, it can help to support your working memory so that you don't forget things, and it can help you to be cognitively flexible. So it's interesting: it can help with the macro skills of executive functions, or it can zoom in on the micro skills that an individual with dyslexia or dysgraphia or dyscalculia may have.

      Darius: And you've just described that overlaying thing that we talked about here. There's one layer of executive function sitting on one layer of processing, and things go through the executive layer down to the process and come back up, down to the process and come back up. There's this interrelationship. So here's kind of where I was going with that processing side of things. If we think of the two layers, think about your brain having these different functions: the processing function, the executive function. Even within executive


      Darius: function, you've got working memory, and we have tools for working memory: note-taking tools. We've got tools for inhibitory control: goal setting, reminders, calendars. We've got tools for cognitive flexibility: mind mapping, maps, adaptive discussions, meetings, et cetera. And then underneath that, we've got all these processing layers where we process things, all the different styles we've talked about on the podcast. Now, why am I saying all of that? Our brains have, like, different apps within them, okay? Different segments of the brain are like different apps that communicate information. We've got the optical app, the decision-making app, the working memory note-taking app. You could map these apps onto your brain. We've got the directional function in our brain that is like a map; we've got an emotional part. These all have segmented clusters in our brain. Now, when you're thinking about AI, it's my opinion that, instead of having this generalized AI that does everything, we're going to have a generalized intelligence, like GPT-4o, except it will be GPT-6 or 7 and more intelligent, but it will be omni, and it will be trained to focus in on a certain function when information is brought to it. Okay? So there will be very intelligent, specific apps: very specific towards learning how to read, very specific towards learning how to give a good presentation to a TED-type level, very specific to how do I manage my time, how do I ideate a project, how do I write an answer to an essay, a long question, a short question. And these will be deeply attractive to people with dyslexia, or anyone, actually, but very much so with dyslexia, because it's like, I want to really focus in on this. I want expert feedback, and I want you to treat me like you're an expert in this area.
And so we will end up having go-to versions that will be like plugins on OpenAI or the equivalent, whether it's Google or whatever. But OpenAI have done this killer thing, okay, that no one else is doing. Google aren't doing it; Microsoft isn't doing it. What OpenAI are doing is saying: here is our iPhone, here is the app store. You can upload an app and use the power within the phone. Use our GPT-4 intelligence for free, okay? No one has to pay for your app when they use it; we will pay you per minute of usage. Even though they're using our AI underneath, if they use your GPT more than ours, we will pay you revenue for it. This is going to be killer. But the key for me is that people within the world of dyslexia have got to take seriously how we train this AI, and the tools around it, to actually help children, students, and adults with dyslexia and ADHD. So it's not just concentrating on the middle 80% like we have done for the last 120 years. We've now got the opportunity of something to give us the one-to-one feedback that we need, but it needs to be directed towards those 10% on the edges.

      Erica: Have you seen the YouTube video of Sal Khan and his son? This is the guy who did Khan Academy, and they were using the new ChatGPT to tutor, I think it was trigonometry. It's wonderful. And I have to say that my first thought was, wow, you can't get a better tutor. What's so great about it is you can tailor the tutoring. You can say, oh, you're going too fast, and it will slow down. Give it to me another way, and it will give it to you another way. So it has a certain flexibility that most tutors don't have, and it has access to knowledge and teaching strategies that a single person couldn't possibly have. So it's a little scary for people in the tutoring field, I suppose. But I think it's absolutely amazing


      Erica: for what could happen to being able to tailor education.

      Darius: I think it's fantastic for people in the tutoring world. I think it's fantastic for educators. And I'll give you an example, I'll give you a reason why. There you have Salman Khan, pretty much the best tutor in the world, from Khan Academy, okay? One of the best in the world, with his son, using an AI to help tutor his son. Now, I tutor children and adults to use AI, to make myself redundant. But the irony of it all is I don't make myself redundant. I just make them much more capable, so that they want to go to higher levels and they come back for more. So, for example, the father can be sitting with his son doing the trigonometry together, learning how to use ChatGPT for that. The son goes off and uses ChatGPT a bit more on his own, but then comes back for a higher-level, more challenging thing. ChatGPT is part of the conversation, but not the leader: a supportive aspect of it. So ChatGPT is going to move into the passenger seat, not the driving seat, of the car. Like a rally car driver has this person going, okay, next left, first right, 100 clicks to such and such or whatever. You've got this kind of support structure beside you, supporting you to do what you really need.

      Erica: It creates the sequence of steps and the coach, tutor, whatever you want to call them, learning specialist, really becomes that conductor.

      Darius: Yes. That executive function level, right? You know what it's like when you're tutoring. As a tutor, I'm charging 100 pounds an hour for my workplace strategy coaching, and that's just now, there'll be more in the future, and you're charging a lot more than that even. But I say that because you want to make sure, when you're coaching somebody, that they get the value, don't you? Okay, that's a problem, especially with dyslexia. Because often people with dyslexia are like, I don't need any more information. I just need to go through this process really slowly and be reminded of it again and again and again. And some people feel embarrassed that they have to go slow. Whereas if you've got the GPT listening to you, the GPT is not in any rush. It's like, hey, I'll sit back, I'll be silent for twenty minutes, half an hour, and do absolutely nothing. Whereas sometimes as tutors we feel like we have to do something, be active and contribute and offer value and move them on. And sometimes that person just needs: I just need you there, just in case I need you, but can I just practice this? There are times where I'm just kind of, look, I'm going to step back. You just go through this a few times and slowly process this. That's what you need right now. You need somebody there. Body doubling is what they call it, isn't it? Where someone's presence is there to make sure that you're still on track, and it's reassuring that if you do need some help at that moment, you've got it. That's what this entity is going to give us.

      Erica: I'm curious, because I think there are those people who are ambivalent, fearful, uncomfortable, and it would be really helpful for the audience if you could give them just a little bit of an idea of how to use this, where to get it, and how to access some of these features. What's your experience with that?

      Darius: So you go to chatgpt.com and you can use it straight away. You don't even need to log in now, or you can open up an account, and you start speaking to it like you would another person. It's really as straightforward as that.

      Erica: How do you access the conversational piece of it?

      Darius: The conversational piece? There's just a little dialogue bar, and you start typing straight into that. Or you can hit the microphone button and start speaking to it, and it will speak back to you.

      Erica: But that's only available on the phone, is that correct?

      Darius: No, it's now available on the desktop and the phone. They've released a desktop app for Mac, and then they'll release a desktop app for everything else. But basically, that's the Omni thing. You just speak to it, and it will speak back. And if you don't want it to speak back, you can say, stop speaking, and it'll show you what it would have said as text. Or you don't even have to speak to it. You can open up a camera and say, what are you seeing? And turn it around: give me your opinion on this piece of art that I'm doing right now, or this sculpture, have you got any ideas? It is utterly mind-blowing what happens when you've got something as intelligent as this sitting by your side. You've just got to imagine. Imagine you could have an expert if you were a businessperson. You said, I wish I had an expert intellectual property lawyer with me right now. I wish I had an expert accountant with me right now. I wish I had an expert contract lawyer that deals with IP. You just say, you are an IP lawyer, give me some opinions on such and such, and it will speak with the knowledge of an IP lawyer and of the processes that are involved with IP law. Or let's say you're an artist, and you said, I wish I had Leonardo da Vinci's compositional perspective here. And I wish I had a pointillist complementary color perspective, or a Scottish colorist's perspective, like Jolomo, looking at this. You just tell ChatGPT, you are a Scottish colorist in the tradition of Jolomo. I want you to look at this painting and tell me your opinion on the way that I've done the complementary colors, what do you think? And it will talk back to you and go, fascinating how you did that. I really love the purple there, and how you did the sun in that sort of orangey glow, but then the reflection was in purple and you layered it underneath the reflections. See, that was really cleverly done. The purple is maybe a little bit too green; it might be useful to put a bit of red in that, etcetera. That's the kind of conversation you would have with a real colorist, and it will do that with you. So it's having someone on that process with you.

      Erica: What's interesting is it's going to speed up all of our processing and our output on a massive level. I'm just thinking, if Thomas Edison had had something like this, he would have been able to create the light bulb in probably a few days instead of however many years it took him. I think it was like a thousand mistakes or something of that sort, but he happened to be very resilient. So even people who aren't that resilient but have great ideas can actually start to produce content at a very fast pace, because they have all of these tools and resources at their fingertips. And I hear what you're saying: this is an incredible resource for a dyslexic individual. It kind of levels the playing field, so to speak.

      Darius: It does. And especially if you think about people with dyslexia who have real difficulty with processes, and that's where their gap is. They have this puzzling gap between being able to articulate their thoughts so clearly with voice, but then, when it comes to writing it out, it's a whole new skill that they have to learn that isn't so automatic. They can definitely do it, but it's a much more laborious skill to learn. That process of connecting those two skill sets becomes either done for you, or so much easier to learn yourself by learning that process.

      Erica: Interesting. So it could really just hit any of those deficits. So if you have a hard time coming up with the word, you could be like, oh, what's that word that means this? It starts with a p, and then, boom, it'll give it to you.

      Darius: That's right. Yes. Or you would actually say, give me ten words that start with a p so I can choose, and it would say them, and you'd go, no, no, no, those aren't the type of words I'm talking about. I do this all the time. It's like an expert thesaurus. But sometimes a thesaurus doesn't think in a dyslexic enough way. It's like, no, I wasn't meaning p, I was meaning the f sound. And it's like, okay, I get it, the ph sound too, I'll give you f sounds, and it'll do flame or phlegmatic, going for the sound rather than the letter. Do you know what I mean?

      Erica: It's funny, because I have word-finding issues from time to time, and I can tell you how many syllables are in the word, which is so bizarre. I know exactly how many syllables, and sometimes I know the first letter, but I just can't access the word. It's going to be very, very interesting, and you can see how you can apply it to all sorts of things. But what I love about it, too, is that say you're struggling in math, and you don't know what seven times eight is. Instead of saying, what's seven times eight, you could say, give me a strategy so I can memorize seven times eight. So that's very different.

      Darius: Yeah. And imagine this, Erica, right? You know your processing styles, right?

      Erica: That's the Ways of Processing.

      Darius: The twelve ways of processing that you do. Let's say you had done that, okay? And you had identified the three ways of processing that you really like. Let's go for the more obscure ones, like musicality. What's it called? Musicality?

      Erica: So here, I'll pick three for you. Interactive, verbal and logical. Reflective.

      Darius: Okay, interactive. No, let's do something instead of reflective. That's too normal.

      Erica: Okay, so why don't we do rhythmic melodic.

      Darius: So, interactive. What was the other one? Verbal. And rhythmic melodic. Okay, now the GPT knows those are your preferred styles of processing. Now, I've got this kind of matrix that I use in my mind: important or unimportant, and then hard and easy. And I'm going for making things easy and important, okay? But most things that are important are hard, and most things that are unimportant are easy. And we want to swap that around: make unimportant things hard and make important things easy. Now, one way of doing that is by making sure you do important things in your way of processing. A simple example of this: it's hard for me to read a book, a novel or a research book, from beginning to end in three hours with my eyes. But it's really easy for me to read it with my ears and listen to it. So instead of saying, I'm going to read that book, I'm going to say, I'm going to listen to that book. I've flipped it into a process that I know I'm good at, so I've moved it into the important-easy segment, and then it gets done. Now imagine the AI knew your three processing styles were rhythmic melodic, verbal, and interactive. And then you said to it, I'm doing this times table, eight times seven. How do I remember eight times seven? And it would go, well, you know how you really like rhythmical things. Why don't we try? And it sings it out. What is eight times seven, by the way, Erica?

      Erica: 56.

      Darius: Okay. Eight times seven is 56. And can you learn a little tune or something? You might sing it in the same tones: eight times seven is 56. I don't know. And so you can connect the way you naturally say eight times seven into 56, and it speaks your processing language. Do you see what I mean?

      Erica: Right. So, for example, if you were a sequential processor, meaning that you like to see things in a sequence, you could say, oh, okay, well, 56 is seven eights: 5, 6, 7, 8. So that would turn it into a sequential strategy. So seven times eight is 56: 5, 6, 7, 8.

      Darius: I see. So seven times eight is 56. Or you could flip it around: 56 is seven times eight, 5, 6, 7, 8. And it shows it to you, sings it to you, and you're like, oh goodness, I get it. Sequential. Love it. 5, 6, 7, 8. Got it. And you've got it for life. That's brilliant. Now imagine the AI had that knowledge and ability that you have, okay? Plus another 30 or 40 people like you around the world, with the experience that you've had over the last 40 years, built into it. Understanding your processing style at all times, understanding your preferred times of doing things, your preferred ways of doing things, and it's there at the drop of a hat. That's what OpenAI just released last week under the radar, and people have not realized what power it's going to have, especially for the world of dyslexia, if it's trained and directed in the right way.

      Erica: Well, and of course, what you're basically saying is that technically you and I could create that backdrop for OpenAI. That's why Sal Khan very well might do that. Different educators can give it their strategies and approaches so that it can be utilized in that way.

      Darius: Absolutely. And I think I'm looking forward to teenagers doing it with their mums and dads. So what's going to happen? I want to see dyslexic people all around the world sitting down, tackling a particular little challenge that they keep coming up against, whether it's telling left from right or learning to read the time or whatever it is. You can chart the processes that people with dyslexia are going to face throughout their life. I mean, I can give you some off the top of my head, so it.

      Erica: Could have just a list of strategies. So everybody throws in their strategies, and then it will be able to share them with everybody.

      Darius: Yes, and there will be a specific GPT for that chat. Like, oh, you're learning to tell the time? You've got to use Philip's clock GPT. It's fantastic. This little boy Philip learned how to tell the time, and he used this technique, and it was really funny. He does it with song and dance and so on, and he taught the GPT how to do it. And what you do is you put a paper clock in front of you, and you move the time, and the GPT says, what time is that, Philip? And Philip says, it's 5:20. And the GPT goes, I'm not sure it really is 5:20. Is that not maybe 5:19? Oh, you have to be so specific. And you can have some sort of fun little dialogue with it, et cetera. We're going to have these specific ones for all sorts of different skills that will help you with all these processes.

      Erica: Think about when they start to gamify it, because that's my secret sauce, and that's what, to me, makes it really fun. I think, as educators, we need to bring that piece to education, the gamification of it, so that kids find the learning process fun. That's so important, because if it's fun, it's memorable; if it's not fun, it's not memorable. It's just really, really important. So I'm curious to see how maybe you and I can help to add that secret sauce to OpenAI.

      Darius: I mean, take, for example, one of your screeners, Erica. And I'm giving away all the secrets, I'm giving away all the ideas here. I don't care, because I just want to see it happen, right? Imagine you've got one of your amazing screeners. I've got a screener for dyslexia on the Apple App Store, and I go through it with people. It's the top dyslexia screener in the Apple App Store. And when I go through it with people, I ask the questions: yes, no, maybe, sometimes, et cetera. And we swipe up and do the next one, and the next one. What happens, which is fascinating, is they end up spontaneously saying things like, oh, yes, whenever I do presentations, I tend to freeze up. I'm really great in meetings, but when I'm told I have to stand up and do a presentation, I freeze up, and I don't know why. I think it might be because I'm scared of reading out my notes, and so on. They say something spontaneous, and there's so much value in that spontaneous reaction, and it's all lost. I go through 50 questions with them, and it's such a moment, really, that you're having with someone when you're going through a screener, because they're not just answering the questions, they're processing something. They're going, gosh, yes, that is so true. Oh, yes. Oh, yes. What if the AI was listening while you took the screener, and it transcribed what the person said alongside the questions? And at the end of it, it was trained to take their comments and insert them into the report that you create.

      Erica: So it's a combination, or does it just create a report? Yeah, yeah, yeah.

      Darius: It creates a report. It's got the questions, it's got their comments, it's got the report. So it would say something like, you know how Darius said he finds it hard to transfer information from one computer screen to another? This is a working memory issue. He said that this often happens when he's doing his expense reports. We would therefore recommend that he has a second screen so that he doesn't need to flick between windows. I mean, that's a common strategy for working memory and split screens. But what it would do is automatically incorporate what that person had said as a quote and so on, which takes assessors hours upon hours to do, because people do do this. A workplace needs assessment in the UK, looking at what challenges you have in the workplace, might take two or three hours. Then they go away and take your quotes and put it all together, and that's another two or three hours. And then it costs $450.

      Erica: Assessments, we've talked about this, are going to be widely available for anything in the next year, and you'll easily be able to generate a report. I mean, I've already been doing that: taking data, putting it in, asking it to generate a report. And it works brilliantly. And soon it'll just be a part of all of these assessments.

      Darius: Absolutely.

      Erica: Assessments are going to change so much in the next year or two.

      Darius: And imagine this. Let's say you did the quiz. Like, I've got the screener, and you've got the screener. It takes eight minutes to go through the screener, right? But often people talk for about five to ten minutes about different experiences and so on. Then at the end of it, it puts it all together, you know, in.

      Erica: Real time, but adds that personal touch that most people lose, which is really interesting. Those stories behind the items.

      Darius: Absolutely. In quotes: Darius said such and such. And then what happens is it reads it back to you and says, Darius, you've just done this dyslexia screener. Here are your quiz results, and here's what your report says. And it would read out what your report says. Instead of a five-page report that you never properly read, it would just read it out to you in natural language in that moment, while you're living in it, and then you can interrupt at any moment. You go, oh, yeah, yeah, I did say that, but there was also this. All right, great, please say a bit more, blah, blah, blah. And then you go, oh, I don't think you quite got me there. I actually meant such and such. Oh, right, okay. What do you mean, like this? Yes, just like that. That's right. Oh, by the way, now that you mention it, this other thing happens, which is really hard and really tough. Oh, I'm glad you mentioned that, because that's a big issue. I'll include that as well.

      Erica: Oh, neuropsych assessments are going to change in such a profound way. Being able to ask, even after an assessment, were you focused or were you distracted? What was this experience like for you? Were you using any strategies? So, for example, perhaps an assessment is measuring your auditory memory, and then it asks you, were you using any strategies? You could say, well, actually, I was using my visual memory to help me, because I was visualizing everything I heard. Well, all of a sudden, that's no longer an auditory processing assessment. It's actually assessing your ability to use a visual strategy. And those are the types of things that we don't get: we don't really know the inner workings of what's happening. By being able to ask individuals to give us more information about how they're processing, that's huge. And then they're going to be more engaged, right? Because many times, when individuals are tested, they're just getting through it. But if they're engaged more in the process and thinking more about what they're experiencing, it's very interesting.

      Darius: And imagine that was happening, but it included someone like you doing the assessment, and there was a third person in the room. So the AI is the third person in the room, like the copilot or the personal assistant, and you're like, what did we say? Oh, we said such and such. Imagine you had that in the doctor's room. The doctor has just said, I'm going to write a letter to the specialist about this. And you'd like to ask, could you just read out what that letter is going to say, so I know what's going to be said? The GP is kind of like, I can't do that, I have to do it afterwards, and it would require another appointment. Well, doctor GPT says, the letter would say something like this, from this conversation: Dear ear, nose and throat specialist, Darius has had some blocked ears. We've identified that it's gone on for many years. Would it be possible to investigate a deep infection? Or something like that. And the doctor would say, does this sound like what we've agreed? And the person would go, yeah, that sounds quite good. But did you mention the thing about the tinnitus in the other ear? Oh, gosh, I didn't, actually. Let's put that in. You put it in, and everyone's in agreement all at one time, while you're in that moment.

      Erica: It's so efficient and effective. It's efficient, effective and comprehensive and human.

      Darius: And human. And that's the beautiful thing about it. Sometimes we feel like this is going to dehumanize us. But the irony of it all is that, because these are processing machines, so much of our life is dragged down by bureaucratic process that, if you smoothed off that process, the doctor could act more like an executive, a human being. And you could too, because you're not concentrating on these lower-grade processes. And that's exactly what we're talking about. These are not Wikipedia pages or Google website pages. These are intelligent processing machines. It's like Steve Jobs said: the computer is a bicycle for the mind. Bicycles make human beings very efficient; computers make human beings very efficient. AI is like the electronic horse for the mind. It's got its own power, it's got an intelligence like an animal's got intelligence, but it's got this partnership relationship where we are riding on top of the AI. It's its own entity, as it were, but we are guiding it, directing it. There's a symbiotic relationship. We are like the executive sitting on top of the horse, and the horse is doing this lower-level function of moving and adapting to the landscape in a way that maybe a bike doesn't, and with a power that a bike doesn't have. It's much more like riding a horse.

      Erica: Yeah. And horses have a certain intelligence.

      Darius: They do. So I'm super excited, not just by GPT-4 Omni, but by what Omni now represents: that this isn't just a bicycle, or an electric bike that is powered to do one specific thing with words. It's this much more dynamic, adaptable entity, which can take in pictures, it can take in sounds, it can take in words, it can take in music, it can take in art. It can take in a picture and output words; it can output songs. Like, I'm showing you a picture of my painting: sing a song about it. What does it make you feel? And it's like, right, I'll sing you a song about the sunset in the Scottish Highlands, in some sort of Scottish style or whatever. This is how it makes me feel. Fantastic. Good to know. It's interesting that that was your first impression. It's literally mind-boggling. And, you know, the people who are going to latch onto this first? Dyslexic teenagers, I hope. They're going to jump all over this. And I would say, let's encourage them. If you're the parent of a child with dyslexia, help them, go with them, like Sal Khan is doing with his son. Go and ask them: how could this help you? How could this help me? Let's use it together. Let's see if we can do our maths with this, our English with this, learning to tell the time with this, learning to write a story with this, learning to do a persuasive essay, learning to write a book title, whatever it is. Help us understand the process of doing this.

      Erica: Yeah, this is a wonderful conversation, and we will continue to have more conversations as AI unfolds.

      Darius: I've got one final thought, I know we need to finish. I would say, if you're an investor, invest your money in helping build dyslexia tools, versions of AI that are specialized to helping with dyslexia. If you're an inventor or a programmer, get into this and start building a GPT that solves a particular problem, like we've mentioned: telling the time, learning to drive, doing a presentation. Like those story people that created that story-writing app, what's it called?

      Erica: Nick Koznick created an app called Storied, which helps dyslexic individuals write, or anybody write, because it allows you just to share your ideas and it helps you to write. Brilliant.

      Darius: And that, I think, is it: wherever you're at, if you're an investor, deploy your resources towards that. If you're a developer with AI skills, go in and start solving dyslexia with AI, with the skills you've got. If you're a tutor, start finding ways to get your knowledge into it.

      Erica: Solve your problems. Yes, solve your problems with AI.

      Darius: Yeah, solve dyslexia with AI. I really believe we can solve dyslexia's challenges with AI in the next three to five years, if people direct it to that. It's not just going to happen on its own.

      Erica: Go beyond dyslexia: go to executive functioning challenges, go to dyscalculia, go to dysgraphia, go to attention, go to whatever your challenge is, and see if you can create a way through it, a way around it, a way out of it.

      Darius: That's right. And if you feel like, oh, I'm 55, 60, 65 years old and I'm kind of past all of this, do you know what I would say? Give your knowledge to someone younger than you who's got these resources, and say, look, I'm going to take my whole body of knowledge, and I want the AI to be trained on this, rather than the generic stuff in the middle 80%. We've got to get that knowledge out of your head so that it can go to the next generation, because what AI is being taught on right now will determine the direction it takes for the next hundred years. I mean, we're talking about the next five years: what content it digests will inform further models in the future, because it will train on itself. Even now, GPT-3.5 is training GPT-4, and GPT-4 is training GPT-5 right now. And so what goes in will change the trajectory. And when you look at AI and what its reactions to dyslexia are, when you interrogate GPT-4, it has a very simplistic view of what dyslexia is: dyslexia is a reading difficulty with such and such. No. Are you stuck 20 or 30 years ago? Have you not read the last 30 years of research showing that it's a processing difference, not a reading problem? And it's like, oh, well, I don't really know. Well, it's not been trained on that. It's not trained on absolutely everything. We need to make sure all the knowledge about dyslexia is being put into these different AIs, so that when parents put their questions into it, the answers they get out are well informed, not just generic, not just what was put on a blog ten years ago by someone who didn't really know what they were talking about.

      Erica: Right. AI is as smart as we enable it to be.

      Darius: Yes, and we have a great opportunity, but it needs to be taken. It needs to be taken. I'm going to stop.

      Erica: We'll continue another time. And thank you so much for listening to us.

      Darius: Thank you for listening to me ramble on. And Erica, thanks for being so patient and listening to me, too.

      Erica: My pleasure. Until next time.

      Darius: Till next time.

      Erica: Thank you for joining our conversation here at the Personal Brain trainer podcast. This is Dr. Erica Warren and Darius Namdaran.

      Darius: Check out the show notes for links to resources mentioned in the podcast, and please leave us a review and share us on social media until next time. Bye.