What we’re doing to future-proof our course businesses
Will AI destroy our education businesses? In this video, I talk with Mark Shust. Mark has taught nearly 15,000 developers Magento through his education company M.academy. Mark is a member of The Lab and one of the first creators I saw locking arms with AI and implementing it into his business rather than running from it. We discuss the future of AI and creators, and how to stay ahead.
Full transcript and show notes
Mark's Website / Twitter / LinkedIn / YouTube
***
TIMESTAMPS
(00:00) Should You Be Worried?
(04:01) How to Get Better at Using AI
(07:42) Get Viral Video Ideas at the Touch of a Button
(09:07) AI as a Teacher
(18:05) Will AI Replace Educators?
(21:45) How to Use AI to Your Advantage
(27:12) SEO in the Age of AI
(31:45) AI and Trust
(37:51) How AI Changes Course Businesses
(40:47) How AI Impacts Professional Development & Training
(44:15) Community & Being Human is the Competitive Advantage
***
RECOMMENDED NEXT EPISODE
→ #184: Amy Porterfield – Her step-by-step process for MASSIVE product launches.
***
ASK CREATOR SCIENCE
***
WHEN YOU'RE READY
📬 Creator Science Newsletter
🚀 Get CreatorHQ (creator operating system)
🧪 Join The Lab (private membership community)
🧞♂️ Get a Personalized Offer
***
CONNECT
***
SPONSORS
💼 View all sponsors and offers
***
SAY THANKS
Jay Clouse [00:00:00]:
The more I use AI, the more I'm faced with the question, is AI going to destroy my business?
Mark Shust [00:00:06]:
That's, like, the first thing I asked, and I got a funny response, like,
Jay Clouse [00:00:09]:
yeah, it's
Mark Shust [00:00:10]:
gonna pretty much replace it.
Jay Clouse [00:00:11]:
In this video, I talked to Mark Shust, one of the first creators I saw becoming an adopter two years ago. We compare notes and predictions about what a future of AI means for creators and how you can prepare for it. And now, enjoy this conversation with Mark Shust. You were somebody that got started playing with AI much earlier than I did, earlier than most folks in The Lab were doing. And so I'm curious if you were always like, this is opportunity, or if there was a point where you were like, this is scary, let me see what I'm dealing with.
Mark Shust [00:00:39]:
Yeah. I've been following it since pretty much GPT-2, when it caught my radar as something interesting. It was more of a toy, though. It could write, like, limericks, but they didn't sound good. And then GPT-3 came out. Okay, it's getting a little better.
Mark Shust [00:00:54]:
It's starting to feel a little more intelligent, but it's still making tons of mistakes, and it can't really write anything that well. And then I think GPT-3.5 came out, and that's when the game completely changed with ChatGPT. It's funny, because it's basically the same model as GPT-3, but with a different UI. And that UI changed everything, because it allowed you to interact over and over again with the AI. Ever since then, it just keeps improving. It keeps compounding. It keeps getting more intelligent.
Mark Shust [00:01:23]:
Now there are reasoning models out there that can actually think through and process things before generating a response. So, yeah, it's just amazing how far these have come since the early days.
Jay Clouse [00:01:37]:
So was it like a toy for you in the beginning? Because you're obviously an engineer. You have an engineer's mind, and so you probably could see some fun aspects of it. But I think a lot of people sitting here today teaching different disciplines in the creator space have a certain level of fear around it. And that's actually preventing them from trying it, I think, in a lot of ways. I think fear for a lot of people is actually this friction, this barrier, that makes them kind of a denier that change is coming, which I don't think is useful or helpful. We'll talk about that. But it doesn't seem like at any point you were worried.
Mark Shust [00:02:10]:
I've been worried since I saw GPT-3 get better, and 3.5 get better. Well, I wouldn't say worried, but it was, like, a cause for concern. Right? I don't think AI is gonna take over the world or anything like that and lead to total chaos and destruction. But it was definitely kind of a toy when it came out, just a fun toy. And that's usually, I think, how some of these innovative things start happening. It's on the edges with developers, and they start tinkering with it. It seems like everything starts with developers and then progresses up, eventually, because our whole society is really based on tech. I do see this mindset, though.
Mark Shust [00:02:44]:
And maybe it's because it started as a toy. But even with developers, I see a lot of pushback saying, oh, it's not that smart. It's not that intelligent. It can't think for itself. And, yeah, we've heard that over and over again ad nauseam. That may have been true a couple years ago, but it's not true anymore at all. They're sort of stuck in this limiting mindset of, yeah, maybe they're scared. Maybe they're afraid.
Mark Shust [00:03:08]:
Hey, it's gonna take my job. It is taking jobs. It's taking tons of developer jobs, at least on the lower end of the spectrum now. But it's starting to move upward on the hierarchy. There's some weird vibe going on with developers. You're either all in with it, and you're letting it think for you, which probably isn't good at all. Or you're on the other end of the spectrum, where you're completely avoiding it, and you're just spending tons of time writing boilerplate code that would be better spent thinking about and solving specific problems. How that whole dynamic is working out is really, really interesting.
Mark Shust [00:03:46]:
I've been using it myself to code up tons of stuff. I'm building a custom course platform right now, and I know and am familiar with design patterns and architecture patterns. It's writing really good code if you know how to prompt and work collaboratively with the AI, and I think that's the key.
Jay Clouse [00:04:02]:
Which I'm guessing you've only learned to do through a lot of trial and error, because, as you said, you've been doing this since GPT-2. So I'm guessing a lot of your comfort in learning how to write with it has come from direct experience. Is there anything you pay close attention to outside of your own trial and error that's teaching you how to prompt and communicate with the AI?
Mark Shust [00:04:27]:
Yeah. It's purely working with it. I definitely have my ten thousand hours in after a couple years. I've been using it constantly on the side. So I think that's how you have to see things: treat it like a toy, experiment with it, find out what its limitations are, find out how you can talk to it, how it responds, look at the results, parse it in your head, and really determine, hey, why is it giving me that specific result? It's usually because I haven't explained something properly. If you don't feed it any context or any other information outside of your current conversation, you can almost think of it like a stranger on the street. I'm walking up, and I ask it to do something.
Mark Shust [00:05:15]:
And it might just give the most generic type of response it can think of, rather than actually thinking about your own situation and scenario. So you definitely have to work with the AI to figure out how it responds and how you can get better results out of it.
Jay Clouse [00:05:32]:
I love that frame. Think about it as if you're walking up to a stranger on the street. That's so good, because you're right. You walk up to a stranger on the street, you ask it, like, a one-line prompt, and it's gonna give you some response, but it's not gonna have much depth or nuance, and it'll probably miss the point of what you're trying to do. But the more information you give it, the better. This is something I've been learning. There seems to be, like, next to no limit to how long and specific your prompts can be to get the result you're looking for.
Jay Clouse [00:06:01]:
Curious to hear, as you've played with it, what has changed about your prompting? Have your prompts gotten longer? Have they gotten more specific, less specific? Maybe I'm wrong, and there is a limit, and it's kind of parabolic. But curious to hear your experience.
Mark Shust [00:06:14]:
It's really interesting. The answer is really nuanced. It depends on what model you're using and what temperature settings. I'm working directly with the API in a lot of my conversations, which means you can control some back-end functionality, like temperature and how long it thinks. I start off with really simple system prompts, or system instructions, I think ChatGPT calls them. I played around with really simple ones. The models are getting smarter, so depending on the model, it may know how it should respond based on that really small system prompt.
Mark Shust [00:06:50]:
But other models aren't as intelligent. One of the best use cases I've had is for a personal and business coach that I set up elaborately. I had a 20,000-character system prompt. It knew everything about me, everything about my business, all my goals and aspirations, everything. It gave really good responses, but I noticed that as I kept adding context to that prompt, it started missing certain things in the prompt. In AI, that's called a needle-in-a-haystack problem: if you give it that much context and then ask it something, it won't completely understand everything in the prompt. So I actually reverted and made it more simple, and I actually wound up getting better responses out of it. It definitely depends on the type of LLM that you're using, though.
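As a rough sketch of the setup Mark describes, here is how a long coaching system prompt and a temperature setting travel together in a direct API call. This is a minimal illustration, not his actual prompt: the model name, the prompt text, and the `build_request` helper are all placeholders.

```python
# Sketch: driving a model through the API directly, where a system prompt
# carries persistent context and temperature is a knob the chat UI hides.

def build_request(system_prompt: str, user_message: str,
                  temperature: float = 0.7) -> dict:
    """Assemble a chat-completion style request body."""
    return {
        "model": "gpt-4o",           # placeholder model name
        "temperature": temperature,  # lower = more deterministic output
        "messages": [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_message},
        ],
    }

# A (much shorter) stand-in for Mark's 20,000-character coach prompt.
coach_prompt = (
    "You are my personal and business coach. "
    "Here is everything about me and my goals: ..."
)
req = build_request(coach_prompt, "Is AI going to replace my business?")

# With an SDK such as OpenAI's, this body would map onto something like:
#   client.chat.completions.create(**req)
```

The needle-in-a-haystack effect Mark mentions is exactly why trimming `coach_prompt` back down can improve answers: everything in the system message competes for the model's attention.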
Jay Clouse [00:07:43]:
After a quick break, Mark and I talk about why AI may destroy online courses. So stick around. We'll be right back. And now, back to my conversation with Mark Shust. You use AI for personal and business coaching, and I have also experienced it as either coaching or learning. I enjoy the process of talking to AI and talking through things, because in my experience, AI is infinitely patient, judgment-free, and meets me where I am. And through that lens, it makes a perfect teacher if you as the student kind of know how to interact with it, because I think there's still kind of a gap.
Jay Clouse [00:08:28]:
Like, in the real world today, great teachers kind of know how to judge what a student needs and teach to them. But I feel like at this moment in AI, it's a lot of the responsibility of the user to help give the context of how we need to be taught. It's like an interface problem. But it strikes me that that's probably a solvable problem, and there's a future where AI is just an incredible, bespoke, personalized, nearly free teacher for each individual. And for creators like you and I who deal in education, that's a confronting possible reality. What is your reaction to this? Are you feeling the same way, or do you see it differently?
Mark Shust [00:09:11]:
Yeah, it's funny. Once I had that coach system prompt, one of the first things I asked was, is AI gonna replace my business? It's, like, the first thing I asked. And I got a funny response, like, yeah, it's gonna pretty much replace it. And I've been kinda terrified ever since. You know, I think everything's evolving, and I think the whole educational system is evolving.
Mark Shust [00:09:32]:
As content creators, and I'm a course creator, we're used to developing really comprehensive curriculums with tons of lessons, hours long. I think that whole concept of the curriculum is either dying or changing. People's attention spans are getting shorter and shorter, not only on the learning side, but also on the teaching side. Even a lot of school teachers have issues; they might love their job, but they just can't progress in their field. So I think learning is going to start becoming integrated into the everyday work environment. It's not gonna be a separate course platform that you go to, or a certain curriculum that you're following. It's going to be collaborative as you're working directly with the AI. Rather than having a separate course platform, it's just going to be integrated into your working and doing. So, it's funny.
Mark Shust [00:10:27]:
Mark Twain had a quote: I never let schooling interfere with my education. Right? And that's kind of how I see it. Like, I've always loved learning, but I've always hated school for some reason. And I think it's just because I'm not working on something that I'm going to be doing, either right now or even years from now. It might be something completely irrelevant. And I think AI is gonna morph it into, rather than learning or doing, it's gonna be learning and doing. It's gonna be mirroring real life. So even as a developer or programmer, as you're coding something, you may not know how to do a certain thing, and you could just ask the AI directly as you're coding live, rather than needing to learn all this and fill your brain with stuff that you may not even need to know.
Mark Shust [00:11:22]:
So it's more of a, you know, just-in-time
Jay Clouse [00:11:26]:
learning capability. To try and illustrate what you're describing: I'm not an engineer, but I recently took a course, ironically now, about how to make an app using Cursor. And if you're listening to this, I want you to imagine you have a window, and anything that you're trying to do in that window, you can highlight a piece of text and say, explain this to me. And then the AI explains what is going on in that window. So if you're coding and you write something and you get an error, you can highlight it and say, what is failing here? And not only will it explain what is failing, it will suggest improvements to make it better. I agree with you that I think that is the type of experience we'll see in a lot of the work that we do. In any tool we use, if something isn't quite working the way it should, it's like having a copilot, somebody sitting next to you who really understands this, that you can just turn to and be like, what is happening? Why isn't this doing what I want it to do? And that interaction happens in private. You know, you're not sitting in an office environment and shouting out loud for your coworkers to hear, why isn't this working? I'm bad at my job.
Jay Clouse [00:12:32]:
It's just like, hey, this isn't doing what I want it to do. Can you quickly give me some personalized, private, very patient help? And as course creators, when we make curriculum, as you're describing, I think a lot of what we do is imagine all of the different starting points people may have, or the different hang-ups they may have. And we design our curriculum to try to overcome multiple objections for different avatars, anticipating different problems people might have. But to the individual taking that course, there's this sort of fluff, stuff that is not necessary to me because I don't have that objection. I'm not having that issue. And so it's not the most efficient or personalized experience. And I feel like AI education is kind of promising this very personalized, based on exactly what you need, exactly where you are, here is the most efficient means of knowledge transfer.
Jay Clouse [00:13:24]:
And by the way, you can respond to me directly, and I'll clarify anything that is confusing in this moment.
Mark Shust [00:13:31]:
Absolutely. I think we're all familiar with course completion rates, right, and how poor they can be. Someone might be a text learner and learn by just reading tons of articles. They might wanna watch a video lesson, or they might wanna even see something in a visual image. So there are different ways. And when AI starts getting integrated with everything and how we learn, it can learn the style and preferences of students and adapt and respond in different ways. So not everything has to be even a ChatGPT window or a chat window. Right? It can reply and ask, hey, do you understand this? And then send you an image explaining it.
Mark Shust [00:14:09]:
And the image generation of AI is getting incredible. It can create complex, nuanced charts and diagrams, even infographics, in seconds. It can tailor the exact lesson to every individual student. And like you said, it's infinitely patient. It's available twenty-four seven. The cost is going to nothing over time. I remember, even getting started, I was spending probably over a thousand dollars a month.
Mark Shust [00:14:37]:
Wow. I just kept prompting. And, yeah, I rose to tier five really quickly. Right now, it's, like, under a hundred dollars for almost unlimited prompting. It's crazy. So the cost is just going to nothing, which means it's becoming even more accessible to everyone else around the world.
Mark Shust [00:14:55]:
Like, it used to be so expensive, and now it can reach anyone across the Earth, learn their preferences, and be infinitely patient. It's just a tremendous teacher.
Jay Clouse [00:15:05]:
I'm going to give another example, because my hunch is a lot of people listening to this episode have kind of kept things at a distance, so they haven't experienced the things you and I have. So this is going to seem obvious to you, but just to give someone else another example. I am a notoriously awful cook. I have next to no skills in the kitchen. And it's something that for years I've wanted to improve, because especially now that we have a baby, my wife has a lot of pressure on her. To cook every night, like, that's just not a fair expectation. It'd be really nice if I could go in the kitchen and cook something.
Jay Clouse [00:15:35]:
But the pressure I felt, or the discomfort I felt, was: okay, I've got to go to Google, I've got to search for a recipe, I've got to figure out what we have in the house, I've got to make a shopping list, go buy that stuff, then come back and cook it according to the recipe. But now, what I've done is found a custom GPT around cooking, and I will just talk to it and say, hey, I want to make dinner for my wife. Here are some of the things we have available. We have some ground beef. We have some cheese. What are some recipes where I can use these as the main ingredients and have a little bit of spice? Then it'll give me, like, five examples. I'll say, okay, this is great.
Jay Clouse [00:16:08]:
Can you give me a substitution for this ingredient? Yeah, here's a substitution for that. Okay, great. We don't wanna have this involved. Okay, great. Then I'm going through the recipe, and it will say, preheat the oven to this degree.
Jay Clouse [00:16:22]:
Okay, I turned on the oven, but should that be bake or broil? You can just get this real-time interaction at the moment that you need it, which you can't get from a static recipe. It's just been a wonderful experience. One morning I made a frittata. My wife came downstairs from bed and she was like, what happened? How did you do this? Did you do something wrong? Like, what are you asking forgiveness for? And there was nothing. I just had the ingredients, and I had someone who could basically teach me to cook that morning in real time. It's just a wonderful, wonderful thing. So the question on my mind, obviously on your mind, and probably on the minds of a lot of people listening to this: for people who create education like you and I, what does that mean for us? Is it going to replace us? Have you found a way where you think, actually, I will work with the AI and continue to offer education? What does it mean for your education business?
Mark Shust [00:17:13]:
Side question: how good was the meal? It was great. It was great. I tried that too and had an awful result. Oh, really? Yeah, the ingredients just didn't work. I was trying to make a copycat, like, Chipotle barbecue thing, and it just gave me so much cumin. Wow. It was so overwhelming.
Mark Shust [00:17:37]:
So, but...
Jay Clouse [00:17:39]:
Well, good to know. I've been keeping a log in Notion. We rank every meal, because part of this is I'm tracking it. In CreatorHQ, I'm tracking how many meals I've cooked this month, because it's part of my KPIs. And we rate them. And everything has been rated super well. It's been, like, really good.
Jay Clouse [00:17:53]:
But I imagine it could miss. It just hasn't for me yet.
Mark Shust [00:17:57]:
But either way, that was an objection that most people might have immediately. Like, oh, the meal wasn't good. Right? But it could have been great. And is it gonna get better over time? Like, is there any proof that these AIs or LLMs are gonna get worse? That's the question you have to ask. And then, how much better are they going to get? Look at the graphs from even a year ago. How has AI progressed from a year ago to now? Right? It's been tremendous, at least coding-wise. It's been unbelievable. In writing, even Grok, the xAI model, is incredible in the responses that it gives.
Mark Shust [00:18:36]:
And it did that in just such a short period of time. So I see things getting tremendously better, and if you're avoiding this, you're probably gonna be in trouble. So if you're a cook, that's probably a threat to your business, or you think it may be, but it's not. Right? If you're a world-renowned chef, people going to a restaurant are not gonna want robots cooking their food anytime soon, but you can use it as a collaborative tool to improve your own recipes. Let's say you're Gordon Ramsay, and you give it your ingredients: hey, is this missing a special ingredient? What's something out of the box that maybe I didn't think of, or something inspired by a certain cuisine or a certain demographic in the world? Right? You can collaborate and use the context from different industries and apply it to your own industry. I've found really interesting results when you take different vertical niches and combine them together.
Mark Shust [00:19:35]:
I haven't heard of too many people using that, but you get sort of original insights, at least in your field. So it can't really think creatively, too much, but it's getting better and better, and it is getting to the point where it can start providing you some kind of creative, original thought, based at least on different fields. Because even within the collective human knowledge, there's very little original and creative thought in the world. Usually, something creative is inspired by something else, or by another industry. So you can almost think of it like that: maybe it will never have creative thought the way humans do, but it may be helping you get inspired and create something new or original that maybe you didn't think of.
Jay Clouse [00:20:33]:
The thing that kind of broke my resistance was that as I started to talk to ChatGPT just as, like, a thought partner, and same with Claude, I realized, oh, I'm a small language model. I'm a very small language model. My brain is that, and this is a large language model. And there are some benefits to being a small language model. Like, I think I am very focused around a certain type of information, and so the connections I can draw are very contextual in that way, and that might take some massaging to get there with something that's larger and more generalized. But that's probably what we're going to see in AI anyway: we'll have these very vertical-specific models that are trained specifically for cooking, trained specifically for engineering, trained specifically for content creators. It's just undeniable to me. So the question, you know, I come across is, well, do I make more curriculum? Or do I change the type of products and services that I sell? I think the first step that you took is you built a chatbot into your existing course curriculum.
Jay Clouse [00:21:36]:
Can you talk about that?
Mark Shust [00:21:37]:
What's funny is the AI created all of this for me. It created the name Maggie Bolt, inspired by Magento Bot. So that's the name of the bot. It gave it a personality. And basically, it scrapes my lesson content and injects it into the context of the LLM, so she's very context-aware of what you're talking about. It's sort of inspired by Khan Academy as well. They have an interesting bot.
Mark Shust [00:22:07]:
I think it's called Khanmigo. And it makes you question sort of everything. It's constantly prompting you and asking if you understand a certain concept, and it wants you to explain it in your own words. And it can detect and, of course, respond and tell you if you're correct or incorrect. So that was one lesson; if you extrapolate that out, training it on the collective knowledge of everything is really, really difficult, even in a specific vertical. But I've been thinking it through, and I think courses are just too long. And again, I think learning is going to be more embedded and integrated in what we do every day.
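The context injection Mark describes can be sketched in a few lines: paste the current lesson's text into the system prompt so the tutor bot knows exactly what the student is working through. The function name, prompt wording, and lesson text below are illustrative, not the actual Maggie Bolt implementation.

```python
def tutor_prompt(lesson_title: str, lesson_text: str) -> str:
    """Build a lesson-aware system prompt for a tutor chatbot."""
    return (
        "You are Maggie Bolt, a friendly Magento tutor.\n"
        f"The student is currently on the lesson: {lesson_title}\n"
        "Lesson content for context:\n"
        f"{lesson_text}\n"
        "Prompt the student to explain concepts in their own words, "
        "then tell them whether their explanation is correct."
    )

# Hypothetical lesson snippet; the real bot scrapes it from the course.
prompt = tutor_prompt("Dependency Injection",
                      "Magento resolves class dependencies via di.xml ...")
# This string would be sent as the system prompt alongside the student's chat.
```

Because the lesson text rides along with every conversation, the bot can quiz the student on exactly what they just watched, the Khanmigo-style behavior Mark mentions.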
Mark Shust [00:22:53]:
So when I started building my courses, I had this grand idea to build a zero-to-hero course that has everything to do with Magento. But the problem was, I would still be building it right now, five years later, if I had done that. So what I realized is, I need to break this down into multiple courses, and I need to break lessons down even simpler. Everything gets smaller and smaller. It's chunk by chunk, bit by bit, byte by byte. And then I pretty much created the idea of atomic lessons, and other people have this idea as well. If we want to extrapolate this out further, to where AI is going, if it's going to be integrated into our daily life and I want to ask it something specific, it needs to be able to connect and look up that information, that specific thing that you're asking about, and pull back data from it in some kind of model or knowledge graph. And that's why I see education going towards more atomic units that are individually consumed, because they can be reassembled in different ways. So if you ask it a certain question, it may be able to look up that knowledge graph, pull out two or three individual atomic lessons that are really small and concise and short and focused, combine them together, and give you a reply that you normally wouldn't have gotten in a regular course.
Mark Shust [00:24:22]:
So I see a big shift in where education is going, and I think that may be, you know, one plausible solution that comes out of it.
Jay Clouse [00:24:32]:
The way you're describing atomic lessons versus, like, a typical course chapter, it sounds like you're saying this would be short in duration. It'd probably have some sort of metadata, like a title, that is hyper-specific to what is included inside of the video, rather than, like, a twenty-minute video that might include the content of 20 atomic lessons but would require the machine to transcribe and understand all of the information within that video before it could connect that information. It's keying off of shorter, more specific bits. Is that kind of what you're describing?
Mark Shust [00:25:08]:
Yes. So you have almost the same needle-in-a-haystack problem I talked about before. LLMs are trained on everything in the world, and they aren't really good at niche-specific topics. That's part of building, like you said, a small language model, something more nuanced, something very specific to a certain topic. If you can chunk that out in a way that can be categorized and filtered and searched upon and indexed, it may be able to give much better results. Because when you have specific lessons on just one specific thing, and it doesn't assume any preexisting knowledge, you'll have a very focused answer that's correct, that won't hallucinate. And it can be combined with different lessons as well to create something new and personal, even for a specific user.
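The atomic-lesson idea can be sketched as a toy retrieval step: many short lessons indexed by topic, ranked by overlap with the question, and only the top matches handed to the model as context. The lesson titles and topic tags here are made up for illustration; a real system would use embeddings or a knowledge graph rather than keyword sets.

```python
# Toy "atomic lesson" index: short, focused units tagged by topic.
lessons = [
    {"title": "Installing Magento",   "topics": {"install", "setup"}},
    {"title": "Dependency injection", "topics": {"di", "architecture"}},
    {"title": "Creating a module",    "topics": {"module", "setup"}},
]

def retrieve(query_topics: set, k: int = 2) -> list:
    """Rank lessons by topic overlap with the question; keep the top k."""
    scored = sorted(lessons,
                    key=lambda l: len(l["topics"] & query_topics),
                    reverse=True)
    return [l["title"] for l in scored[:k] if l["topics"] & query_topics]

context = retrieve({"setup", "module"})
# The matched lessons' text would then be combined into the LLM's prompt,
# reassembling atomic units into an answer no single course chapter holds.
```

Because each lesson covers exactly one thing, the retrieved context is small and specific, which is what keeps the generated answer focused and less prone to hallucination.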
Jay Clouse [00:26:01]:
I see this user behavior with my wife, actually, because when TikTok was momentarily banned in the U.S., she was one of the people who decided to erase it from her phone, and she couldn't redownload it for, like, two weeks. And she was having a meltdown, because TikTok had become her primary search engine. She found that she could search things like Ikea couch, Alden and Ivory, or Aden and Ivory, which is the brand of couch we were thinking about buying at Ikea. And TikTok would serve up exact, specific videos for that brand of couch, and she would get, in a very short period of time, exactly what she was looking for. And when TikTok was gone, she couldn't do the same thing on Google. She couldn't do it on YouTube. She couldn't find the same bite-sized, contextual information: reviews, teaching, you know, how to cut a pineapple.
Jay Clouse [00:26:55]:
You can just find a perfect tiny video for that on TikTok if you search for it right now. So I see people leaning that way, because we are a convenience creature. We want convenience, we want accuracy, and we want it in the shortest period of time possible. So I do kind of see this world, but that sounds like it's counting on these LLMs indexing your content and bringing it back. And I'm interested to hear what you think about SEO and indexing of content, because we hear from a lot of people in this space who have primarily had their traffic driven by SEO. They saw a big downward shift with the helpful content update. Now Reddit and Quora are getting all the traffic. But in general, if we move to searching for information through an LLM like Perplexity, how does our content get surfaced or indexed? Do you see the original author being credited in those results?
Mark Shust [00:27:51]:
With large language models, that's sort of exactly what it is, right? They're not nuanced on specific topics, so you're gonna get a collective result of the entire world, and it's probably not gonna be as good as a small language model, something trained on your own specific info. You can, of course, create your own model. It's a little different than your own custom GPT, because it would have its own knowledge graph and everything. Yeah, that whole shift to LLM search is interesting. It's definitely changing the SEO game a little bit, and you could get surfaced in results, but it's a crapshoot. It's just like how search was in the beginning days.
Mark Shust [00:28:33]:
It was very hard to get returned as number one in search, and that's what everyone wanted. You know, it was top 10, and then top three. So the ability to actually serve up specific content from an LLM probably isn't that good yet. Like you talked about with TikTok, that was specific because it was personalized to your wife's browsing habits: what videos she watched, which ones she dwelled on. TikTok knows all of that about you, and then it can serve up a very specific ad or piece of content depending on what you're browsing. That's why there are even a lot of rumors that Google or Apple randomly recorded your voice through your phone's microphone, because people would see an ad on Facebook an hour later for something really weird they had just been talking about in a conversation with other people. And the Facebook engineers are adamant that their stuffy suits and lawyers would never, ever allow anything like that.
Mark Shust [00:29:34]:
So they just said that's how good the algorithm is, that it knows pretty much exactly what you're looking for, and it knows when to serve it up. Something similar could be applied to learning. So let's say I'm learning, and it's 9 AM and I just had my coffee. Maybe it learns that I learn best forty-five minutes after I've had my first coffee. So it can ping me and say, hey, it's time to learn, here's something you may want to learn. Right? It just feeds into your habits.
Mark Shust [00:30:10]:
So it's not on a specific rigid schedule. It can be optimized for how your brain works. So it could be continuously perfected and personalized in real time, too. That's sort of the magic of teaching, too. A lot of it is knowing when to teach, when to push hard on someone, when to back off, when they may not be ready to learn something new.
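Mark's habit-based nudge (ping me forty-five minutes after my first coffee) can be sketched as a tiny scheduler. This is purely illustrative: the function names, the 45-minute default, and the idea of averaging past productive sessions are assumptions, not any real product's behavior.

```python
from datetime import datetime, timedelta

def learned_offset(history: list[int], default: int = 45) -> int:
    """Estimate the best minutes-after-coffee offset from past sessions
    the learner rated as productive; fall back to a default (hypothetical)."""
    return round(sum(history) / len(history)) if history else default

def next_study_ping(coffee_time: datetime, offset_minutes: int) -> datetime:
    """Schedule the next learning prompt relative to the learner's habit."""
    return coffee_time + timedelta(minutes=offset_minutes)

coffee = datetime(2025, 3, 1, 9, 0)
ping = next_study_ping(coffee, learned_offset([40, 45, 50]))
# ping lands at 9:45 AM: forty-five minutes after coffee, per the history
```

A real system would, as Mark describes, keep refining that offset from every session instead of using a fixed schedule.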
Jay Clouse [00:30:34]:
So when you talk about these atomic lessons that you want to create for your educational content, is the thought that you will continue to build a reputation as the Magento guy, and so people will come to you and your specific language model around Magento because they trust you to give context-specific, good information on that subject in particular? And they'll use a language model created by you that is trained specifically on your material?
Mark Shust [00:31:06]:
Yeah, pretty much. So right now, I think the most advanced coding tool is Claude Code. It's a terminal-based editor, and you can ask it to code anything you want, and it does a really, really good job, to the point where one of the founding members of OpenAI coined the term vibe coding not too long ago. He talked about how you could just sit back and let the editor code for you. But I see that as nuanced, because you have to know how it replies back, and you have to know what's good and what can be improved, even at an architecture level. Similarly, when you're learning a specific topic about anything, the LLMs aren't really trained on that specific data. And I don't know if they ever will be.
Mark Shust [00:31:55]:
If they'll ever be nuanced. So I think small language models will be on the upswing, because you can train them on specific data that's more nuanced. And the more complex the topic is, the harder it is to return a result, because it's, again, that needle-in-the-haystack problem. It hallucinates and everything. So when you break things down and create your own knowledge graph on a really complex topic, you can deliver much better results.
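The "train it on your own nuanced data" idea Mark describes usually starts with retrieval: before the model answers, look up the most relevant chunk of the creator's own material. Here's a minimal bag-of-words sketch; real systems use learned embeddings, and the lesson snippets below are invented examples, not actual M.academy content.

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Bag-of-words term counts; a crude stand-in for an embedding model."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# A creator's lessons, broken into small "atomic" chunks (invented text).
lessons = [
    "Magento dependency injection is configured through di.xml files",
    "Magento layout XML controls page structure and block placement",
    "Magento checkout customization uses Knockout.js UI components",
]

def retrieve(question: str) -> str:
    """Return the lesson chunk most similar to the question."""
    q = vectorize(question)
    return max(lessons, key=lambda chunk: cosine(q, vectorize(chunk)))
```

A pipeline like this would feed the retrieved chunk to the model as context, so answers stay grounded in the creator's own nuanced material rather than the collective web.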
Jay Clouse [00:32:22]:
After one last break, we talk about how Mark is future-proofing his education business, so don't go anywhere. We'll be right back. And now, please enjoy the rest of my conversation with Mark Shust. Do we think that improving models will eliminate the needle-in-the-haystack problem? Possibly.
Mark Shust [00:32:40]:
So the context length that we can feed into the LLMs is getting larger and larger. That's the amount of data that you can have in a single conversation. I think Google has something like a two-million-token context length in Gemini, which is the longest, but it does, again, have that needle-in-a-haystack problem. As that improves, it may be able to find those needles. It's just a wait-and-see game. History is telling me that they will be able to do it, because they've been able to overcome all of these obstacles so far. It really depends on what you're working with and how well the model can find that data within that large context length. There are rumors that a lot of the LLMs, even OpenAI's, have an unlimited context length.
Mark Shust [00:33:37]:
So I don't know how that's going to factor in. Maybe they've already solved it internally. It's more of a wait-and-see, though, because there's a lot of guessing about how this is going to affect everything.
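The "needle in a haystack" Mark and Jay keep returning to is also a concrete benchmark: bury one out-of-place fact in a very long context, then ask the model to recall it. A toy harness for constructing such a prompt might look like this; the model call itself is omitted, and the filler and needle text are made up.

```python
def make_haystack(filler: str, needle: str, n_sentences: int, position: int) -> str:
    """Build a long context with one out-of-place fact buried at `position`."""
    sentences = [filler] * n_sentences
    sentences.insert(position, needle)
    return " ".join(sentences)

needle = "The magic number for this exercise is 7481."
context = make_haystack(
    filler="The sky was a pleasant shade of blue that afternoon.",
    needle=needle,
    n_sentences=5000,   # tens of thousands of words of filler
    position=2500,      # bury the needle mid-context
)
# The evaluation: prepend `context` to a question like "What is the magic
# number?" and check whether the model recalls it across positions and sizes.
```

Charting recall across many positions and context lengths is essentially how the popular long-context evaluations show where retrieval inside the window starts to degrade.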
Jay Clouse [00:33:46]:
It's really interesting in my own behavior. So I work predominantly with ChatGPT and Claude. I've tried Gemini a little bit, but I haven't given it a good enough shake. For some reason, I'm biased against Grok, but people are saying good things, so I should give that another shot. I was gonna try DeepSeek, but I was gonna do it with an API into TypingMind, and ever since I've been trying to try it, it's like, we're over limits, you can't do that.
Jay Clouse [00:34:08]:
But anyway, what's really been interesting is that since OpenAI rolled out o1, the reasoning model that takes a little longer to respond to you, I've noticed that with Claude 3.7 and even o3-mini from ChatGPT, when I get an instant response to my questions, I actually trust it a little bit less. It's trained me to think the best responses come with a little bit of consideration from the product. And it's interesting, because I know Claude 3.7 is probably better than 3.5, but it's faster, and it makes me feel like it's working less hard. I don't know what to make of that, but I would have to think that the longer the context window, the more compute, the more time, it should take to contextualize what I'm asking. It's almost like hubris on my part: what I sent requires so much thought. Spend more time thinking about it.
Mark Shust [00:35:06]:
I think that delves into... delves. That's the first time I've used that word in real life.
Jay Clouse [00:35:11]:
You are an AI.
Mark Shust [00:35:14]:
See, after a while, you start just becoming an AI, I guess. But yeah, that's just human psychology, I think. Even in programming, it's funny: years ago, when you hit place order, sometimes that credit card order could go through in a second, and people almost thought there was a bug in the program when it happened instantly or really fast. So programmers put in a loading spinner and had people wait thirty seconds before they got a response, because then they thought it was actually doing more work, like it was customizing something for them, when it could have happened almost instantly. I think that's just how our brains work.
Mark Shust [00:35:49]:
We think that something that takes longer is going to get a better result, and that's not always true. Claude 3.7 has been amazing in my experience. And they've actually built reasoning inside of the model, so it knows internally to think before it responds to something, and it just does it faster and cheaper. That's definitely something to think about. It knows when to switch context in the background, which probably makes it the most intelligent AI right now. And it's funny you mentioned Grok.
Mark Shust [00:36:28]:
Yeah, I noticed something really weird with that too. I don't know if it's because Elon runs it, but when I ask it to explain something simply, it just can't. It goes into extreme technical detail. Maybe I'm not prompting it correctly. So it's funny how different models can respond in different ways, and it's best to just experiment with all of them and find out what works best.
Jay Clouse [00:36:53]:
Talk to me a little bit about language, marketing, positioning. As you think about the future of M.academy, your education platform, do you see yourself marketing courses? Do you see yourself messaging this as education or as a language model? How are you thinking about a change in language if you're thinking about a shift in how education is delivered?
Mark Shust [00:37:17]:
I think right now, you can only go on what you know. And right now, we don't know where any of this is gonna go, how far it's gonna progress, how things are gonna change. The entire landscape might change in two or three years for all we know. So I have to deal with what I'm dealing with now. And right now, courses are still premium grade. Right? I need to take a course to learn something and save time. That's not changing this year, or probably even next year. But it could change over time as these LLMs and AIs develop.
Mark Shust [00:37:50]:
What I do know, though, is that content abundance, the ability to create and generate and consume content, is going to be ever expanding. And people's attention spans seem to be getting shorter and shorter; they're going the other way. So I think the value in a specific membership or course platform comes down to personalizing the educational content for specific students, so you actually learn their styles. And you could do that today, right, just by asking and polling your students: how do you learn? And you can cater your content to them. And then community is a huge aspect. If you get stuck on an AI issue and it can't solve it, where do you go right now? You're stuck. I need to talk to an expert.
Mark Shust [00:38:39]:
I need to talk to someone. I need to get advice: hey, it gave me this result, I don't understand it, it's not explaining it to me well. Right? And a lot of students just won't want to be in a terminal or use an AI like it is right now. It's a different tool. It's not embedded currently.
Mark Shust [00:38:56]:
Hardly anyone uses Cursor in the PHP world. At least in my industry, it just doesn't generate good code yet. That may morph and change over time, but we have to wait and see. So there's that personalization and the community, and then there's the verification that you actually know what you know. It's more of a verification of knowledge, and you can't really get that with any AI. Right? No one can say, hey, tell me how much Mark knows, what he knows about, and what his experience is. There's nothing to do that.
Mark Shust [00:39:34]:
So I think a course, and getting verified and certified, is definitely the way to go. And I think the value shifts to those three things, rather than to the actual course content itself.
Jay Clouse [00:39:49]:
How much of your business comes from, like, professional learning budgets?
Mark Shust [00:39:54]:
From me? I used to buy courses all the time. That's ironic. I don't buy that many anymore, but I think that's because of the phase of life I'm in right now.
Jay Clouse [00:40:05]:
I mean more like your students. Are there companies purchasing courses so that their people can take your material and become better employees?
Mark Shust [00:40:17]:
Yes, and I actually see that growing with companies. It's hard to find knowledgeable workers. I even heard a story just recently when I was over in Florida. They interviewed someone who was supposed to be a senior developer, with seven years of experience, and he just completely bombed the interview because he had stopped thinking for himself. He was using AI for everything.
Mark Shust [00:40:47]:
So that's part of the big problem. And there's another big problem with junior developers. Right? It's very hard to onboard anyone who's junior right now, depending on your industry, because right now with Claude Code, I can hop into a terminal and tell it to code something, and it does a better job than a junior developer would. So there's those two aspects. One is what I call AI atrophy, where people stop thinking for themselves and can't think creatively and independently, which is a really big problem. And there's another problem where you have developers who may want to learn, but they can't get a position somewhere because the bots have taken over their job, at least at some level, so they can't upskill. So I believe companies will definitely be onboarding and training more new employees on course platforms and memberships, because it's really the only way to advance their skills quickly and deliver a result right away.
Jay Clouse [00:41:49]:
Yeah, I was thinking about this. I think there are a lot of folks who do tap into these professional learning and development budgets. And as much as I feel like courses as a business model are changing, I feel like uncertainty is a time when, if there is money available, there might be new unlocks in those budgets. So in a world where how people learn is changing, if you're at the front end of that, I feel like you might have an opportunity to unlock some of those learning and development budgets if you can help these companies and their employees upskill in this very rapidly changing time. I could see a case to be made for that.
Mark Shust [00:42:33]:
I think you've also gotta be careful of not taking on too much change if you're a creator. Don't change your entire business, because things usually progress very slowly over time. I've learned how to do that. My previous company was actually called Change, ironically. If you change too quickly and try to do too much, you'll probably fail, because there's that unknown you get into, and you don't know how people are going to react to a giant change like that. So just slowly and iteratively changing and morphing your system to get better, sort of like how I added a chatbot to my learning system, is probably the way to go.
Jay Clouse [00:43:18]:
I've seen so many content creators who just go all in now on talking about AI, which for a lot of them was a complete 180-degree change from whatever they were talking about before. When you've successfully built a reputation for being known for something and you completely change direction, that's a big risk. It's a big cost. So I agree with you that I would be careful about jumping in and just grabbing the latest buzzword for the sake of trying to capture some of the tailwinds. Because, one, it's getting increasingly competitive all the time. And two, there are only so many people who are interested in staying at the absolute edge of this constantly. Most of us just want to get incrementally better. But you're right: the pace of change in an actual professional context is much slower than the pace of change of the technology.
Jay Clouse [00:44:11]:
Things are going to be possible much sooner than they're actually implemented and experienced at the company level. You mentioned community, which, as I think about the future of this business and how I'm approaching it, I kind of see as a dichotomy. One, you could lean into working with AI for a new model of education, because as I shared earlier, my thought is that the way to interface with this as a teacher is just not very intuitive yet. So building a better interface behind a trusted brand, I think, is a great path forward for education. A second path is focusing more on community and the human side of things. That's the path I'm more interested in following right now with The Lab and doing in-person events. Even in my writing, I try to just be more human than, like, here are the 10 steps and here are the bullet points. What does community look like in your business, and how is that changing?
Mark Shust [00:45:11]:
Yeah. I run a private developer community geared just for Magento developers. I have a contractor who helps answer questions, and he's been fantastic, very knowledgeable, like an encyclopedia. So that whole community aspect is so important, and it's the main reason a lot of developers enroll: they get a response that's not from a bot, that's from a human who's thought through things and can deliver an answer at a high level. You can't get that experience anywhere else other than a private community. Even the public communities are just overloaded with AI bots and spam comments. You notice it on LinkedIn; I see posts like that constantly.
Mark Shust [00:45:56]:
They're AI generated, and the comments too. I try to block them as much as I can. It drives me crazy.
Mark Shust [00:46:04]:
I'm sure we've all, like, dabbled and experimented with that. But I think people do want a more human connection than what they're getting right now, and I think that's a great way to do it. Even in-person events may be a good way to add to your membership, if you can do something like that. Anything where you can make a personal, one-on-one connection is probably gonna get you a better result than someone who doesn't focus on that.
Jay Clouse [00:46:32]:
You also get very specific experiences. We had a post in the community yesterday where somebody asked, should I spend this many thousand dollars on this program? It was a coaching program. And we had folks in the community who had experience with it, and they were able to weigh in and say, here's my experience with it, yes or no. Whereas if you ask that of an LLM, it's gonna give you a very strategic pros-and-cons list, but it's not gonna be able to tell you, this is my experience, it was very negative because of this, and I would stay away from it. So I looked at that post and I was like, wow.
Jay Clouse [00:47:02]:
That's a huge ROI.
Mark Shust [00:47:04]:
Yeah. Even in conversations I've had with you: the LLM will say something like, hey, you should go down this path, and then you talk to someone else and they're like, no, that's a horrible idea, because this, this, and this. Right? You can't get that with an AI, and you probably never will. It's just human experience; you can't really touch that. You can't replicate a human spirit, their interactions, their experiences, what they've been through, and their advice.
Mark Shust [00:47:32]:
It's best to just take advice from wherever you can get it and then make your own decision, of course. But it's great to get that feedback from an actual human, because it's just becoming less and less available.