Scrum.org Community Podcast
Welcome to the Scrum.org Community Podcast, a podcast from the Home of Scrum. In this podcast we feature Professional Scrum Trainers and other Scrum practitioners sharing their stories and experiences to help listeners learn from the experiences of others.
AI on Scrum Teams: Context, Consistency, and Collaboration - Q&A Part 3
In this episode of the Scrum.org Community Podcast, Eric Naiburg, COO at Scrum.org, and Darrell Fernandes, Executive Advisor at Scrum.org, continue to dive into how to make AI a true teammate in product development while answering questions from a recent webinar on the topic. They explore the importance of creating a context library to store domain-specific knowledge, enabling AI to provide accurate, efficient answers. Learn how consistent definitions, thoughtful prompt engineering, and regular updates to your AI context can improve team collaboration, reduce costs, and empower Scrum Masters to guide effective AI usage.
Welcome to the Scrum.org Community Podcast, the podcast from the Home of Scrum. In this podcast, agile experts, including Professional Scrum Trainers and other industry thought leaders, share their stories and experiences. We also explore hot topics in our space with thought-provoking, challenging, energetic discussions. We hope you enjoy this episode.
Eric Naiburg:Hi, and welcome to today's podcast. My name is Eric Naiburg, and I'm the Chief Operating Officer here at Scrum.org, and I'm joined by Darrell Fernandes, who's an executive advisor, and he'll introduce himself in a moment. What we're doing today is talking about some questions that we received during our webinar, Managing Your AI Teammate: Turning AI from Experiment to Strategic Partner. Throughout the webinar, we received a whole lot of questions, and we weren't able to get to all of them, so we thought, hey, why not? Let's talk about them here. Darrell, do you want to introduce yourself real quick?
Darrell Fernandes:Sure, appreciate it, Eric. Looking forward to this. Darrell Fernandes, been around technology since the late 80s. Recently jumped in here with Scrum.org to look at some AI capabilities and where AI could play, not only in the role of the Scrum team, but also in product development and within Scrum.org itself. So really looking forward to talking a little bit more about AI as a teammate and how we might think about AI as we go forward.
Eric Naiburg:Thanks, Darrell, and thank you for joining the podcast today. We're on to our next episode, and we're going to focus a little bit more on prompting, prompt engineering, and maybe a little bit about how we work effectively with AI. Again, just as a reminder, my name is Eric Naiburg, and I'm joined here today by Darrell Fernandes as we talk about AI and some of the questions that came up in our recent webinar that we weren't able to get to. So Darrell, thinking about this a little bit, give us some ideas, some thoughts around how we train our AI models on specific domain and contextual knowledge.
Darrell Fernandes:Yeah, I think if you have an enterprise model that you want to train, you can certainly do that, and some organizations are going that way. What I've seen in industry is that the vast majority of organizations are using existing models. So it's not so much training the model, if you will, on your domain and contextual knowledge; it's making sure you have a really good, concise place where you can keep your domain and contextual knowledge, so that when you're asking AI questions, you can bring that context, that domain knowledge, into the conversation with AI. So in many organizations, what we're recommending, and what we're seeing others recommend, is that you build a context library, a context warehouse, a context capability for your specific need. If we're talking about a Scrum team, a Scrum team is working towards a product vision, and that product vision has a contained set of context around it: the users, the stakeholders, the product features, the glossary of terms for that product, the compliance constraints for that product, the industry, the geography, the demographics. All of those data points are unique context for your capability. And having those easily and readily available for anybody on the team who's going to use AI helps so much in getting a more efficient answer out of AI.
Eric Naiburg:So we're building a library, we're storing that library, and we're training our models to have that information so we don't have to repeat ourselves. They're becoming experts in our domain as we're feeding them.
Darrell Fernandes:Absolutely. And once we start that iterative conversation with AI in the context of that project, whether that's called a project in Claude, and I'll use Claude language for the moment, it will build up over time, and it will assume some of those things in a productive way over time. That's really how you do it. But the first step is just to make sure the team is using the same consistent definition of a user, the same consistent definition of the market, the product vision, the glossary of terms, because that'll help AI get there faster, number one. But it's also a really good exercise for the team to get on the same page, to make sure the team understands things the same way. And that's where the Scrum Master can play a really big role in making sure that's happening in a productive way. It can be as simple as a shared drive. Whatever technology platform your organization is using, you can create a shared drive for your product team that has all that relevant context in it, and you just pull from it when you're using AI. I think it becomes really valuable. It also creates a bit of a work item over time, if you will, because you have to make sure it's still current. The market changes, your competitor set changes, you add new glossary terms. You have to keep that context library up to date, at least periodically, and curated, and make sure it's workable for the team to continue to use. And I think that's a really interesting evolution of AI as you manage your context over time, because a year from now there's a new product owner, a different developer on the dev team, different perspectives. You've got to make sure that context library is reflective of all of that.
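The shared-drive context library Darrell describes can be sketched in a few lines. A minimal sketch, assuming the library is a folder of Markdown files; the file names and the `build_context_preamble` helper are hypothetical, just one way to assemble that shared context before every prompt:

```python
from pathlib import Path
import tempfile

def build_context_preamble(library_dir: Path) -> str:
    """Concatenate every context file into a preamble to prepend to AI prompts."""
    sections = []
    for path in sorted(library_dir.glob("*.md")):
        sections.append(f"## {path.stem}\n{path.read_text().strip()}")
    return ("Project context (authoritative, keep answers consistent with it):\n\n"
            + "\n\n".join(sections))

# Hypothetical context library on a shared drive (a temp dir stands in here)
library = Path(tempfile.mkdtemp())
(library / "glossary.md").write_text("Sprint: a fixed-length iteration of one month or less.")
(library / "users.md").write_text("Primary user: Scrum Masters at mid-size product companies.")

preamble = build_context_preamble(library)
prompt = preamble + "\n\nQuestion: draft three Sprint Goal candidates."
```

Because every team member builds prompts from the same files, the consistent definitions Darrell mentions come along automatically, and updating a single file updates everyone's context.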
Eric Naiburg:And, oh, by the way, take AI out of the picture. We have this problem with our teams today, but we kind of ignore it and just assume that they're going to talk, or assume that they're going to make mistakes and learn from those mistakes. And we're not building those things, or we're building those Confluence pages or those SharePoint documents that they're not reading anyway, right? So at least in this case, if we're building these and collaborating as a team, and we're continuing to add to them, and we're always pointing our AI engine at those documents, we may not be reviewing them all the time, but that AI engine is constantly referring back to them and constantly learning from them, so we're maybe even ahead of where we are today.
Darrell Fernandes:Well, if you have specific outcomes that you're hoping AI is going to help you with, and you're seeing failure rates in a certain area, it's likely because that context is mis-set. If you didn't define your user demographics the right way, and AI is leading you towards a path that doesn't solve for the user demographics you're trying to solve for, it's probably because however you defined it is not ideal for the way AI needs to consume it.
Eric Naiburg:And because you've been using it a lot now on different projects, have you seen AI get overwhelmed in any areas?
Darrell Fernandes:We certainly saw, especially early on, that too much information forces AI to take the entire set you gave it, ignore it, and just go back to basics. We have data that tells us that happens, especially in RAG models. I'm specifically talking about RAG models, where you're layering context on top of the core model's responses.
Darrell Fernandes:So we saw early on that that limit was pretty low, and you couldn't put too much on top of AI's core model. Now, I forget the most recent number, but that limit is in the 2 million character range, so it's so much bigger than it was. Early on we really had to manage tightly how much context you were trying to provide AI, and how effectively and efficiently you could provide that context in terms of character limits. Now that that's grown quite a bit, it can still be overwhelmed; you still have to be careful of that, and you still have to pay attention to the responses you're getting to ensure they're aligned with your context. But you can give it more and more context, and with each rev of all of the major models, you can see that limit continuing to grow, which is really encouraging for the ability to customize that model for your needs in the product you're trying to deliver.
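The character limit Darrell mentions suggests a practical guardrail: trim context to a budget before sending it, keeping the highest-priority material. A minimal sketch, assuming you rank the chunks yourself and take the budget number from your model provider's documentation:

```python
def fit_context(chunks: list[str], budget_chars: int = 2_000_000) -> list[str]:
    """Keep chunks in priority order until the character budget would be exceeded."""
    kept, used = [], 0
    for chunk in chunks:
        if used + len(chunk) > budget_chars:
            break  # drop lower-priority context instead of overwhelming the model
        kept.append(chunk)
        used += len(chunk)
    return kept

# Tiny budget purely to demonstrate the truncation behavior
kept = fit_context(["vision " * 10, "glossary " * 10, "compliance " * 10],
                   budget_chars=150)
```

Dropping whole low-priority chunks, rather than truncating mid-sentence, keeps each piece of context intact for the model to consume.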
Eric Naiburg:Thank you. And I think, you know, as we go deeper, it's going to be about that storage and having that information. We don't want to have to repeat ourselves, and we need to make sure that we can be more consistent. But also, I think the engine can come back to us and tell us that we're being inconsistent, things that we may not even see because we're not talking to each other. We happen to both be feeding it, and next thing you know, it's like, hold on, Darrell said one thing, Eric said something else, and they seem to conflict here. What is it that you're really trying to do? And it might bring us closer together. This is where the Scrum Master and others can start to leverage AI to identify and see conflicts within the team that the team doesn't even realize are conflicts.
Darrell Fernandes:Conflict is such an overt term. If you think about conflict in an intentional way, so many times conflict is not intentional; it's the product of missed or misaligned assumptions. And AI can do a great job here. Even if you have that context library, even if you just say to AI, tell me where there's misalignment in this, it can look at a product vision document and a user definition and say, I see it, there is a misalignment between your vision and your user definition. It can provide that. And it may be wrong, by the way.
Eric Naiburg:Yeah, let's remember what we always talk about: it will hallucinate, it will be incorrect. We still have to get involved as humans, right? But it's going to point things out, to make us say, hmm, to make us think and really drive that thinking. And I think that's really important. So, kind of tying to that, is it important to be efficient with our prompts and prompt engineering, or can we go wild?
Darrell Fernandes:So I talked about this a little bit in our last episode. As an ex-CIO, I think it's really important that we're efficient in our prompts. Every prompt has a cost to it. Every time you force the engine to go back and do work, it consumes power, it consumes CPU, it consumes infrastructure. Those aren't free. They may feel free today because we don't understand the full pricing model, but they're not free, right? And they will not continue to appear to be free in perpetuity. Those costs are going to come back at some point in time. So the more effective you can be in asking the question on round one, maybe you go from a six-round set of questions, by the time you get to the answer you feel is complete for the question you're asking, down to three. You just did a 50% cost save on your interaction with AI, not to mention the human cost of that person having to read each response, understand why it wasn't complete, and refactor the question to get to a better, or more complete, answer. So there's certainly a cost element to it, and the ex-CIO in me just can't let that go. That's who I am.
Darrell Fernandes:But there's also, you know, a satisfaction element for the team using AI. The better team members can be at prompting AI to give a response for the situation they're in, the more effective AI is going to be at solving whatever problem they're looking for help solving, right?
Eric Naiburg:And the more efficient they can be, the better the next question is going to be as well, correct? If we're just going hog wild and rambling on with prompts, we're losing focus. And if we're not focused on our prompt, how do we get to the next prompt? How do we start to narrow down the feedback that we're getting from the AI? Because we need to be able to do that, which means we've got to have some organization, some focus around where we're going and where we want to go.
Darrell Fernandes:What I think, to your earlier point, is that this is where the Scrum Master role can really help the team get better, and the Scrum Master can really focus on the value of some of these things to the team. In this case it's the development team, but it's also the product owner, because the product owner is inevitably going to be using AI to try to deliver more value, and they need some of this same consistency and approach that the development team does, maybe in different contexts, maybe, as we talked about, with different skill sets for what they're asking of AI. And I think this whole process of contextualization and questioning and interrogating all goes back to your point, Eric: this is stuff we should be doing with each other all the time. But because we're humans, we have natural curiosity; we will find our way to the right answer and build our own lexicon and knowledge base of information, so we don't need so much of that coming to us on day one. Whereas AI, if you don't provide it on day one, is going to take a really long time, because that ability to question every member of the team on who's the user doesn't exist until the team comes to AI. AI can't go to the team and ask who's the user, so it can't form that definition.
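One way around that gap is to tell the model up front to surface its own clarifying questions before answering. A minimal sketch of such a prompt wrapper; the instruction wording and the `make_prompt` helper are illustrative, not a prescribed format:

```python
# Illustrative instruction block that asks the model to surface its own questions.
INQUISITIVE_INSTRUCTIONS = (
    "Before answering, list up to three clarifying questions about anything "
    "underspecified (user, market, constraints). Skip any question the supplied "
    "context already answers, then respond using only confirmed assumptions."
)

def make_prompt(task: str, context: str = "") -> str:
    """Combine the inquisitive instructions, optional context, and the task."""
    parts = [INQUISITIVE_INSTRUCTIONS]
    if context:
        parts.append("Context:\n" + context)
    parts.append("Task:\n" + task)
    return "\n\n".join(parts)

prompt = make_prompt("Propose a release plan for Q3.",
                     context="Users: enterprise Scrum teams.")
```

Pairing this wrapper with a context library means the model only asks about genuine gaps, which is the behavior Eric describes next.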
Eric Naiburg:And it's not inquisitive, right? The AI is not inquisitive by itself. You can make it inquisitive: you can ask it to ask you questions, and you can prompt it to start thinking and working in that way. But if you're just asking it questions, it's not necessarily going to ask you anything back. If you ask it to suggest questions, if you ask it to go different ways and in different directions, it will. And I think that's critical to how we're thinking about those prompts, and how we're looking at how it's being used and shared across the team and across the organization as well. Cool. Well, I think that's probably a good stopping point for this episode. We could go on and on, and there's a lot more to come, but with that, thank you to those who joined us today, and we'll chat again
Eric Naiburg:soon. Thanks, everybody. Thank you.