Scrum.org Community Podcast

Making AI Work: Value, Risk, and Trust in the Age of Intelligent Systems

Scrum.org



What does it really take to make AI work, not just technically, but organizationally? 

In this episode of the Scrum.org Community Podcast, host Dave West sits down with Dr. Alan Brown, researcher, advisor, and author of the new book Making AI Work for Britain, to explore the complex realities of AI adoption in large organizations and government.

Drawing on decades of experience in enterprise digital transformation from Rational Software to IBM to advising public sector institutions, Alan unpacks why AI is both an evolutionary step and a fundamental disruption, and why most organizations are struggling to bridge the gap between technological capability and organizational readiness.

Alan and Dave explore how lessons from the UK's digital government journey, consolidating demand and diversifying supply, offer a practical lens for every team navigating AI adoption today. They also dig into three principles Alan believes are at the heart of any disciplined approach to AI: value, risk, and trust, and why that last one might be the most transformative idea yet for Scrum Teams.
Whether you're a Scrum Master, Product Owner, or senior leader trying to make sense of AI in your organization, this conversation offers a grounded, thought-provoking framework for moving from pilot chaos to purposeful delivery.

Book details can be found at https://futureofai.uk/

Blog: Making AI Work: What Scrum Gets Right and Organizations Get Wrong

moderator:

Welcome to the Scrum.org Community Podcast, the podcast from the home of Scrum. In this podcast, agile experts, including Professional Scrum Trainers and other industry thought leaders, share their stories and experiences. We also explore hot topics in our space with thought-provoking, challenging, energetic discussions. We hope you enjoy this episode.

Dave West:

Hello, and welcome to the Scrum.org Community Podcast. I'm your host, Dave West, CEO here at Scrum.org. In today's podcast, we're talking to a very interesting gentleman called Dr. Alan Brown. I've known Alan for many, many years. I was first introduced to him at Rational Software, when Rational acquired a company called Catapulse, where he was the CTO. Since then, he's had many roles, including CTO roles within IBM all over the world. He now teaches, does research, and helps large organizations deal with digital technology. But today we're actually going to be talking about Alan's new book, which is based on a body of knowledge he's been developing around AI and how it's being applied in large organizations and government entities. The book's title is Making AI Work for Britain, which is a very broad topic. So welcome to the podcast, Alan. Thank you so much.

Dr. Alan Brown:

Dave, great to be here. Thanks.

Dave West:

So this is a very challenging and very large topic. Tell me, and our audience, our listeners: why did you decide to write such a book?

Dr. Alan Brown:

Well, as you mentioned, you know, we've known each other for a while. I've been working in digital technologies for a long time, in many places: in the United States, in mainland Europe, in the UK. And my journey has taken me from thinking about large-scale enterprise systems and how they're architected and built, through to how they're adopted and used in practice, and what that means in complex environments, whether that's military and aerospace, whether that's medical systems, whether that's large-scale, real-time avionics, whatever that happens to be. And starting then to understand that the technology itself is just one part of a really difficult puzzle that most developers, managers, and high-level executives have to work within. Over the last few years, a lot of the interest and excitement has been around how government and government-based systems are either leading some of the key ideas or, in other cases, are way behind what others are doing. And those tensions about how fast we move or how slowly we respond in the public sector, in public services, are a really interesting indicator of where we are with technology adoption and what the issues are. So I've been spending quite a bit of time over the last couple of years looking at what that's meant. And of course, the big technology push over the last few years has been AI. So as we've moved through the digital transformation wave of moving people online, e-commerce, cloud-based systems, more complex, multifunctional architectures, and into the use of AI technology, we can look at that journey as an interesting approach to how technology is adopted in large, complex organizations. And AI is both an interesting development within that, but it's also a new direction for a lot of the challenges that organizations are facing.
So I wanted to try to reflect on that. Having thought about what it means to digitize government and how to think about the delivery of public services in a digital environment, I wanted to then express what it means to think about AI, and how those new capabilities of AI are not just faster, cheaper, better, but are actually redefining how we think about the world of digital technologies and its adoption in practice.

Dave West:

So before we go into what the book talks about and the key points, I just want to lean in a little bit on that one, because I think it's really interesting. Do you think that AI is just an evolutionary step in our technology landscape, in the same way as cloud, the internet, distributed computing, PCs, etc., have all been? Or do you think it is a radical shift? Where would you put it on that sort of scale? Is it so extremely, fundamentally different that we can't really use our experience, our history, to determine where it's going to go? Or do you think it's just one more thing that we're doing in digital?

Dr. Alan Brown:

So I'm going to take the professor's way out and say it's both. And the reason I'm going to say that is because I think whether you answer that one way or the other depends on where you stand right now. What I mean is, there are many people sitting within IT organizations, within policy and regulatory organizations, within auditing functions, within project management, who will see it as just another burden they have to deal with as they try to understand how digital technologies are affecting their workplace and many of the things around them: how they count things, what their metrics are, how they look at performance, what they do about the policy waves they're facing. A lot of those things are not going to change, and therefore they see AI as just another technology. You know, my shorthand for this is: if you want to know an organization and understand it, ask what they mean by now, soon, and later. If I'm in the university world, where I spend part of my time, now means this semester, soon means next semester, later means next academic year. If I'm working with policy people in government, now means within this term of the government, which may be a five-year or three-year period depending on the election cycle; soon means once we put our new strategy together for the next cycle; and later means the next Parliament, which we may not be in, and which we have very little interest in worrying too much about, because 98% of what we concern ourselves with is the current parliamentary term. Whereas if you're with an SME in a very high-paced delivery situation, now might mean today, soon might mean tomorrow, and later might mean by the next quarter. So when you place yourself in those contexts, you look at the technology around you, you look at change, you look at the way in which technologies are adopted in very different ways.
And you might say, that's a bit superficial, isn't it? But actually, it's so deeply ingrained in most organizations that it's very difficult for them to get out of. And when you start to push on some of those things: let's change the procurement cycle, let's redefine what we mean by outcomes and outputs, let's rethink what the performance parameters are, you will get yourself into a world of hurt, because you will be pushing so hard against so many different factors that you will really find it a challenge to change. So for one set of communities, it's very much seen as another set of changes they have to deal with. There are a few interesting variations on this way of thinking, but by and large, it's just another wave. However, there's a whole set of new ideas coming through which I think are going to challenge fundamental things about value: what we price, what things cost, what value people get, the real outcomes they receive as a result of the technology, and very, very important things like who has agency, who's in charge, who makes the decisions, how those decisions get communicated. And at that level, we are going to see fundamental changes. Let's take a very simple example. In the UK right now, one of the most pressing emergencies is around health and the healthcare systems, as it is in many parts of the world. The UK's particular situation is about the National Health Service, about efficiency, about delivering value for money, and about how we'll fund it. Now, it's fairly obvious to many people that we're going to have to move to a predictive-analysis way of thinking about health. That is, we're going to use data to be able to say the job of the health service isn't to fix sick patients, it's to ensure we don't get sick, and we're going to use digital technologies, AI, data-driven approaches, in order to move us there.
And that will be an absolutely significant shift, which will completely change how we look at health and healthcare systems, because the current system is just eating up everything we can put into it, because it's based on a very different premise. Now the big issue is going to be: how the heck do we move from one approach to the other? Because it's so fundamental. And we're in the middle of that conversation right now at every level across the UK, from technology development to policy to hospital administration to healthcare systems. And I think we see that in many industries right now. AI is going to force us to have some of those conversations about rethinking some fundamentals. So that's why I think we've got both going on at the same time.

Dave West:

And that's what makes it very interesting. I love that analysis, that structure, that model, because ultimately, the ability of an organization to consume and use the technology is not actually connected to the technology itself at all. What you described was: we could do radical things with data. Let's look at the National Health Service for a moment. We could do radical things with data and analysis and fundamentally change how we invest in healthcare for a town or a city or a region or a county based on that. However, in the way that technology and that information is consumed, there's a massive disconnect. So without radical change across the whole system, holistically, there is always going to be this disconnect and this friction, right?

Dr. Alan Brown:

Yeah, let me give you an example, and this is right at the top of mind, because I'm struggling with it right now. As you may know, I'm dealing with some US-based financial services organizations, trying to sort out how I want to work with them, and one of them is among the major financial institutions in the world. I've been trying to change my address for three weeks. They send me lots of things: you go online, isn't it wonderful, we can do automatic voice recognition so you don't have to sign on, we've got all of your portfolio online. Some of the technological changes they've made are incredible. But they can't change my address, because I'm now not based in the US, because they can't deal with the fact that I'm a dual citizen or changing citizenship. They can't deal with the fact that I've got a 401(k), an individual account, and a brokerage account, and all those systems are different. I've talked with several customer services people; they send me from one to another. The forms they send me are out of date. And you say: on one hand, they're doing incredibly sophisticated things with technology; on the other hand, the organizational culture, the change processes, the connectivity, the seams between the organizations, the departments, the systems, are fundamentally broken. And almost all organizations I deal with, of any scale and complexity, reflect that. That's, I think, part of the key problem whenever you're dealing with anything that's broadly called digital transformation, digital change, any of those large themes in an organization.

Dave West:

But here's the question about AI, which is an interesting one. There's always been a constraint, because if we were very heavy capitalist types of people, we would say: well, what will happen is a new breed of organization will come in; the incumbents are the dinosaurs, and they'll be replaced by the mice, because now we've got this meteor that's hit the Earth, right? It's a poor analogy, I know. But the point is that they would say the incumbents will be replaced by organizations that are digital first. Historically, that's almost always been impossible, because the level of investment necessary from a customer acquisition perspective, and the systems and the legal and compliance work needed to make sure you've got all of the right things in place, have made it almost impossible for these organizations to be replaced by startups, and they also have the financial might to buy those startups. However, something's a little different now with AI, because many of the constraints that were in place, how big you had to be, how much expertise you needed to have, aren't necessarily as constraining as they once were. So maybe that's going to be the future.

Dr. Alan Brown:

I think there's a lot in what you say. For example, people are talking about the first three-person unicorn; the ideas of scale and size and impact are perhaps breaking down. In isolation, that might be true. But the problem you've got, I think, is that as soon as you say that, the obvious question follows. If you look at some of those organizations, Google and Meta and Amazon and so on, where are they going to get their next trillion dollars from? It's going to come from defense, education, health, healthcare, government. It's going to come from places where there are large spends. It's very inefficient, it's very sticky; once you're in there, you're in there for a while, and there's a lot of opportunity for new technologies to be effective in administration and data management and prediction, those sorts of areas. But if you want to play in those areas, being agile and efficient and only three people, where we try everything, we take high risk, we fail fast, fail often, is incredibly difficult. And if you look at what is happening to companies like Amazon, and obviously Microsoft and IBM and others that are already buried in there, they necessarily start to reflect that. You know, there's this idea that what you ship and what you build reflects the structure of your organization. I think the opposite is also true: the structure of your organization also reflects what you're building and who you're delivering into. You inevitably end up there, because of what you have to comply with, how you're audited, what external validation means, what kinds of documentation they need from you. All of those things drag you in. And I think it's a really complex environment in which we see technology being applied.
And that interests me a lot: how technology drives business, and how business drives technology. It's a two-way street, two sides of the same coin, I think.

Dave West:

I mean, if AI is as radical as the printing press, and I wrote a blog on this, you know, the Gutenberg press, 1490 I think, is the technology change that most people credit with changing the world: it created the Reformation, the Age of Enlightenment, and ultimately created an environment where people could share knowledge in a much richer way. If AI is like that, it will be like the impact of the press. It will ultimately change the way in which government serves and the way in which the environment operates, the markets, etc. But it will take time.

Dr. Alan Brown:

I guess. Let me give you an example. I'm dealing a little bit in military environments, and I was at a UK-based conversation a few weeks ago, literally just a few weeks ago, and it was clear that in the room there were three very, very different conversations when you talk about digital transformation, AI, and technology. The first one was: well, for the next 20-30 years, we're still building some aircraft carriers and big submarines. Large contracts, multi-year plans that were built five years ago for something that won't be shipped for 15 years and probably won't work when it's built. I mean, they were literally saying those things. So how do new digital technologies and AI help us manage that procurement process, make sure it's more efficient? We've still got to produce truckloads of write-only documentation; can it generate some of that for us? It was all aimed at that very traditional view. The second conversation says: hang on a minute, what we're learning from Ukraine and everything else is a completely different way of doing things. You know, a swarm of $500 drones is going to take out a billion-dollar aircraft carrier. So we have to think completely differently about everything to do with warfare, defense, the way in which we protect ourselves, and the broader idea of what national infrastructure means. It's not just aircraft carriers and air bases; it's your water systems, it's your electricity system. So there's a completely different conversation going on about that, and about what AI and technology mean. And then there's a third conversation, which says: as an organization and an institution, the military is completely broken. We can't recruit properly, we can't find stuff on base, we don't know where people are, we can't order uniforms, we can't keep our people supplied with food. You know, we're a large, complex organization that's still thinking about itself like it's the 18th century.
So there were three conversations going on about digital change, digital technology, AI, where it fits, what it means. And in some areas the sophistication was unbelievable, and in other areas you just shake your head and say, oh my, you're still doing this with bits of paper? And the answer is yes, they have all of that. Of course, the military and government are always an extreme example, I accept that, but I think it's illustrative of where we sit in almost all organizations.

Dave West:

Okay, so the new book. Wow. I mean, just this conversation highlights the sheer complexity of the topic that you're focused on. So what are the sort of, you know, four or five key points that you're advocating in the book?

Dr. Alan Brown:

Yeah. So it's easy to, you know, be a critic and throw your hands up and say, what a mess. And I'm tempted to do that often. But in the end, you've got to say: okay, smart alec, what should we do? How do we look at this differently? What progress can we make? So in the book, I've taken an obvious approach: can we look back over the last decade and say, what did we learn from the digital transformation efforts that have been ongoing for some time? In the UK's case, we can go back to the Government Digital Service. I was involved, others were involved, in aspects of what we tried to do then, and it made progress quite quickly. It ground to a halt, which we'll talk about in a moment, but it made progress quite quickly, and the UK was quite widely seen, globally, as a leader in digital technology transformation. Why? Well, it did some very simple things: it consolidated demand and it diversified supply. What do I mean by consolidated demand? It went across government and said, we've got, at that time, 1,900 websites, all different, all with different domain ownership. They tried to combine those through gov.uk and to put some common management around it. They then built some common principles around what those sites would look like: accessibility, usability, naming, all those sorts of things. They then tried to build some common components to do with identification, verification, form filling, those sorts of standard things. They then tried to look at the costs involved, and at every government contract over a certain amount. They tried to consolidate, to make sure that we were buying not a thousand times for one, but once for a thousand, and then look at discounts and look at sharing. They did the best they could to consolidate. Then on the other end, they said, we also need to diversify supply.
We're being strangled by a small number of providers of technology and consultants and system integrators, and in many cases they were well intentioned, but those kinds of contracts tend to lead to very poor practices: overpricing, long-term lock-in, and so on. So what can we do about that? Let's open up the marketplace. Let's get more SMEs involved. They created what they called G-Cloud, which became the Digital Marketplace. They insisted on a certain amount of the technology purchase coming from other vendors outside the norm. So they tried to diversify the input, and they made incredible progress very quickly. Now, over recent years, that's ground to a halt. Why? Because the surface-level issues have been dealt with. Yes, you can now go to gov.uk; it's really well designed, all services are there, you can search, you can find stuff. But as soon as you go one layer deeper, you're into a COBOL system that was built 15-20 years ago. You're into a PL/I-based system that was built by IBM and managed by Oracle. You're back into the old infrastructure, which is very difficult to change, very difficult to understand. You can't give that to an SME and say, now you look after it. We don't want IBM looking after that, we don't want Capita looking after that; but the knowledge is embedded in the system and in the people, and they soon discovered that they couldn't simply break that apart in one massive go. So the approach was right, and then they ran up against a whole bunch of issues. So then you say: what do we do about that now? What is AI doing, and what's different? Of course, there's some more intelligence, better use of data, the algorithms are improved, predictive analysis. There are some incredible things that we can do in certain areas to improve it. But as you mentioned at the beginning, it's not just about fixing things as they are; it's reimagining a world that could be different.
So we've got lots of different people trying new ideas. In the UK's case, we've got incubators, we've got playpens and sandboxes that we use, we've got things we're trialing. The problem is that right now we've got hundreds of things being tested and trialed and promoted. We've got lots of individuals and organizations and institutions buying for themselves. So we've reached a point today in the UK where we've done the opposite of what we did 15 years ago. Now we've got diversified demand and consolidated supply. We've got everybody doing their own stuff, pilots everywhere, and we've got dependency on a small number of technology providers; in the UK's case, mostly US-based. And that's bringing up a whole set of questions about sovereignty, ownership, and what we do in the future. There are some ways we can start to get back to consolidating demand and diversifying supply. We can start to introduce better ways of management and governance. We can start to train and manage what people do in a more consistent way. We can bring some consistency to the front end of how people are using AI technologies and moving from pilot to real delivery. And then at the back end, we can increase some of the diversification, address some of the sovereignty issues, some of the ownership issues, so that we don't get trapped with a small number of technology providers who, remember, have spent so much money, probably trillions of dollars by now, on AI models, management, training, and marketing, and are going to have to recover that. Right now they're losing money on AI technology, by and large, making it up on advertising and so on. The technologies themselves are going to have to pay for themselves very soon, and that means the prices are going to ramp up. Lock-in is going to ramp up. It has to; it's a commercial reality.
So the UK is becoming much more cognizant of these issues, and I'm trying to suggest ways that we can think about that.

Dave West:

Okay, so our audience may work in government. They certainly work in large organizations, but they're not politicians or policymakers. They're not deciding the consolidation of supply and demand inside their organizations. What can they do, effectively? What can they take from this book?

Dr. Alan Brown:

I think that's a really good question, because what I'm proposing is, of course, aimed particularly at UK-based policymakers and senior leaders; there's a lot to say there. But I also believe that the UK is a really interesting live case study for every Scrum Team's AI adoption challenge. What I mean by that is: we're caught between the US hyperscalers and the EU's regulation, between the technology-driven approach and the "we must pause, we must consider all needs, we must make sure we're no faster than the regulators" approach. I think every organization adopting AI-based technologies is having those conversations in one form or another. Every Scrum Team is making those decisions on the fly, every Product Owner is trying to decide where that goes, every executive is looking at the roadmap and their investment approach and asking similar questions. So how do you adopt AI quickly without losing sovereignty over your own roadmap? That is a really key question that we all must answer for ourselves, one that I believe this book looks at at a macro level. But the approaches it takes, the framing it uses about consolidating demand and diversifying supply, I think apply to everybody. And what it offers is a set of principles about open standards, this consolidation at the front end and diversification at the back end, and a way of thinking about supplier diversity that translates to any organization. We're all going to be facing those decisions, and this is a way of helping you create a language and a set of concepts for making them.

Dave West:

It's interesting, because there is an interesting tension. And there's always been a tension between Scrum and organizations, because we encourage empowered, cross-functional teams that are self-managed: basically, give them a goal, give them some guardrails, and off they go. There's always been a tension, I think, between that and the organization's need for control and governance. And AI actually empowers these teams almost exponentially. For example, I was working with my BT team, and we rebuilt a complete app in an hour and a half, literally from the ground up, because we were like, well, how about if we did it completely differently? And we did. We were using sort of an agent-based model: we gave it some goals, we gave it some guardrails, and we got going. So part of your hypothesis, part of your approach, would encourage standardization and control in certain places. How do you balance that with the desire of these Scrum Teams to deliver on their goals? Because, frankly, now they have the ability to do it. They don't have to wait for legal to review this stuff. What would you recommend?

Dr. Alan Brown:

I think that's always been the tension between innovation and control, and it's a balance we've always had to make. When I have these conversations, Dave, there are always three big terms that come up that I think are worth being more explicit about. One is value. How do we focus around value? And this has always been a big issue for Agile. Where's the value? Who gets the value? How is it distributed? How do we keep monitoring and managing value? It can be argued, and I think many in the Agile community have argued, that a previous way of looking at value was much more to do with outputs than outcomes. And I think refocusing on what those outcomes are, what they mean, but also how quickly those outcomes become obsolete, is a really interesting question right now. One of the things that is very important for people who are struggling with adoption, and, you know, the pilot-itis that they have, is refocusing on value and outcomes and getting a better handle on them. And I think the Agile community has perhaps the most mature view of value of anybody, and that's a really interesting point to keep focused on. The second term is risk. These are related terms, I think: value and risk. Risk is about which risks you're trying to optimize for, eliminate, or manage, and how they evolve. And we've always had this tension, at a simple level, between operational risk and delivery risk. Operational risk is: it can't break when we ship it, so we're willing to spend more money, spend longer testing, do everything we need to do to be confident before we ship, because people die, or the financial or brand and prestige implications are too high. The opposite is: if we take too long, we'll miss the window. That's delivery risk.
We have to deliver when it's needed, at the point it's needed, for the people that need it, so they can solve their problems when they need them solved. Sometimes a week too late is too long; sometimes a minute too late is too long. So trying to understand and balance those risks has always been a key part of any disciplined approach. The empirical approaches that Agile promotes are explicitly about understanding risk: which risk, how do we understand and measure it, and how do we keep monitoring it in an efficient and effective way? I'm not talking about risk registers and those sorts of things that you review every month and everybody ignores; I'm talking about things that are much more down to earth and practical. And then the final one is trust. Trust has always been an important thing for Agile, particularly amongst team members, across teams, between teams, and between the hierarchies of people outside those teams and the teams themselves, and how that's managed and maintained. A lot of the ideas of communication and the ceremonies and so on that are a key part of Agile are about creating, managing, and maintaining a trust relationship. I think that's become even more important, particularly when we're thinking about the use of AI: when data is coming from different places, when we're using AI tools, perhaps in support of software development or something else, but also when we're seeing AI replacing large chunks of what we do, that trust mechanism has become another really key thread. So I would say, if you focus on value, risk, and trust, make those the forefront of what you see, and align that to some of the key underlying principles of Agile, you'll be in a very strong position.

Dave West:

Mic drop. I have never thought of the events of Scrum and the mechanisms of Scrum as a framework to manage trust, but as soon as you said that, it made total sense. The daily scrum, ultimately, is within the team about: are we trusting that we're delivering to the plan we built when we did sprint planning, and have we learned something that breaks our trust in that outcome and our trust in each other in support of it? The sprint review, obviously, and sprint planning and the retrospective also. And it's interesting that AI technology in particular changes that trust relationship in ways I hadn't really thought about. So, I'm at the moment having a kitchen remodel. It hasn't started yet, but the builder just sent me, for the first time ever from a contractor, a Microsoft Project plan. So after I got over the visceral reaction, I turned it into a PDF and threw it into my AI tool, saying, is anything missing? Because obviously there's probably a body of knowledge around doing kitchen remodels that I'm not familiar with. So the trust changes there, absolutely. It's just really interesting. But if I'm using AI to build things, then there's a whole other trust thing. So that's a really interesting perspective.

Dr. Alan Brown:

One of the key things that I always follow is that there's some great work being done around this thinking. Some of it is from Ethan Mollick, who is very well known. One of the concepts he promotes is this idea of the jagged edge, and the way he describes it is that AI is incredibly powerful in some areas and incredibly weak and disappointing in others. Our problem is that often we don't know which, and it's very contextually dependent, and it's rapidly changing because of the new technologies and what you learn as you use it. It's such a dynamic area. So you try to plot out how you see the jagged edge of AI, where you say: it's 99% correct there, so I can just leave it; it's only 80% here, so I need to do more; it's really not great there, so I need to do it myself and then have AI help, support and maybe check some of what I do. How you see that in every aspect of how you apply AI is so important, but so complex. And more and more of what I personally do as I'm using AI technologies is I find myself asking those questions. Which things can I leave to AI, because I now trust it enough? Which things do I need to verify? Trust but verify. And which things do I say, no, I do those, and you might help me here and there, but I need to take ownership of that, because what you do I can't trust, or it isn't me, or I don't feel it's authentic, or I want to have very personal control.

Dave West:

Sometimes when I'm engaging with AI and using it to help me build a presentation or write a document or whatever, I have to say it feels very similar to this. Have you ever had those meetings with consultants who are super smart, and during the meeting you feel, oh my god, this is amazing, this is awesome, I've never thought of this before, this is incredible? And then afterwards your boss or your mate comes and says, what was all that about, then? And you go, oh, you know what I mean. It's incredibly polished and incredibly rich, but I don't necessarily take it on board. And I find that's an interesting way of thinking about it: I almost have to challenge myself afterwards. In fact, I've been using AI to do that. What I did was say, okay, that's awesome. If you were going to test me on understanding that, what are the three questions I need to be able to answer? Just to test and validate the idea.

Dr. Alan Brown:

I think there's a lot of use of AI as a critical friend, and I think that's a role it plays quite well. Part of the danger of that, of course, is that it raises commonly seen kinds of issues, and what you want is something that's knowledgeable about you. As I've found, as the AI understands you more and builds an understanding of you and what you find useful and interesting, it can be more helpful to you. But I'm finding I use AI in several roles. If I look back, one of the things I could say is, I wish 10 years ago I'd had a really good editor, one or two really good researchers, a really sharp personal assistant to keep me on top of things, and a really good critical friend that I could bounce ideas off, sometimes stupid ideas, and they'd say, Alan, don't do that stupid thing.

Dave West:

I thought that was my job, Alan. Are you sure?

Dr. Alan Brown:

Okay, so now I've sort of got that, or I'm beginning to feel I'm getting closer to having it. And I think if all of us start to say, now I have that, what would I do with it, and how would I make sure it's acting on my behalf in the right way, then we're in a much stronger position, individually and personally, in how we use AI. I think that's a little different to the corporate and broader challenges, but from an individual point of view it's a really strong way of looking at it, yeah.

Dave West:

Yeah, and I think all of us as knowledge workers, ultimately, when the primary value we deliver is about knowledge, right? We deliver information, we change things. I think what you described is ultimately your personal AI team, and I'm also doing exactly what you're describing: building up that context, building up that memory, to help me do that with AI. It's interesting. All right, we could talk for hours about this, Alan, as actually we do. So as we come to the end of this podcast, our audience is sitting there, hopefully listening to this and thinking, maybe they've taken this opportunity to get access to the book, and they're like, oh, this is interesting stuff. What would be the one or two things they should take away from the conversation we've just had?

Dr. Alan Brown:

There are quite a few things, but let me just pick one, to finalize the conversation. Let me say it this way: AI changes what gets built. It doesn't change the fact that you need a very strong discipline around creating and delivering things. That discipline is something we're still searching for, I think, in most contexts. A lot of people, hopefully a lot of people listening, are already using AI as a software development aid. Maybe it's to help them in testing, maybe it's to help them generate some code, maybe it's to help them document existing systems. What they're trying to understand is the discipline they need around that, around value, trust and risk, and around how they adapt. Perhaps they're using traditional ways of thinking about software development, or more agile approaches, or some other corporate approaches to the methods they apply, but they need to adapt that discipline to this way of thinking, and I think that's a really critical thing to work out. What does that discipline look like? I think many of the things that are going to apply in the areas of value, risk and trust will find their roots in how we think about agile software development. That's going to be a core part of what that future model looks like, and we're beginning to see some aspects of it, but there's still more to do.

Dave West:

I think that's a really good way of thinking about this: value, risk, trust. And obviously trust and governance are kind of linked, right? Because ultimately, governance, at the end of the day, is all about managing trust at an organizational and an individual level. And I love the way you describe discipline. What I'm seeing is that a more disciplined approach to using these tools is incredibly beneficial, because it allows you both to inspect and adapt. That doesn't mean the tools always behave in the same way; in fact, by nature they don't. But it gives you the ability to learn and to get better at using these tools, which I think is ultimately at the heart of it. Well, thank you for spending the time today, Alan. The trust thing definitely blew my mind, and it's something I'm certainly going to spend some more time thinking about, along with some of those big conversations. The irony today is that, as you were talking about the UK Government and gov.uk, I got an email from them this morning saying that, because I don't live in the UK anymore, I'm going to have to use an authenticator. I can't use my phone number anymore; they've removed that capability. And I'm like, well, that's just annoying, isn't it? It's just ironic that I literally got that email this morning. So I guess, unless I can use your phone, which might annoy you a little bit, I'm going to have to change.

Dr. Alan Brown:

Well, the book, Making AI Work for Britain, is out at the end of April. It is available to buy, but it's also open access, free to download under a Creative Commons license. So I hope people will download it, take a look, and engage in the conversation that I think it opens up.

Dave West:

That's awesome, and it's very generous of you to provide it free to everybody. Obviously, if you want a physical copy, that does cost money, because it costs money to chop those trees down and print stuff. But that's awesome. Well, thank you for listening to today's Scrum.org Community Podcast, and thank you, Alan, for spending the time today. We were very fortunate to hear Dr. Alan Brown talk about how AI is a really interesting extension of the digital transformations that have gone before, how it's both different and the same, which makes it really complicated, and how organizations can perhaps think in a more effective way around supply and demand with respect to this kind of technology. He also talked about how risk, value and trust are important elements of any decision-making framework, and how discipline is perhaps one of the most important things at the heart of AI. I recommend that you download his book and share it, and, you know, put it into your AI tool of choice and ask it some interesting questions. Or, if you want to be old school, you can actually read it yourself. I'm lucky enough to have a variety of guests on the Scrum.org podcast talking about everything in the area: professional Scrum, product thinking and, of course, agile. So thanks for listening today, everybody, and Scrum on!