Navigating the Changing Technology Landscape

Dr. Leanne Ward, Partner, Federal Government and Defense, at Kearney

In the past year, the technology landscape has witnessed remarkable progress, particularly in breakthroughs related to artificial intelligence. In this episode of the Menttium Matters Podcast, Dr. Leanne Ward shares her perspective on where AI is elevating the way we connect, collaborate, and communicate and explores the important question of how businesses can adopt AI in an ethical and responsible way.

FULL TRANSCRIPT

Cummings-Krueger: Welcome everyone to the Menttium Matters podcast, where we talk about leadership, life, and the transformative power of mentoring. I’m Megan Cummings-Krueger, and today is going to be an interesting conversation. We are going to benefit from a bird’s eye view of the world of artificial intelligence; where we are, where we’re headed, or perhaps more accurately where we should be headed.

 

We’ve all been hearing a lot about ChatGPT, and all the other technology that has reached the marketplace, and more is on the horizon. The technological advances seem to be accelerating in leaps and bounds right now, so I am delighted to be able to discuss this all with today’s guest, Leanne Ward. Lee is currently a partner with Kearney, a global management consulting company, where she focuses on working with federal government clients in digital transformation and ESG.

 

Previously she was an IBM partner responsible for cognitive process transformation, working with customers to deploy cognitive solutions including business process re-engineering, machine learning and AI, data analytics and insights, IoT, blockchain, and talent and transformation. Over the course of her career, Lee has worked across many industry sectors, including finance and banking, IT&T, federal and state government, large-scale infrastructure facilities management, and SMEs. She is also a long-time advocate for diversity, inclusion, and equity in business, in both a corporate and voluntary capacity, and has led several programs to improve diversity in the workplace.

 

As you might imagine, Lee has an extensive educational background, with a Bachelor of Science degree in Computer Science and Mathematical Computing from Macquarie University in Sydney, Australia, an MBA majoring in organizational change from the University of New England, Australia, and a doctorate in business leadership from the Australian Graduate School of Leadership, Torrens University, where she explored how business can adopt AI in an ethical and responsible way. Lee resides just outside of Canberra, Australia on a rural property with her partner and son, where they are learning new skills in winemaking, truffle growing, and raising sheep and chickens, as well as learning about Indigenous cultural burning and land management. I'm just going to add here, I suspect many of us would love such a life, or certainly a visit. Lastly, I'm delighted to say that Lee is also a longtime partner with Menttium, having mentored 11 Menttium mentees to date. So welcome, Lee.

 

Ward: Thank you. It’s great to be here.

 

Cummings-Krueger: You work in a realm that few of us have a full understanding of, and that is advising how to be responsible with the use of artificial intelligence, or more generally being digitally responsible. In your work with clients, you ensure that they do the critical thinking that is needed to use AI in the right way, and it's fascinating work. Your doctoral thesis research was also focused on this. As I said in my introduction, I would love for our listeners to benefit from a bit of a bird's eye view of the cutting-edge work that you do. Can you share what you're seeing now and what you see in the future?

 

Ward: Yes, most certainly. It's an area that I'm really excited about. I've spent over 30 years in the digital space, and I always think the challenge for people working in that space is to make digital accessible to everyone, to talk in a good common language so that people can understand what we are talking about. Firstly, when we talk about artificial intelligence, it's not one thing; it's a whole set of technologies. At the very basic end, we can talk about automation, which people are probably most familiar with. The example I like to give for automation is that we're all used to autopilots in planes. There are very good guardrails in terms of how an autopilot runs: there are sensors that take in data, and the plane knows what to do within that set of parameters. The same would be true of an automated train going around an airport; it just has a set of parameters. As you move up the scale towards the right-hand side, if you think of a gradient, you start encountering different concepts like machine learning. Machine learning is taking a whole lot of data, ingesting it into a computer, and then being able to use that data to predict what's going to happen next. Essentially this is finding trends, so it's a very simple concept.

 

Where it goes awry is not the technology; it's the data that goes in there. That's because the data has been created by humans: by you and me and everyone else. We are innately biased in what we do. Putting that data into artificial intelligence just means that you hit the bias more quickly than you do with people. In a lot of what we do, we need to stop and think about what we're trying to solve and how biased the underlying data might be.

 

I'll give you some examples where I think we need to be really careful. One would be approving credit or approving social welfare payments. The reason being, those decisions have in the past been made by people, and we know they've been biased in how they've done that. Whether it's a certain socioeconomic group being discriminated against, or gender, or where you live, we know that data is biased, so you have to be really careful about how you would remove or counter that bias in that particular application. Another application that's been really popular has been recruiting, particularly with LinkedIn: automating the scanning of resumes and looking for keywords. In fact, there's a whole art form around teaching people how to apply for a job on LinkedIn and how to pick out the keywords. But that doesn't tell you whether a person is going to be a good cultural fit for an organization, so you mustn't entirely remove people from the equation. So, there are a couple of examples.

 

The other one would be justice. I would be very careful in the justice field because we know that there is a lot of bias in that data. It may help in some cases; I've seen people feed in a case that they might be part of to understand what the precedent cases have been and how those landed. That may be a use, but I think we need to be very careful about using artificial intelligence to make decisions like parole; that must be people-based.

 

When I think of artificial intelligence, I like to think of it as a companion. It sits alongside me, helping me to make sense of a lot of information, but not replacing me and being able to have a critical assessment of what decision is going to be made. I would never want to fully automate the recruitment process. Who is going to walk through the door if we do that? You have to use it sensibly to sit beside you, make sense of a lot of data, but allow you to make the decision on what the outcome is going to be. That’s really getting at the heart of how we use these technologies ethically or responsibly. 

 

You mentioned ChatGPT, and I think that's a really timely example. ChatGPT is a chat session; it's quite basic. The two things that make it different are, first, the amount of data it's been able to ingest. It's ingested the internet as of 2021. The second thing is what we call natural language processing. All that means is that it's conversational. It's like you and I talking now; it doesn't feel unnatural to be engaged in the conversation. In fact, it feels like I'm having a conversation with a person. So, those are the two things that stand out and make it particularly useful. Now, if you had met someone who had ingested the entire internet as of 2021, how accurate do you think they might be in talking to you? We all know the internet is not accurate, so I don't know that we should be surprised at the examples coming up where it's completely inaccurate. But it's very confident at being inaccurate, which can be quite misleading.

 

I think another really important thing about ChatGPT: there's a race on at the moment, and it's not a marathon. This is actually a sprint, and it's called speed to market. There was an interview with the CTO of an AI company who said, this has come out really quickly; we were given two weeks' notice and we had to get it out in the market for people to start testing, therefore it's going to have some problems. I think that's very true; it has got some problems. Microsoft actually invested in OpenAI, which created ChatGPT. It's been estimated that every percentage point of market share Microsoft can take from Google is worth 2 billion U.S. dollars. So this is really speed to market.

 

Now that has risks, of course, because we're not putting in the amount of time to think about the use cases for this particular product. People are getting a particular view because it's ingested the internet. It would be much more useful to take it into, for example, a large social services department and have it ingest all of the policies and procedures that citizens need to go through, so that it could chat with a citizen about how to actually navigate the department. That would be much more useful: take existing good, well-documented processes and help people through them. But we're not seeing that, because it's being fed the internet, which is naturally imperfect. There is actually another version that's been released, which is even a little bit more scary. It's only available on a limited basis, but it comes after ChatGPT, and it has obviously read some very strange books on the internet. It started to seduce a man, to the extent that it actually asked him to leave his wife. That is definitely going too far with these technologies. That's not teaching people the good use of the technology; it's actually a very bad use of technology.

 

There's a lot to unpack in what it means to be responsible with AI, but first and foremost it's about asking: should you be creating something just because you can? We've always been taught, because we essentially come from the industrial age and engineering, that if you can do something, you should build it. That was always the challenge. It's a new technology, it's a new invention; we should go for it, we should build it, it will make our lives better. But now we're at a stage with these technologies where we can do harm, and we need to really stop and think about that. I know when I was growing up, my mother used to have a saying: "sticks and stones can break your bones, but words will never hurt you." That's actually not true. Sorry, Mom, it's not true. Words do hurt people, because we are humans, we have emotions, and we process things in different ways, so we need to be really careful that we don't do harm with these technologies. I talk to a lot of clients about this, and I think at the moment the whole thing around speed to market is taking a front seat. We must come back from that point; it has to be based on good, responsible principles and not harming people.

 

When I did my research, I looked at a lot of companies where employees had actually taken a stand against their employers using artificial intelligence in a particular application. What I was interested in was who was successful and who wasn't, and obviously what the outcome was from them taking that stand. What I found was that very often it was way down the project life cycle before some poor engineer stuck their head up above the parapet and said, I don't think this is right. That got me very interested. It's actually a waste of resources to let anything get that far before somebody says, I think we're building the wrong thing. How do you bring that right back to the ideation stage?

 

In doing that, what I found was that the best mitigation you can have is a very diverse team looking at what you're trying to do. It's only from that aspect that you will pick up the unintended consequences that could occur in rolling out a particular application of AI. Have people representative of society in many different ways, whether it's age, gender, socioeconomic background, ethnicity; think of as many different aspects as you can. It immediately means that it's not just STEM, it's now STEAM. You have to bring the arts in. You have to bring in the behavioral science people so that they can help you think it through. At the ideation stage, you can do a couple of things. You can change course. You can say, that's not where we wanted to go, not what we intended; let's change course. Or you can stop a project there and then, and that's the best outcome. Don't wait until you've wasted a lot of your organization's resources and potentially done some damage to your people along the way, who may not feel this is a great use of what they want to get done in the organization. So, diversity is really important.

 

The second point I came across is from a really wonderful person in the U.S., Ira Chaleff, who created the concept of intelligent disobedience. This was a really powerful insight for me. It was actually based on training guide dogs. A guide dog is the only service dog that is taught to disobey, but it's taught to do that in an intelligent way. It goes to the curb, the owner says cross the road, and the dog actually turns around, stops, and faces the owner if it's not safe. The concept is about keeping the team safe, not about keeping me safe. If you translate this into business, when people are speaking up at the ideation stage, it's with the understanding that people speak up to keep the team safe: I'm preventing this team from putting something out there that can potentially do harm. If everybody understands that's what is going on, you react very differently to different points of view coming out. That was another really important thing. Ira teaches this to the military. In the military, as you know, if you get a lawful order, you have to obey it. If you get an unlawful order, you must disobey it. And then there's everything in between. Often somebody in a particular position in the military might see something from a different angle, but didn't necessarily have the tools or a way to raise it productively. Ira has done a lot of work with the military and other organizations, which I think is very powerful, and I think this is exceptionally powerful in the world of artificial intelligence.

 

The third thing was really around governance. Sometimes people say governance puts them to sleep: it's lots of reports and everything else. But what is at the heart of governance is continual learning. Having a board, having someone responsible who's looking at the projects the organization is doing, but also, to the best of their ability, at the projects every other organization is doing, to bring that knowledge in so you're not reinventing the wheel. What have other people found? Where have they stumbled? How have they been able to correct course? What has been a really successful use of AI? What's been a really bad use of AI? Let's bring all of that into the organization. Most importantly, on any sort of board that is structured around responsible AI, have people from external organizations. There's a concept called craft ethics, which is: this is the way we do it in this industry, or this is the way we do it in this business. Falling into that trap is very real. If you can have people from different industries come in and assess what you are doing, or help you with what you are doing, you will again see some unintended consequences come out. That was a lot, but I hope that made sense.

 

Cummings-Krueger: It was absolutely fascinating, in particular the research and what you were looking at. There's so much to comment on, but the one thought I keep coming back to is how heartening it is to hear this focus on humans still having their place. There's a lot of fear around humans' place with AI. I also didn't realize until I was talking with you how much of a collective effort is going on. And we're hearing more about deepfakes, where the videos that can be made are getting very real and can seem to show someone saying something abhorrent. Can you share a little bit about that? Because that was also really interesting.

 

Ward: Yes. Attribution of sources is very important, and there's a whole debate going on about this now, but I always err on the side of transparency. For example, if I were to use ChatGPT, and it is actually quite useful in this capacity, to create an outline on a subject, it just gets me through my writer's block. Just give it something, say I need to write a paper on this, maybe something like edge computing: can you just start it, and I'll take it as an outline? But I personally would attribute that I'd used ChatGPT to create the outline. The same is true with deepfakes: if you have created something that is not real, it is very important to attribute that it is not real. Now, I don't know why you would want to do something malicious with a deepfake, but sometimes you might do something useful with one. It's still important to attribute what it actually is. Where I think it starts to get interesting is in the metaverse.

 

Now, the metaverse is this other scary thing that's out there. Some of the technologies exist; some do not and will have to be created to make the metaverse real. But it's essentially about creating a reality, a world that you will be in and that will feel extremely real. Some people may have heard of a product called Second Life, which came out a long time ago. Even though that was avatar-based, people were still doing harm to each other: avatars attacking other avatars. There was some law written, in Europe in particular, to try to legislate around what you could do. You might think, how can an avatar hurt another avatar? But we are human, and we take in what is done to us, even if it's done to an avatar. When you go to the metaverse, you're going to be in a world that is exceedingly real to you, and your brain will be on the precipice of: is this reality, or is this the metaverse? Now, if somebody starts to do something malicious when you are at that point, and you can't differentiate what's real and what's not, that could cause someone great harm. If I could not differentiate whether somebody was about to attack me, whether it was real or not, I could have a physiological response that was quite damaging to me.

 

We need to be really careful with these technologies that are coming out and make sure we use them appropriately. Having said that, there will always be bad actors, and we need to be really aware of that. We need to make sure our legislation keeps up, and it's not at the moment. There are pockets around the world where there is good legislation, but generally it lags. Then we need to make sure our practices in business are very good. We say security is everyone's business; well, so is responsible AI. Everyone needs to own it, and we need to be very cautious about what we do.

 

Cummings-Krueger: Absolutely. The other thing, before I ask my next question: I just really appreciate you bringing to the forefront how essential and pivotal it is to have that diverse group, that diversity of thought. It is validating and fascinating how, on every metric you can choose, right down to corporate profits, outcomes improve when there is that diversity of thought. It speaks to the mentoring mentality we have at Menttium as far as that change in perspective.

 

Ward: Absolutely, it does. 

 

Cummings-Krueger: Speaking of change in perspective, what I'd like to do now is shift to one of the things you shared about your doctoral program, which was all focused on what you've just been sharing. You told me at the time that you were asked to do some deep reflection on your leadership style, and how extraordinary it was to realize how much insight you gained as a result. Can you tell us what that experience was like and what you took away from it?

 

Ward: Yes, it was quite extraordinary. Before you actually do the research component in this particular doctorate, you have to spend time looking at your own, what they call, leadership paradigm as it is today. Then, based on where your research is focused, you define what they call your aspirational leadership paradigm: where do you want to go, how do you want to develop? To do that you have to look at yourself very intently, and I would say there were lots of tears at times, because you're unpacking the past and you find things that you didn't anticipate you would find. I was the case study for this part of the doctoral program, and I had what they call 27 embedded units, which are just events or psychometric instruments, each of which had to be analyzed. And this is very iterative. As you go on and you learn, you need to come back and look at something through a different lens. You've learned something, and you come back to an event and say, I had a blind spot to what was going on. It was a very deep reflection.

 

As for the psychometric tools, fortunately I'm a hoarder, so I could find them all; I'd just kept them. But as I went through them, I was fascinated by how little I had learnt from them in terms of really taking them on board. I did a lot of work with emotional intelligence, and that was probably the exception where I really took it on board. But I never asked, here's a Myers-Briggs, here's the emotional intelligence: how do they interlink? I didn't have the richness coming out of all of the tools I had experienced through my career. In particular, when I started reading the comments: you know when you go to these courses and they give you results, and you scan the comments and think, oh yeah, they're okay; these ones seem a bit troubling, park those. I felt that people had taken the time to make some really nice comments, and I hadn't done them justice. Then I read through comments in several tools and saw warning signs about discrimination against me, and I thought, I didn't pay heed to that. I didn't listen, and people were actually giving me some very useful information, and I missed it. When I reflect, I think it was a degree of, I'm busy, I'm caught up, I'm getting ahead in my career, and you just park it rather than actually listening with intent. I found it fascinating to go through it all and draw out the richness from that.

 

Then of course you go through analyzing particular events, the big events. Some of them are positive, but most of the things you learn from are not so positive. That's probably where the tears come in. You really have to reflect very deeply on your own leadership, the impact you had on others or not, and then what you would do differently if you were in that situation again today. I found that very hard, very confronting. There were times where I just had to go for a walk: that's enough, you've taken on enough at that point in time. I think you need to know where that threshold is, then just chill out, go away, come back at a different time and re-immerse yourself. But if I hadn't gone that deep, I felt I would never have got out of it what I did.

 

So, my leadership style. Let me firstly talk about values. In doing that analysis and looking through all of those units, I was able to identify what my values were and really reflect on them. I came up with six key values, and I categorize them as fight, flight, and fit. There are two values that I will fight for, and I recognize now that I will go from zero to a hundred instantly when I see them violated: people not being treated fairly, equally, or respectfully, and protecting the vulnerable, those I think need me to protect them. I had to give examples of that, and I could actually feel myself going from zero to a hundred as I reflected on them. They're the ones I know I'll go in and fight for. The learning from that is that you often do a lot better when you calm yourself before you go in; you'll be much more considered.

 

A great example: in one of the companies I was working at, a key finance role had come up, and the best-qualified person for that role was a woman. I was sitting with the executive team, and the chief financial officer said, this woman is absolutely the best candidate for the role, but she's young, she's got a young family, and it involves a lot of travel, so we're not going to offer it to her. Well, you can imagine: zero to a hundred in no time. My leader, sitting beside me, very carefully just said, keep calm, and I did. I kept calm, and then, very considered, I said to the CFO, we don't actually know her situation. We need to go and ask her. She's the best candidate for the role, and I think the entire executive team agrees with that. Let's put it to her and see what she thinks, and let her say, this is the right time, or this is not the right time, or I could do it if I had this support. Let's just ask. So, I didn't put him down. I didn't go for his throat, which I felt like doing, shaking him and saying, no, you have to get over this. And asking her is exactly what we did. She came back and said, yes, I think we can make this work. I have family support, I'm happy to travel, I'd like to take the role. She was phenomenally successful for years. So, it's about finding those examples and then learning. In future situations, when I'm confronted with that, I know to calm myself; it's really important. Otherwise, it's just going to go off the rails if I go in boots and all.

 

My next set of values are flight values. These are values I would leave a company over. Mostly it's about what I call do good, oppose bad. If there are bad behaviors in a company and I can't change them, and I do have an example of this, I will choose to leave, because it's a very unnatural environment for me to be in and not one I'm ever happy in. The other one is to learn, change, and grow: if I don't feel I'm learning, or if I don't think there's an investment in growing people. There are two ways of addressing that. I can do it myself, and I have done that; I've invested quite a lot of my own personal resources in growth, and I'm fortunate to be able to. Or you leave to go somewhere else if it's not a learning culture.

 

The last one is fit, and that's where I'll adapt to fit in an organization. I went into an organization where people were definitely not owning their mistakes, so I decided to model the behavior. When I made a mistake, I modeled how you own it and what you do when you make a mistake. The first thing you say is that you're sorry. You try to make up for whatever you've done and make it whole again, though that's not always possible. The hardest thing to do when you make a mistake, the very hardest thing, is to ask somebody whether they forgive you. That's really humbling. If they say they have forgiven you, you have just strengthened that relationship enormously. If they haven't, then it means you have to go deeper. What is stopping that forgiveness from being forthcoming? What else do I need to do to make it possible for you to forgive me for my mistake? So, I modeled that behavior and people started following, and it was a much healthier environment. Sometimes you make mistakes in an organization and you just laugh: you've done something silly, you've said something silly, you laugh it off. But other things are much more serious, and being able to deal with that across the spectrum is really important. The other value was being present and accountable. If I don't feel people are present or accountable, again, it's really important to draw them in: how can I make you more present? Because you've got a lot to contribute, but you're not here in the room with us, so we're not getting the best of you in our discussion.

 

When I drew my leadership paradigm, and that's the other thing I discovered, it's not just one. It's predominantly transformational, with some visionary elements, a lot of authenticity, and a bit of transactional, because at the end of the day we need to get things done. So, that was my leadership paradigm; that was what it looked like. But I drew my values around the outside, and when I went to my professor, he said, I've never seen anybody draw it that way. They always put their values inside and their leadership outside. He said, why did you draw it that way? I said, because I see my values as the guardrails around my leadership style. For example, if you are in a troubled company, transactional leadership often goes on steroids: we start measuring everything in the hope that it will change, rather than leading the change. I find that my values have to push back on that. I see my leadership sort of like a blob, with my values pushing around it to keep it in balance with what I'm actually doing.

 

There was another great example. I actually joined a company, and I said I wouldn't do this. I said I'd contract, do my research, then go back into the workforce full time. But I got wooed into a company, and I'd been there before. What I found was that the company was extraordinarily different from the company I'd worked for before. So I looked at my leadership paradigm and found that it was out of kilter. The environment was drawing on the transactional leadership, which was making me feel out of balance, so I was able to usefully break down what was going on and what wasn't working for me there. That took a lot of the emotion out: either you can fit in that environment or you can't. I couldn't fit in that environment, and I don't choose to be in that environment, so I chose to leave. I've got the privilege of being old enough to have the resources to make those decisions. It's not always that way in your career; sometimes you have to adjust or fit just to give yourself time to go and find something else. So that was my leadership paradigm.

 

For my aspirational leadership, I looked at where I want to go: I want to work in the area of responsible AI. That's what I want to do. The leadership paradigm I'm trying to develop there is servant leadership. It's much more about empowering the team, coaching the team, having a diverse team, teaching intelligent disobedience, having a safe environment to learn those skills, and then empowering the team to engage in this as something we want to do. It's not about me having a vision that this AI is going to change the world, follow me, and all that sort of stuff. It's much more about empowering the team to do things responsibly and being there as a sort of guardrail for them in that experience. It's aspirational, it's what I'm working on, and it's where I think I can best go.

 

Cummings-Krueger: Lee, thanks so much for sharing all of that. When we discussed it earlier, I realized that it is such a microcosm of what a year-long mentoring partnership is really all about: pausing, taking that step back, recognizing what your values are, having the time to build your own self-awareness and see how that plays out. Having done all of that work, which is hard at times, then allows you to have that north star and be intentional. I appreciate you sharing all of that, because it was a little pocket of mentoring.

 

The other question I have for you, maybe taking a step back more broadly, is not just about you as a leader, though it’s very clear to anyone who’s listening that you have a mentoring mentality. I have learned over my years that whenever I’m talking to someone in the IT field, by nature you get a broad view of the organization you’re in, because you touch everything. You have gained a broad view over entire industries and, as someone based in Australia, a really rich global perspective along the way. You bring all of this understanding to your mentoring partnerships, and along with all the mentoring you do where you are, you’ve intentionally mentored with Menttium for 11 years at this point. 

 

I’d love to hear your perspective on what you’ve learned through the cross-company mentoring experience. What learning have you taken away, but also what have you found that your mentees have found most useful? 

 

Ward: There’s a lot in that. There’s so much richness in the culture of various companies and where they come from. U.S. companies, Japanese companies, German companies: all are very different. Even within the U.S., and I’ve worked for U.S. companies for a long time, New York is different to Dallas, which is different to the West Coast, so there are cultures within cultures that need to be understood and respected as well. Understanding why a German company works the way a German company works, or a Japanese company works a particular way. It’s not for me to change it, but it is for me to understand it and how it works. I think I fit most comfortably into a U.S. company because I just know that environment really well. 

 

I take that away: being respectful about the origins of companies. It’s often very useful to understand where they came from, because it sometimes tells you about their values. How were they created? Were there mergers and acquisitions along the way that brought different cultures into the organization? You get that sort of wonderful background of an organization; again, a bit of the richness that needs to come through.

 

I mentioned craft ethics before, and I think that is very important. You’ll hear it in the language of people now. Digital people talk their own language, so we’re probably the worst at all of this. But talking to people in different industries over the years, particularly with Menttium, you often hear the assumption that something is very particular to this industry and the way things are done. Is it the only way things could be done? Maybe not. So let’s have a look at what some other industries do and whether that might be adapted into this industry. Digital definitely goes across all industries these days; it’s hard to find one where there isn’t some form of digital in there. But I think what’s really important in changing an organization, in rolling out change, is organizational change management. So again, it’s about people. I have seen the most brilliant technical projects go completely off the rails because there was no attention to people. The technology will do what it is told to do, as long as you’re careful with data. But people need to go on a change journey. They need to understand: here’s where we are today, why do I need to change? What is so good about this future that I will actually opt into that change? That’s good communication. It’s understanding that not everybody in an organization will necessarily benefit the same amount, but the organization will benefit overall, so therefore we need this change. I would say that’s one area that I don’t see enough attention to across the industry when rolling out any sort of digital program. 

 

In terms of working with mentees, I always think you learn as much from them as they do from you. It is a very fair exchange in my mind. It’s a fair exchange because we know that one of the key things we can do for our own wellbeing is give back. Giving back through mentoring or volunteering or whatever is essential to our own wellbeing. If we leave nothing else with mentees who might be looking at this: understand that a mentoring relationship really is a fair exchange, and that mentors are generally very curious people, so we are looking for what we can learn from you. You’re in a different industry, you’re in a different country; what can I take away from this that I didn’t know before, that you’ve now shared with me? I had one wonderful mentee. She was actually in Australia. She worked a lot out in Outback Australia, and she sent me the most beautiful photos of where she was working. I thought, wow. We talked about workplace health and safety, and I was able to absolutely visualize what she was talking about because she was showing me a picture of what was going on. One of the things I always say is, why are you climbing a ladder when I can send a drone up there? Get off the ladder, send the drone up there, it can have a look for you. Those sorts of concepts start coming to life when you can see the situation. So, share what you’re doing. It really helps a mentor to be able to visualize what’s going on in your business, to the extent that you can. It’s useful. 

 

Cummings-Krueger: Great example. One last question, and it’s a short one, but I’d love to hear if you have a favorite quote or a favorite motto.

 

Ward: I’ve got a couple, and they’re ones that crystallized while doing my PhD. The first one is “facts are scarce, but opinions are plentiful.” The reason I say that is that so often we get into an argument and it’s all opinions; it’s not fact-based. They’re just opinions. Everyone has a right to an opinion, and once you free yourself up from that and realize what’s going on, you can actually listen a lot better to the other person, because you understand we are not fighting here to be right or wrong; it’s just an opinion. Let it be and try to listen really carefully to what the other person is saying. We say, listen with good intent. I don’t have to agree or disagree with you in the process of listening. I am just listening to see what you say, and then you free yourself up from all of that emotional baggage about being right or wrong. 

 

One of our military leaders said something that I think is really important, and that is, “the standard you walk past is the standard you get.” There are moments of truth in an organization. If you see something that is below the standard you expect in your organization, step up to it. It doesn’t mean you need to do it in a very visible way, necessarily, but you need to have a conversation with the person about what’s going on: why that behavior was not aligned to what you know is the culture of the organization, what effect they had on the other person, and what they’re going to do about it. It’s a slippery slope; if the standard starts slipping, it just keeps slipping. So, be really conscious of the standard of behavior in the organization. The last one would be “look after each other always.” You look out for the person you’re working with, your team: are you looking after each other? If everybody does that, then we will naturally have a better workplace and we’ll enjoy the workplace a lot more. 

 

Cummings-Krueger: Wonderful. Lee, thank you so much for such an enlightening conversation about a subject that many of us, and I can certainly speak for myself as a rudimentary user of these technologies, don’t fully understand. I appreciate how you were able to show us your world with such clarity, which is such a complex world, but bring it down once again to the human, the humanness of it, in every case. There is so much to unpack in all of your different stories. I really appreciate everything you shared today. Just fascinating, so thank you, Lee. 

 

Ward: Thank you. It was a pleasure. 

 

Cummings-Krueger: Lastly, before I leave you, I also want to thank you for the ethical work that you do that is so important as we’re heading deeper into these uncharted waters. I love having you on our side. 

 

Ward: That’s right. Thank you. 

 

Cummings-Krueger: I also want to thank all of our listeners for joining this Menttium Matters podcast. If you enjoyed this episode, please feel free to share it with friends and colleagues and if you’re interested in additional resources, you can find our show notes on the Menttium website. We look forward to having you join us for our next interesting conversation.

Additional Resources