489: Artificial Intelligence Can Create Real Liabilities for Leaders / Can ChatGPT Speak Bank?
This show is sponsored by FIS. Welcome to Breaking Banks, the number one global fintech radio show and podcast. I'm Brett King, and I'm Jason Henricks. Every week since 2013 we explore the personalities, startups, innovators, and industry players driving disruption in financial services, from incumbents to unicorns, and from cutting-edge technology to the people using it to help create a more innovative, inclusive, and healthy financial future.

I'm JP Nichols, and this is Breaking Banks. Artificial intelligence offers both risks and rewards for business leaders. On one hand, AI has the potential to improve business operations, increase efficiency, reduce costs, and drive innovation. AI can be used to automate repetitive tasks, analyze vast amounts of data, and provide valuable insights that can inform strategic decision-making. However, there are also risks associated with AI, such as the potential for biased or inaccurate decision-making, the possibility of job displacement, and concerns around data privacy and security. Leaders must carefully consider these risks and rewards when implementing AI in their organizations and ensure that appropriate safeguards are in place to mitigate potential negative impacts. But is there more to it than that? After all, I used the AI-powered chatbot called ChatGPT to write that very introduction, so maybe it has its own built-in bias.
So I'm excited to talk to a human being about this. I'm welcoming back to the show my longtime friend Al Calgar. Al is an attorney and the author of the book One Nation Under Algorithms: The Threat of Algorithms and AI to Civil Rights, Legal Remedies, and American Jurisprudence. In it, he looks at the growing use of algorithms and artificial intelligence and the potential threats and new liabilities they may create for business managers and board members. We're seeing an explosion of AI into the mainstream right now, with AI-created art and writing, including the sudden massive popularity of ChatGPT from OpenAI. Now he has a new article coming out in the technology journal at Case Western Reserve University called "Corporate Fiduciary Duty in the Age of Algorithms."
Welcome back to the show. What's changed since you wrote your book in 2020?

I think the biggest change is more awareness of the ramifications of AI. As ChatGPT and all these other new technological tools have come out, there's simply more exposure, to society in general and to business decision-makers, about these new tools that are available, but at the same time the risks of using these tools. I found it funny, even last week, listening to some journalists talk about using the chat function and saying, wow, it came up with false facts that it presented to me as legitimate articles. One journalist was even complaining that she found an article with her byline on it that ChatGPT had written, that she had never actually written, and it was full of false information, but somehow ChatGPT created it. And so it's like, well, there's nothing new about these harms and risks; people have been talking about them for years.
At least I've been talking about them since my book was published a few years ago. It's just that now society in general is starting to become aware of these risks, although even now they're not aware of the depth and breadth of these risks to business operations and business leaders.

Well, it's kind of like 20 years ago, when it was about Wikipedia and, you know, shared crowdsourcing, and how do you know this is even right? And there has developed a fairly decent set of editors and redditors that go around and try to keep those things fairly on point. I know the same thing is happening with AI and these engines, but as you say, there's still a lot of risk. What in particular caused you to write this article?

Well, I think it's because I was reading more and more about the impact of AI on corporate operations and strategic decision-making, and it got me thinking about the inherent deficiencies of AI, and what happens to the unwitting, unknowing corporate executive, board member, anybody who's a corporate fiduciary, who either relies upon AI without the proper self-critical analysis of it, or ignores it because they don't understand it, and therefore is not using the tools available to them that they should be using in the age of algorithms. And that got me thinking, okay:
We have all these traditional standards of fiduciary duty. What does a senior manager or a board member owe to their shareholders, and in some instances even the community around them, in terms of making proper decisions, using proper analysis, proper bases of decision-making? And how is that going to be impacted by AI? What it boils down to is this: on one hand, there could very well be a breach of fiduciary duty if executives and board members don't use these AI tools that allow them to crunch far more data than they ever could before, to make decisions more quickly, more efficiently, and usually more correctly. The flip side is that anybody who relies upon AI like it's some omnipotent oracle from the Greek gods, and it's always going to be right, is simply asking for trouble, because they are not going to be fulfilling their fiduciary duty to do their own critical analysis and to only rely upon proper opinion givers, whether it's a human expert or an AI expert, in order to fulfill their duties to make strategic decisions and oversee their companies.

Well, let's break that down a little bit. Let's talk for a minute about fiduciary duties. What are the fiduciary duties of an executive or a board member, whether it's, as you said, a small startup with a handful of investors or a publicly traded company? What does that look like in the eyes of the law?

Well, the breakdown of the categories of fiduciary duties changes from time to time, but they boil down to two areas.
One is what's called a duty of due care, which requires... well, let's back up. A fiduciary, the person who has these obligations, is anybody who's going to make a decision so impactful on a corporation that it could affect the equity value of shareholders. Normally that would obviously be members of the board of directors and senior executives, and when you get to smaller companies it would be, you know, the managers and members of a limited liability company, or the general partners in a limited partnership.
So what fiduciary obligations do those fiduciaries have? They can be broken down into a duty of due care, which says that any time you make a decision, you're supposed to use the best means of analyzing that decision: the best way of collecting the data to make that decision, the best way of understanding that data. The other fiduciary duty is what's called a duty of loyalty. Traditionally that has meant you don't get into conflicts of interest, but it has been expanded more recently to include a duty, for transactions that would come before you, to buy and sell an asset, to merge, that sort of thing, to completely analyze that transaction to the best of your ability, and to not defer too much to others, thereby relying upon their analytical process instead of yours. It also requires, on a day-to-day basis, that if your company is involved in an activity that could be harmful to society, whether it's making food or doing things that are potentially environmentally harmful, you have a duty to make sure that the company is exercising the best means possible to be in compliance with the law and to not have those potential harms actually occur. So that's how the duties are basically broken down by the courts.
So this is pretty long-standing in business law and case law, and with really slight variations it exists in all 50 states and in most countries, right? Some version of this says that if you're a manager or director of a business, you have some responsibility. So how does AI change that, and what are both the upsides and the downsides of AI for those with a fiduciary duty?

Well, the benefit is that any fiduciary should be able to fulfill their fiduciary duties more easily with proper AI tools, because once again, AI, whether you're a scientist or a doctor or an executive in a company, allows you to have information and data crunched for you in far greater volume than any human ever could collect, let alone analyze. And usually AI is great at drawing proper conclusions or correlations from that data, including correlations that a human wouldn't even think to make. There have been instances, for example, in the medical field, where scientists were researching, let's say, the efficacy of a drug, and the AI they were using to research it actually said, hey, there's a whole other benefit or ramification of this drug that you didn't even consider, and the algorithm discovered that itself. So an executive or a board member is going to have this powerful tool to make decisions more quickly and better, and, once again, even come up with alternative decisions that perhaps were not even brought to that board member by, say, the executives, because nobody even thought along those lines. So that's all the best-case scenario.

Yes, but we live in the real world. What actually happens, or can happen, and in some cases does happen, with things like AI and algorithms to get in the way of these fiduciary duties?
Well, any algorithmic program, whether it's used by a business person or in the medical field, is going to have certain inherent deficiencies. Number one is simply the abilities of the designers who came up with the algorithm. If you hire a company to design an algorithm to analyze which of your current income streams you should sell off because you don't want to be in that business, well, how do you know that that designer has sufficient knowledge about business practices? They're probably not going to have an MBA; they may not even come from a business background, such that they know what questions to ask, let alone how to design the algorithm to start creating those correlations.

Yeah, this gets into the black box problem, right? Yes.
Yes, and then the second problem is that AI needs data; that's how it operates. But data can have multiple problems. First of all, a database that is not sufficiently broad may not have enough data to come up with the proper correlations. Secondly, data, whether it's private company data or public data, is imbued with decades and decades of discrimination and bias, so an algorithm could pick up on that historical bias and discrimination and basically say to itself, and I don't want to over-humanize the AI process, but: the most efficient way to come up with this decision is to look at history. For example, in hiring decisions, you hired white men because they were the ones that traditionally have been most likely to fill a C-level position. So why do I bother looking at women? Why do I bother looking at people of color? Why do I bother looking at anybody who didn't graduate from an Ivy League school? Those inherent historical biases and discrimination can get reiterated and concentrated by an AI process.

So part of the challenge is that AI learns from history, but embedded in history are practices we may not want to repeat, so it ends up repeating the sins of the past. Let's click down on that hiring example, because in your article you actually talk about a company that did that. Part of the challenge is that it seems like a good idea: hey, if these people have been successful, why don't we reverse engineer what their traits are and hire more people like them? But then it ends up being, like you said, that the implicit biases are not necessarily picked up.

Right. This goes back, first of all, to the competency of the designers. They may not know that the questions they're asking are full of biases, so they are introducing their own biases in the design process. Secondly, as we've been saying, any AI program that needs to find correlations could find them due to improper design, or the algorithm itself, through the machine learning process where it teaches itself how to act more efficiently in analyzing data, could pick up on these historical biases. And to make matters worse, once the machine learning process starts, you don't know where it's going to end up. Nobody knows where the software has taken itself, has machine-learned itself, to make these correlations. So once it starts spitting out results, you don't even know if those correlations were properly drawn, because there's no way, at least in many instances, to analyze how the analytical process happened.
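To make the bias-feedback point concrete, here is a minimal, hypothetical sketch, not from the interview, and with made-up groups and numbers. It builds a synthetic "historical hiring" dataset in which past decisions favored one group regardless of qualification, then fits a naive frequency-based model. The model faithfully reproduces the historical skew, which is exactly the failure mode being described:

```python
import random

random.seed(0)

# Synthetic historical hiring records: (group, school, hired).
# Past decisions favored group "A" regardless of qualification.
history = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    school = random.choice(["ivy", "state"])
    hired = random.random() < (0.7 if group == "A" else 0.2)
    history.append((group, school, hired))

def hire_rate(records, group):
    """Fraction of past candidates in `group` who were hired."""
    outcomes = [h for g, _, h in records if g == group]
    return sum(outcomes) / len(outcomes)

def score(group, school):
    """A naive 'model': score a candidate by the historical hire rate
    of people who look like them -- pure pattern matching on the past."""
    matches = [h for g, s, h in history if g == group and s == school]
    return sum(matches) / len(matches) if matches else 0.0

print(f"historical rate, group A: {hire_rate(history, 'A'):.2f}")
print(f"historical rate, group B: {hire_rate(history, 'B'):.2f}")
print(f"model score, A / state school: {score('A', 'state'):.2f}")
print(f"model score, B / ivy school:   {score('B', 'ivy'):.2f}")
```

The equally arbitrary candidates get very different scores purely because of how their group was treated historically, and note that dropping the `group` column would not fix this if any remaining feature (school, zip code) correlates with it.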
So the historical discrimination can be reintroduced and amplified in some very scary ways. And going back to the whole idea of a fiduciary: if you have allowed an AI program to make HR decisions that result in discrimination, or you're in fintech and you've allowed an AI program to end up redlining again in a loan process, you could be found to have violated your fiduciary duty to your company by allowing that AI process to continue and making your decisions based upon it, without doing your proper critical thinking and questioning and double-checking.

So I want to come back to that, because it's a really good and important point, but first I want to go back to the correlations. As you're saying that, it strikes me that this is just a classic human error, right? Confusing correlation with causation. And you talk about how the AI might use the wrong weightings in order to come up with those correlations. And then we get back to the humans that are relying on this. So what's the fiduciary to do? One alternative is: well, I just won't use AI. But that doesn't seem very realistic, both with the advance of technology and the many benefits that you describe. So how can I double-check this when, like you said, it's hard because of the black box problem?

Well, before I answer, I want to throw in one other fundamental problem, and that's that AI is never going to be perfect.
It would cost far too much money to develop an AI program that's going to be 95 percent effective, 99 percent effective, between the hardware you would need to run it and the analytical process it would have to go through.

But that's okay, because humans aren't perfect either. But we have some unique challenges and risks when we're relying on algorithms instead of relying on people, right?

Right, right. Because, once again, we go back to this idea that you can't treat an algorithm like it's an omnipotent oracle, or an omniscient oracle, let's use that word. So the fiduciary is going to have an obligation, just as if they had hired an expert to advise them on a financial decision or an operational change, to question it. They can't just defer to the AI process. They have to ask, just as they would ask of an expert: what expertise does this expert actually bring to this question? What did the expert look at to come up with the conclusions? Is there something the expert missed? And then, using the expertise that a human has and AI processes don't, ask, based upon years of experience and even professional intuition: what doesn't sound right here?
So the individual fiduciary needs to make sure that the final step of any process using AI is a human analysis, questioning it and double-checking it. To go one step beyond that, if fiduciaries are going to buy AI programs, they should start insisting that double-checks be built into the AI programs. Obviously an AI program is going to be cheaper if it doesn't have all these layers of double-checking and transparency and everything else, but you get what you pay for in that case. And if the fiduciary relies upon a cheap program and then finds out that it didn't have adequate transparency and double-checking, they're going to be liable for failing to use proper AI, just as they would be liable for hiring an incompetent expert.

Well, one of the core challenges here, of course, is that hindsight is 20/20. When we figure out that a decision we've made is a bad one, it's easy to look back and think of things that we would have, should have, could have done differently. And when shareholder lawsuits emerge, the common areas of defense fall under business judgment. How is business judgment impacted by AI and algorithms?

Well, once again, it's a two-edged sword. The business judgment rule says that board members, or any fiduciary who makes a decision, will not be personally liable if that decision is wrong, provided they acted in good faith and acted as a reasonable board member would. So as a board member, you're not expected to be perfect.
You're just expected to do your job. So the business judgment rule is going to say: well, if these modern AI tools are out here, a reasonable board member would use these tools. So, once again, if you start saying "I don't understand this modern technology crap," you could be looking at losing the immunity of the business judgment rule, because you're not acting like a contemporary business person would act. The flip side is that you are obligated to act like a reasonable fiduciary, and that means you can't just delegate the decision-making process to other people, or in this case, to other technology. So if you don't use the AI tools, you could be liable for not meeting the minimal standards required by the business judgment rule. If you do use them, but then rely too much on them, you could also be violating the business judgment rule by not acting as a reasonable business person should in doing the analysis and critical thinking and double-checking.

Have we seen much in the way of case law or precedents around not using AI when that would have been reasonable and prudent?

There has been some shareholder litigation where shareholders lost money and sued the brokerage or their financial advisor on the grounds that the AI that was used was too simplistic and cost them money, because it got them into investments that were not appropriate for them, or made decisions where, if a human had stepped in, the human would have said, no, we're not going to make that investment.
There have been employee actions brought, mainly in the public sector, where, for example, a school district on Long Island used a very simplistic AI program to evaluate teachers, and teachers were given promotions or raises, or were fired, depending on the scores the AI program gave to the school board. It turned out that one of the best teachers the school district had was put in the most negative category, simply because she was so good that the AI program said: I don't know how to fit this within the standards I've been given, so obviously this person is not acting within the standards, so they must be wrong. The AI program was not sufficiently designed to say: well, if they're better than the standards, yay for that teacher.
So those are just two examples of where AI programs have already resulted in complaints and litigation. But once again, the whole industry is so new that it hasn't gotten to the level of the courts looking at what AI programs fiduciaries should be looking at, or whether there's over-reliance on AI programs. In fact, it's even scary, because what I'm reading in the legal part of the world is: gee, how can completely non-human corporations be created and run using AI, and isn't it fascinating that we remove humans completely from the idea of running a company? It almost gets into some really bad sci-fi.

Decentralized autonomous organizations.

Yes, exactly. Yes.

Yeah, we can just put everything on a smart contract on a blockchain and retire, right?

Right, right. And we'll ignore the fact that all sorts of black markets will be using this sort of technology to completely hide their tracks as they launder money and import and export illegal goods. And we'll protect all the investors, because they'll say: well, I just relied upon this AI program to run the company, and it's not my fault if it resulted in contamination of the local community, or health problems for all the workers, et cetera, et cetera.
Well, as you say, it's still early, so we're at a stage where we're envisioning both the tremendous upside and some potential downside. What are the recommendations or best practices? How early are we in this, and are there best practices yet?

There are the very fundamentals of best practices, which are as simple as: (a) a human should be the final decision-maker, regardless of the AI process used, and (b) any good AI should have a component of self-checking or self-criticism, and means of transparency. In other words, overcome this black box problem of nobody knowing how the AI program worked by requiring that there be some sort of tracking mechanism, if you will. Beyond that, not much has happened yet. It distresses me, because what I see happening is what can happen when people start making decisions within their own little spheres, their own little vacuums. You see legislators trying to legislate against AI, whether it's self-driving vehicles, or the EU looking at the use of robotics in a whole wide variety of situations. They're not necessarily asking the right questions. They're not looking at all the ramifications, so what they come up with either won't answer all the problems, or will actually be negative, because it won't address the problems but will defeat the growth of the AI industry and its ability to get better. So ultimately, I put on my free-enterprise cap and say: industry leaders, and not just people within a corporation, but consumer advocates and plaintiff-lawyer organizations, should be working on standards that set the criteria for when AI should be used and when it should be questioned.
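The "tracking mechanism" mentioned as a best practice is essentially a decision audit log. As a hypothetical sketch, with illustrative field and function names rather than any standard's actual schema, every model output could be recorded with its inputs and model version, so the analytical trail can be reconstructed later:

```python
import hashlib
import json
from datetime import datetime, timezone

AUDIT_LOG: list[dict] = []

def log_decision(model_version: str, inputs: dict, output: str) -> str:
    """Append an auditable record of one AI decision; return its id."""
    record = {
        "when": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "inputs": inputs,
        "output": output,
    }
    # A content hash makes later tampering with an entry detectable.
    record["id"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()[:16]
    AUDIT_LOG.append(record)
    return record["id"]

rid = log_decision(
    model_version="credit-scorer-v2.3",   # illustrative name
    inputs={"income": 72000, "region": "NE"},
    output="approve",
)
print(f"logged decision {rid}; {len(AUDIT_LOG)} record(s) on file")
```

In a real deployment the log would live in append-only storage, and a review process would sample entries to check for bias and drift, which is what turns the "black box" into something a fiduciary can actually question after the fact.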
And using those standards, courts and legislatures can come up with court rulings and statutory law that will then set the standards for fiduciaries in the future, standards based upon adequate grounds and not just made up on the fly.

Well, by definition, legislation and regulation always lag innovation, but I agree with you. I think we need to take this upon ourselves, to try to do the right things, to think a couple of steps ahead about disparate impacts and unintended consequences, and to start putting some of these guidelines in place. So, Al, how can people find out more about your work and your writing?

I frequently publish articles. As you mentioned, there's one coming out in May.
I obviously had the book that I published in 2020, just before COVID hit, so it was published in complete silence, so to speak. Beyond what I'm doing, there are experts in the field, and you'll come upon them as you read things. I think, once again, it goes back to the problem that there are lots of people writing articles and giving opinions who are novices when it comes to AI and are just now thinking about this. So, just like with AI itself, you can't rely too much on what any individual person is saying, and I will be the first to say that what I'm saying now could prove to be wrong in five years, because who knows where the technology is going to take us.

It could be proven wrong in five weeks, right? That's how fast things are going.

Yes. Yeah.
Well, nonetheless, I appreciate you rejoining us and sharing the latest and greatest on this, and I think we'll have you back again in five years or five weeks, whichever comes first.

Okay. Thank you very much.

When it comes to global payments, there's no standard consumer or one-size-fits-all solution. Each market's payment landscape is unique, and so are its participants and your customers. Start with the Global Payments Report from our partner FIS. With data on more than 48,000 consumers across 40 global markets, the Global Payments Report breaks down how consumers pay today, both online and at the point of sale, and projects how behaviors will change in the future. Get up to speed with the fast-changing payments landscape and position your business for future growth. Download the Global Payments Report today by visiting worldpay globalpaymentsreport.com. FIS: advancing the way the world pays, banks, and invests.

This show is brought to you by Alloy Labs. As much as we love talking on the show, we believe that action is more valuable than talk. Alloy Labs is the industry leader in helping fearless bankers drive exponential growth through collaboration, exclusive partnerships, and powerful network effects that give them an unfair advantage. Learn more at alloylabs.com. Alloy Labs: banking unbound.

Besides the downside risks, much of the upside has yet to be realized from chatbots in financial services. Chatbots are helping to reduce costs but doing little to meaningfully improve the customer experience. Can ChatGPT help the sector do better? Breaking Banks Asia Pacific hosts Rachel Williamson and Simon Spencer, our resident AI expert, tackle the subject, ask ChatGPT, and offer the use case of a Singapore-based insurtech firm that's using big data and AI to help deliver its product.

Hello, and welcome to our bank. My name is and I'm here to assist you with any inquiries you may have. Before we proceed, how has your day been so far?
I hope you're doing well. It's always nice to have a friendly chat with our customers. So, how may I assist you today? Is there anything specific that you need help with? If you're comfortable with it, I'd love to know more about you. Do you have any exciting plans for the weekend? Have you tried any good restaurants or cafes lately? As your bank representative, it's important to build a rapport with our customers and make them feel comfortable. So if there's anything else you'd like to chat about, or if you have any questions about our bank's services, please don't hesitate to ask. I'm here to help you in any way I can.

That's ChatGPT, when we asked it to pretend to be a bank representative in Malaysia and strike up a friendly conversation with a customer. Awkward. In reality, banks are only just starting to think about how a conversational AI like ChatGPT might be useful to them. To discuss whether ChatGPT could ever speak bank, I'm joined today by Breaking Banks co-host Simon Spencer, who is a bit of an international AI expert. Later, I'll speak with Sandra, CEO of Igloo Insure, which is a Singapore-headquartered insurtech that's already using big data and AI to deliver its offering.
So let's bring in Simon. Good morning, Simon.

Good morning. It's great to be back on the show again.

Simon, you worked as head of data and AI with a big insurance firm, Suncorp, so tell us how it's been used there.

Look, before I go there: AI is something that I think banks have been looking at for more than a couple of decades now, you know, in various incarnations and iterations: expert systems, then machine learning and chatbots and AI, and then, most recently, OpenAI and ChatGPT. So it's a story that keeps coming back again and again, and at various points I think people have done some really interesting stuff. You know, NAB did some really interesting stuff with MyCyberTwin, and Liesl Yearsley, who went on to work at IBM and then founded her own startup.

Yeah, what an absolute pioneer.

And I was actually really privileged to work with Liesl, and with those guys over at ANZ, to build a prototype intelligent agent, and that was back, goodness knows how many years ago, a long time ago, back when I was at ANZ. These were early-stage technologies, and in each case it's about: how do I use this within some sort of guardrails? More recently, when I was over at Suncorp, I was there for about four and a half years doing a variety of things, including trying to move their AI story and their machine learning story along. What I saw was actually an organization that was really, desperately trying to do the right things, and there's actually some really good stuff that's been done there.
You know, the work they did with their chatbot, Sunny: they were trying to add smarts to it. They weren't just trying to create a dumb robot that could use some sort of knowledge base; they were actually trying to add some sort of ghost in the machine, some ability to do a bit more than just respond to simple queries. I think there was some interesting work done there, and, yeah, I'll take a moment of credit for it perhaps, but there were some great people driving it. They were actually looking at ethics, looking at how do we use AI and how do we use machine learning, and there was a real desire at Suncorp, and they're not alone, to be beyond reproach, to use this in a way that was actually about driving a better customer outcome. A few people who've known me have heard me talk about the gymnasium and the personal trainer, about adding more value to the experience, not just being transactional, which is obviously what a gymnasium has to be. And there was a real desire at Suncorp, and I think there still is, to be beyond reproach and use these technologies in a way that actually drives better outcomes for customers.
I think that's a path to goodness. Obviously, where we are right now, though, is that it's really still early stages, and they are very cautious about what they do and how they explore this, because it's easy for these things to go wrong.

Yeah. We reached out to a number of bankers for this episode, and no one was comfortable yet talking about how they might use ChatGPT. In particular, one person said to us that since ChatGPT correlates probable answers, as opposed to finding an exact truth, that could be a deal-breaker for using it in any meaningful way for customers, at least for now. Because, for example, you can't infer probable financial information: your bank balance is either $100 or it's $5, and ChatGPT, at least, can't do that yet. So let's talk about where the game-changer could be for things like ChatGPT and AI more generally.

Look, this is really interesting. Again, going back in my career, I was exposed to web technologies before they were even called web technologies, when it was SGML, and we experienced these technologies sometimes in a really early, nascent form. And I think we're seeing that with OpenAI and ChatGPT. You're seeing the equivalent of Windows 3.1, which at the time was a horrendously insecure, unstable operating system, a bit of a ripoff of the Mac operating system, but was actually a game-changer for Windows and a game-changer for Microsoft. And obviously we're seeing the equivalent of the first version of the web, which again was a horrendously immature platform that didn't have transaction management, security, and a whole bunch of things that you'd take for granted today. So you're seeing ChatGPT and OpenAI as the first precursor of what's to come. The difference, of course, is that iterations of some of those previous technologies, like the web and the operating systems and so forth, were relatively slow; they took, you know,
months and years to produce new versions. With ChatGPT, you can see iterations happen in weeks and days, and later on in minutes and even less, so we could see these technologies mature quite rapidly. It's interesting.
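The banker's objection quoted a moment ago, that a language model "correlates probable answers" while a bank balance is an exact fact, is usually handled by grounding: the model never invents the number; a system of record supplies it, and the model only phrases the result. Here is a minimal hypothetical sketch; the data, routing rule, and function names are illustrative, not any bank's real API:

```python
# Hypothetical system of record: exact facts live here, not in the model.
ACCOUNTS = {"cust-42": {"balance_cents": 10_000}}

def get_balance(customer_id: str) -> int:
    """Authoritative ledger lookup; raises KeyError for unknown customers."""
    return ACCOUNTS[customer_id]["balance_cents"]

def answer(customer_id: str, question: str) -> str:
    """Toy router: factual queries go to the ledger. A real deployment
    would let the LLM phrase the reply, but never generate the figure."""
    if "balance" in question.lower():
        cents = get_balance(customer_id)   # exact, from the ledger
        return f"Your balance is ${cents / 100:.2f}."
    return "Let me connect you with someone who can help."

print(answer("cust-42", "What's my balance?"))
```

This is the same shape as retrieval-augmented generation or "tool calling": the probabilistic model handles the language, and deterministic systems supply every fact the customer could dispute.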
Somebody said to me, "I'm an expert in ChatGPT," and I go: it's only been around a short period of time; I'm not sure you can really use that term. But I think we will see a very rapid maturation. We also need to see these things for what they will become, rather than what they are right now, and work within the guardrails. Like, you wouldn't have built an air traffic control system on Windows 3.1. I'm sure someone probably tried, and it was probably disastrous. So there are some things you probably shouldn't be trying to use ChatGPT for, but there are definitely things you could use it for right now.

Yeah: summarizing documents, producing an understanding of the context of things. A lot of organizations have been looking at AI and machine learning to help understand the next best conversation, and they tend to use very transactional sorts of signals to try to figure out what to talk to the customer about next. With ChatGPT, you can have a much richer understanding of the customer's experience, the sorts of content they might want to consume, the right way to deliver a message, and then synthesize the correct message and personalize it. So I think we'll start to see ChatGPT used for personalization, for ways to frame the right message. If you're going to turn someone down for a loan, or approve someone's loan, you can actually put the message in a way that they will understand, or at least give them a perspective that makes them go: actually, that wasn't a negative experience; that was a personal coaching experience, again going back to that personal trainer. So I do think organizations should be looking very closely at this. You know, we've had a decade, two decades really, of organizations building what I tend to describe as dumb veneers against even dumber backends. They might be slick, they might have swipe left, swipe right, and they might look great, but they're relatively unintelligent. They're transactional.
They're transactional. I think we're now going to see a decade of people building very sophisticated experiences, potentially against equally dumb backends, but those sophisticated experiences will be contextually aware. They'll be sensitive to what's going on and they'll be much more personalized, and I think ChatGPT and OpenAI will open the doors to those sorts of things. You've mentioned guardrails a couple of times now. I know you're a longtime proponent of the potential for AI in all facets of our financial lives, but what are those guardrails and where are the limitations, particularly for banking? What do you want in place to make it work in this sector, which may not be necessary in, say, entertainment or retail? Yeah, look, it's a really good point. Just because you can do a thing doesn't mean you should do a thing. I did some work a long time back involving the casino industry and the techniques they use to get people to continue to stay on the machines in Vegas, and some of the capabilities they have are quite insidious and quite toxic. Those techniques have obviously been used in computer gaming, and in many cases they came from that world, but in other areas they could be used for lots of different purposes. There's a point there where that's wrong, where you're working against the customer's self-interest and the customer's best interest. And as I said, I was really gratified to see a good conversation happen across several banks looking at: how do I use these technologies?
To enhance the customer's life, to be more relevant in the customer's life, to be, as I said, that coach, that personal trainer that delivers a better outcome. It was based on a hypothesis, the hypothesis that a more successful customer is going to be a more profitable customer to the bank, and that just gaming the customer to try to get a few extra coins is actually not a good strategy. It's a strategy that lacks integrity, but it's also a strategy that probably won't generate strong financial returns. Ultimately, I think we'll start to see organizations say, I don't want to just have an AI endlessly try to do a cross-sell, the "do you want fries with that?" I actually want AIs to say, well, how can I help, and if I can't help, get out of the way, and find ways you can be a part of that story and let that be part of that customer journey in a positive way. So those conversations are being had inside organizations like Suncorp and others, and I think they help define what the guardrails are. How do you deliver value? How do you help manage risk? How do you deliver better prosperity and better outcomes? And obviously the reverse of that is, if you're not doing those things, then you're over the other side of those guardrails. Those guardrails are really important in a sector like insurance, where you've come from, because you are playing a part in protecting people from huge risks.
Yeah, at its most extreme: floods, fires, earthquakes, volcanoes and so on. I'm not sure how many of those you guys actually insured for. But if you could program the next iteration of ChatGPT that an insurance firm might actually pay for, what would you want it to be capable of? Oh goodness, how long do we have here? I'll start my timer. Look, I think that insurance companies are about understanding risk, and risk is a very complex interplay of a lot of different factors. So understanding context, understanding the larger environment. Ultimately we're going to see risk models that model the world, model the economy, model the environment, model a whole bunch of different financial factors, so that you can understand a system, a complex interplay of risks and so forth.
You can then hedge against the things that can happen in life. And so it's about how banks and insurance companies provide that ability to understand risks and then protect customers from the various things that they think can happen. Yeah, an interesting conversation we had with Suncorp was about flipping insurance on its head.
So instead of, you know, paying for insurance, we first started the conversation with: these are the sorts of things that could possibly happen to a business like yours. If you're running this sort of business, the fryer can catch fire; if you're running a business with lots of inventory, you can get burgled; if you're running a digital business, you can get hacked. Understanding the probabilities of those risks actually happening to you, and then saying, well, guess what, we can protect you against those. Here's the hard data that shows how many times these sorts of things happened and what the impacts were to businesses like yours. We want to protect you from this happening, and when you have that particular protection, guess what: if these things were to happen, you're safe.
You're okay, you can sleep well at night. It's sort of flipping insurance on its head a little bit and saying, we're actually here to protect you. And I think, again, using AIs to run simulations, to run models, to start to understand the actual type of business you run. If we understand that business, we can then say, well, this is the sort of insurance we think you probably should have, based around the sorts of risks you're likely to have, based upon the unique nature of your business. So that sort of highly personalized risk pricing, and also very transparent risk pricing as well, I think those things are coming. That idea of insurance companies protecting people more is quite different to the way people perceive insurance companies now, isn't it? Which is: I will pay you money, you might pay me out, and only if my accident or problem meets all the T&Cs.
Yes, that would be a huge change for the industry. Yeah, absolutely. And look, besides Suncorp, I've been working with another insurance company that's doing some interesting work in the climate space, and they have a parametric insurance product. So basically, it sets a series of events, and if these happen, as parameters in the insurance policy, they automatically pay you out. They just let you know: oh, you may not have known that your wheat crop or your wine crop has perhaps been compromised by 30% because of frost, or maybe because of smoke damage from a bushfire that was 20 kilometers away. They know about it, and so they literally just send you a check. That to me is a really interesting sort of model, where you have this parametric approach and, when these parameters are reached, you get paid. And I think that might become more and more common.
That is fascinating. Now, let's get back to ChatGPT. There are a lot of people who are tinkering away with it to see what it can do, much like our next guest, Sandra Ernst. But there are also a lot of people who are dismissing it, and as a journalist, I have seen what happens when people dismiss new tech out of hand and think it's not coming for their jobs, because generally it always does. What should people be watching closely with this new application? We're seeing ChatGPT as a prototype. It's running within a sandbox; it has lots and lots of limitations. It's running within a sandbox deliberately: they don't want it to start trawling the entire internet and becoming so dynamic and sort of polymorphic that it is changing and evolving too rapidly. So you have to see it for where it is. It's an experiment right now. There are lots of companies building really interesting integrations, and I think we will see a flood of people experimenting around an experiment, and that's great. But we're also going to see a point in the very not-so-distant future when this thing is liberated, is live, is connected to the internet.
It's connected to news feeds, it's connected to weather, it's connected to contextual information, and now it's also a production platform with five nines of availability and a pricing model that is somewhat locked in, so people can now build an economy around these technologies. We're not there yet, but we're not far off it either. So that's the point to watch: when this thing goes live and is connected to the internet, this is going to explode in a way that, you know, maybe people might say it's already exploded, and that's actually not the case. This is just the early stages. I do think the use of OpenAI, the use of AI in general, will be as profound as the rollout of the internet, the rollout of the web. It is a profound shift, and these things also are not isolated. The semantic web, the web that describes itself, the distributed web, the Web3 story, all these things are actually connected together, and we'll start to see some really interesting things. We also will start to see businesses that are just an AI, and in some areas we already see countries recognizing AI as corporations or as persons. Estonia recognized AI as a person, so you could actually have a business that's not owned by a person, or maybe it's owned by a person somewhere as a board, but it's run as an independent, autonomous enterprise providing a variety of products. It could provide software engineering products, knowledge management products, stock and financial advice products, insurance products, and there is actually no human involved other than maybe a shareholder or an oversight board. So these things are coming quite quickly. They already exist, but I think they'll start to mature rapidly. And in that context, AI has a lot of precedent to learn from when it comes to corporate behavior. How do you make an AI that is a good corporate citizen? How do you make it so it's not going to be reading Barbarians at the Gate and then learning from there, so it's not going to be taking the Exxon model and going:
All right, let's do that. Then there's a whole podcast that we should do at some point around the future of regtech and the future of regulation and oversight, because I think the future is actually AIs that operate where, you know, they literally have a set of agents that run across a bank or an insurance company, and they're looking for particular events and transactions and profiles, and they're continuously profiling the state of the financial services organization in real time. They're looking for risks, and they're looking for areas where potentially we can optimize and improve. Yeah, that's the ultimate future. We're not there yet, but it's coming, and if you're working in the reg area, I think it will become clear over time, because the amount of data that the regulators are requiring continues to grow: from quarterly, to monthly, to continuous data feeds of summary information, to ultimately a shift to just "give us access, our AIs are going to crawl through your core systems and understand any risks, and if we see something, we'll let you guys know first so you can fix it. But if you don't fix it, guess what? We're going to raise some red flags, or maybe we'll throttle your business a little bit in some way to attempt to manage those risks." So definitely, I think AIs and agents will act as sort of guardrails as well, over what sorts of transactions are acceptable or what sorts of behaviors are acceptable. Thank you, Simon. It's always lovely chatting about the potential of AI with you.
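The continuous supervisory agents Simon describes can be sketched as a toy rule-based monitor. Everything below is invented for illustration: the rule names, thresholds, watchlist and transaction fields are hypothetical, not any regulator's actual scheme.

```python
from dataclasses import dataclass

@dataclass
class Txn:
    account: str
    amount: float
    country: str

# Illustrative parameters only; a real supervisory agent would load
# these from a regulator-maintained rulebook.
LARGE_AMOUNT = 10_000.0
WATCHLIST = {"XX"}  # hypothetical high-risk country codes

def flag(txn: Txn) -> list[str]:
    """Return the names of every rule this transaction trips."""
    hits = []
    if txn.amount >= LARGE_AMOUNT:
        hits.append("large-transaction")
    if txn.country in WATCHLIST:
        hits.append("watchlist-country")
    return hits

def profile(feed: list[Txn]) -> dict[str, list[str]]:
    """One pass over a batch of transactions, keeping only accounts
    that tripped a rule. A live agent would run this continuously
    over a streaming feed and alert the firm first, as Simon says."""
    return {t.account: hits for t in feed if (hits := flag(t))}
```

Real versions of this idea layer statistical and learned models on top of such hard rules, but the "agents watching a feed and raising flags" shape is the same.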
Thank you very much. Great to be back. If Simon is a futurist when it comes to AI, our next guest is using it in her everyday work life. Sandra Ernst is a German expat with a master's in Arabic and Islamic finance. This led her to Malaysia, where she worked in the Sharia-compliant private equity real estate sector, and she's been in Singapore with her family since 2016, where she now works with Igloo Insure, a company founded by former Grab CTO Wei Zhu. Igloo is a platform that makes digital insurance products. Welcome, Sandra.
Thank you for having me. Now we can start talking about Igloo. Can you tell us about it, what it does, and how it currently uses AI to be a competitive player in the industry? So Igloo is a Singapore-headquartered, Southeast Asia-focused, full-stack insurtech company with the aim to make insurance simple, affordable and painless. The way we use AI: I think as a tech company we have been exploring it ever since we started, to replace human processes completely or to complement them, which means that we use AI from the claims journey, for example, to the underwriting journey and also the risk journey. To give a couple of specific examples of what that means to us: when I speak of the claims journey, it's basically making the claims process much more efficient and automated. So we have products there where, instead of a team member looking at the claims and assessing them, there would be a rule-based AI approach that would assess the claim based on parameters that we define and then make a decision on that. So that is one of the use cases. When I speak of risk management or underwriting, that is another one: we basically assess the risk of an individual policyholder based on various factors, and then we can price in the individual risk. And we have also tried out AI usage where, I think, one of the typical examples is car insurance. In the past, you would insure the driver based on the risk according to their age and some other factors you looked into. Well, now the new use cases are looking at the driving behaviour of that driver: where are they driving, how fast are they driving, what time of day are they driving, to assess the risk there as well, based on that. Have you looked at anything incorporating any kind of climate change angle into your insurtech?
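The rule-based claims assessment Sandra describes can be sketched as a small decision function. The specific parameters here (a policy-active check, a coverage limit acting as an auto-approve ceiling) are assumptions made up for illustration, not Igloo's actual rules.

```python
from dataclasses import dataclass

@dataclass
class Claim:
    policy_active: bool
    amount: float
    coverage_limit: float

def assess(claim: Claim) -> str:
    """Decide a claim against predefined parameters; anything the
    rules cannot settle is routed to a human assessor."""
    if not claim.policy_active:
        return "reject"          # no cover in force
    if claim.amount <= claim.coverage_limit:
        return "approve"         # within limit: pay automatically
    return "refer-to-human"      # exceeds limit: manual review
```

The design point is the escape hatch: the rules auto-decide the easy majority of claims, and everything ambiguous falls through to a person.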
Really interesting question, because we actually just launched a weather insurance product in Vietnam which sits on the blockchain, and it's basically a completely automated insurance product where, from the moment the farmers sign up for it, the whole automation and rule-based approach and smart contract journey starts. So we are integrated into weather monitoring stations that we pull the data from, and then at specific periods we assess the data, and if the rainfall is below or above a certain level, we automatically make the decision on the claim, and the claim would be paid out. So that's a great use case as well, right? It's around automated decision making, and you simplify the whole end-to-end process for a product. There have been smaller projects where people were testing it, but it's still a very, very new concept, and I think it's really one that myself and a big part of the team are really excited about. And let's talk about the new AI kid on the block, and that's ChatGPT. Have you been playing around with this within your business?
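The trigger logic Sandra describes (assess the weather-station reading each period; pay automatically if it crosses a threshold) reduces to a few lines. The band and payout figures below are invented; a real product would set them per crop, region and policy.

```python
def parametric_payout(rainfall_mm: float,
                      drought_below: float = 50.0,
                      flood_above: float = 400.0,
                      payout: float = 1_000.0) -> float:
    """Pay automatically when the measured rainfall for the period
    falls outside the insured band. There is no claims form: the
    weather-station reading itself triggers the payout."""
    if rainfall_mm < drought_below or rainfall_mm > flood_above:
        return payout
    return 0.0
```

This is the same parametric pattern Simon described earlier for frost and bushfire smoke: the policy pays on a measured parameter, not on a loss assessment.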
So I love it. I have been playing around with it for a while, and I don't know how much you have tried it, but it's beautiful in the sense that it's like having a conversation with an expert, while also being aware of where the shortcomings or the limitations are. I think where it's amazing to use, and where I've used it for my day-to-day work as well, is for getting specific information, for getting started. When I work on a policy or a process and I just don't want to do that initial work to put down the structure and the initial thoughts, it's fantastic for that. I've also explored using it in other areas. Like yesterday at an event, I talked to people who are actually using it for their compliance, where they are starting to integrate natural language processing technologies like ChatGPT into their compliance portals, or everything that is being stored in Confluence, for example. Then, when you have a question about compliance (can we do this, can we not do that, what is the rule around that?), instead of having to Google and find exactly the key terms that would be used in compliance, you can use the equivalent of ChatGPT and just ask a question the way you would ask a colleague. And it would recognize: right, OK, you're not using one-to-one the same wording, but you're referring to that. And instead of pointing you to section 10.5A.3 or so and saying read it yourself, it would say, you know, this is the response to your question, right?
Like, you can do this or you cannot do that. So a massive improvement, or potential improvement, there as well for work, I think. But it's around helping us to be better at writing, at initial policy creation, at compliance with the policies that we already have in place. I think where the shortcomings still are, at least the ones that I'm seeing, are legal and compliance questions. I have asked things where I knew exactly that the answer was different, and it very confidently gave me the wrong answer. And I tried to find out where it got that wrong answer from, and I mentioned it. I was like, you know, from what I know, this is the answer. And it was like, yeah, you are right, sorry for giving you the wrong information earlier. But I couldn't understand why it would have given me the wrong information in the first place. And I think that's where we need to be very careful: it's great for initial research, but everything still needs to be verified. Yeah, for things like compliance and legal issues, and accounting issues, I imagine, as well. You know, it's trawling the entire internet; it's not trawling just a legal database or a medical database or an accounting database. And so there is going to be misinformation out there that ChatGPT picks up and, as you say, very confidently says: this is your answer. How are you reckoning at the moment with the time limit? Because, of course, the data is not up to date; the learnings are not up to date. They stop in 2021, I believe. And I imagine with compliance, this is an area where you also need very up-to-date information. So have you run into that roadblock at all yet?
Yeah, so I would never use it to try to get up-to-date information. I would really use it for simpler tasks there. And I think where I see a very big potential advantage is once you have defined your compliance policies, once you already know them, for internal usage and for making compliance a bit more natural, more integrated into processes. Right, like it's one thing to have policies drafted, but then no one ever looks at them, and you might have inconsistencies across policies that you can't detect unless you have a very smart language processing tool that helps you identify that. There have been technologies like that before, but looking at what ChatGPT can do, it's just a different level, and it's so much more accessible, right? Let's look at the blue-sky possibilities for ChatGPT 2.0, 5.0 and all of its successors within the insurtech and fintech industries. What do you imagine it could do for your industry? So I think a lot of it, and also because these are the use cases I'm currently looking into, is that operational efficiency of nicer, better, more standardized customer servicing. Right, like if you have a hundred different customer servicing agents, depending on how strong your training is and how long the people have been with the company, I'm sure they would handle things slightly differently. So you might not have the same level of brand recognition, of service level, as you can if you use technology that helps you with that. It could also be identifying, right?
Like, where are cases handled in a way that's not in line with how we want to act as a company? Customers getting frustrated, people not handling escalation cases in the right way. That's one. I think another one is really the risk-based pricing for insurance companies, where, based on the information that you receive from customers, you can offer them a very specific premium rate. Now, that is a tricky one, though, because one of the key concepts of insurance is pooling, risk pooling, right? You take a lot of different people in one pool, in one bucket, and you know that some of them, or most of them, will never claim at all, and then others will claim, and overall it works out. You still make a bit of a profit, the customer is insured, they have that security, and that works. But if you price everyone according to their individual risk, what that would do is, A, you might not have that pooling effect anymore, so you actually just price exactly the chance that this individual person will realize that risk, and then you price people out of insurance, right? You have people that fall into certain risk groups, and they might not actually have access to insurance anymore, based on certain data points that apply to a group they're in but might not actually apply to them. They might never ever develop a certain disease, only because the group they belong to is, say, slightly overweight, or from a certain ethnic group. There's racial bias in data, there's age bias in data, there's lifestyle bias in data, so you have to be very careful with that. Do you think that that sort of evolution of the technology could lead eventually to private companies being mandated to cover people, whether they're insurable or not? It could be, depending on the mandate a company has as well, right?
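As an aside, Sandra's point about pooling versus fully individual pricing can be made concrete with a toy premium calculation; the probabilities and loss amounts below are invented.

```python
def pooled_premium(claim_probs: list[float], loss: float) -> float:
    """Everyone pays the pool's average expected loss."""
    return loss * sum(claim_probs) / len(claim_probs)

def individual_premium(claim_prob: float, loss: float) -> float:
    """Each person pays exactly their own expected loss."""
    return loss * claim_prob

# Three policyholders with different estimated claim probabilities
# against a 10,000 insured loss.
probs = [0.01, 0.02, 0.12]
loss = 10_000.0

# Pooled: everyone pays the same 500. Individually priced: the
# highest-risk person pays 1,200 and may be priced out entirely,
# which is exactly the effect Sandra warns about.
```

The toy numbers also show why biased risk estimates matter: if the 0.12 probability comes from group-level data rather than the individual, that person is overcharged for a risk they may never realize.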
Like, I started my career in banking in Germany, and there, actually, some of the banks had a mandate from the government to open bank accounts for anyone. The private banks didn't have that mandate; they would be like, you know, you're not a customer we know we will ever make money with, so we refuse to open a bank account for you. But there were banks that did, because access to banking services is a fundamental right, and it's so important in any country for people to have access to the formal banking or finance sector. So, yes, definitely, I can see that. I can also see that instead of applying a detect-and-repair kind of approach, where we basically say we detect a certain risk and we just exclude you, we don't want you unless you change some of these factors, we could move to a predict-and-prevent approach. There we say: based on your profile, you do fall into that higher-risk category, but we give you suggestions on what you could change. And, you know, if you have a higher risk of developing certain types of chronic diseases, we say, if you make these lifestyle changes, we insure you, we give you the same coverage, we give you the same premium, but we want to help you reduce your chance of developing these diseases. And then you would still have those people included, while encouraging certain changes. It's a tricky one, right? Because where do you draw the line, and how much can you tell people what to do? But that could be an approach as well that I think is interesting: to continue having the pooling effect, to continue not pricing out the customers that are most in need of insurance, but still benefit from the advantages of AI and data analytics. Continuing with this idea of the potential problems that AI and natural language processing programs might create: do you think compliance issues could be out there? And could they be overcome to make conversational AI work for banks and insurers?
Yeah, I think compliance issues, and general risk issues, are what I'm concerned about when looking at AI. You know, how does AI make its decisions, right? And can we understand that? It sounds great when you look at some of the potential use cases, but there are regulators out there, there are industry groups, that are defining the standards AI should fulfil in order to be properly applied. I just looked at the MAS guidelines yesterday, and one part was about transparency in terms of how AI makes a decision, so, you know, whether there are any biases that you don't actually want to include in the decision-making process. In order for AI to work properly for your organization, you also have to define what ethics means for you, what level of empathy you would want AI to apply, and that could be completely different across organizations. And then the third one is accountability, and the fourth one is fairness in the approach. They talk about what that means for individual companies, or for companies in the financial space. I think that's an important one, right, defining this, because if you implement AI and there's no one who checks on it, it can get out of control: it can become racist, it can become sexist. I'm going to take you back to the ChatGPT question around customer service. Could ChatGPT be an assistant to a new call center staff member? And would that eventually change who you hire? Absolutely.
I think for certain use cases it could completely replace the need for a human. And then there are use cases where, again, you would want to simplify the life of your CS agents by giving them that access to a database: hey, what can we do? How would you assess that case? Can you provide a template on how to handle that escalation more nicely? Or it could be that ChatGPT, or something similar, would just immediately draft the response, and you would review it and then send it out. It definitely means hiring people who are more comfortable addressing more complex use cases, and who also understand, at a basic level, what their responsibility is when using AI. They can't blindly use it and accept everything that AI suggests. They still have to have a basic knowledge of what's going on. They need to understand when there might be differences from our policies, so when AI is potentially pulling the wrong data, right? When markets or regulations change, you need to update the systems. So definitely, for people who are blindly following SOPs, I think this has been coming for a long time. It started when I began working 20 years ago, and people were worried about ATMs replacing them, because they couldn't count the cash out in front of the customer anymore, right?
That skill is just not really needed anymore, but there will be other skills that will be needed in order to apply it correctly. Thank you so much for joining us today, Sandra. I'm Rachel Williamson, and you've been listening to Breaking Banks Asia. That's it for another week of the world's number one fintech podcast and radio show, Breaking Banks. This episode was produced by our US-based production team, including producer Lisbeth Severance and audio engineer Kevin Hirsham, with social media support from Carlo Navarro and Sylvie Johnson. If you liked this episode, don't forget to tweet it out or post it on your favorite social media, or leave us a five-star review on iTunes, Google Podcasts, Facebook or wherever it is that you listen to our show. Those actions help other people find our podcast, and in return, that helps us build an audience that can be supported by sponsorship, so we can continue to provide you with our award-winning content every week. Thanks again for joining us. We'll see you on Breaking Banks next week.