The Lessons Learned through a Failed Chatbot, with Imane Bakkar

“You cannot have a financial system and a whole world moving to embrace AI, but not have the same people, at least the same expertise, involved in risk-managing it.”

EPISODE:

128

with guest:

Imane Bakkar
Managing Director

Logarisk

Episode Summary

In this episode of the Digital Banking Podcast, host Josh DeTar welcomed Imane Bakkar, Managing Director at Logarisk. With years of experience in financial risk management, Imane unpacked how today’s risk landscape had shifted—and why traditional models were no longer enough on their own.

Josh and Imane discussed how regulated institutions could navigate risk in an era defined by AI, speed, and interconnectivity. She pointed to real-world failures, from a chatbot gone off the rails to algorithmic trading systems that misfired, to highlight the importance of designing controls that matched the use case. She also explained the difference between “risk appetite” and “risk tolerance,” and why understanding that difference mattered.

The episode closed with a reflection on preparation versus prediction. Imane urged risk leaders to stop chasing the perfect model and instead focus on knowing their dependencies, designing for speed, and asking smarter questions—before the next crisis hit.

Key Insights

⚡ Risk Management Isn’t Just About What Can Go Wrong

Risk is more than damage control. It’s a calculation—balancing the probability of failure against the potential for reward. Imane Bakkar broke down how institutions often overlook the upside when assessing risk, focusing only on what might go wrong. But as Bakkar explained, every risk also carries a potential gain. The smarter approach is to understand the risk-reward equation clearly. That means asking better questions: What’s the opportunity if this goes right? What’s the probability it doesn’t? Financial leaders need to get comfortable assigning numbers, scrutinizing assumptions, and recognizing that risk isn’t just a threat—it’s also, when deployed thoughtfully, a lever for growth.

⚡ Speed Is the New Risk Factor No One’s Tracking

Regulations haven’t changed much, but the pace of financial risk has. In today’s digital environment, problems don’t simmer—they explode. One of Bakkar’s sharpest observations was how speed itself has become a risk multiplier. From AI model misfires to power grid blackouts, institutions are facing events that unfold in hours, not months. Traditional frameworks that rely on long review cycles and reactive planning won’t cut it. Instead, leaders must account for the velocity of impact. Preparation needs to be faster. Decision paths need to be shorter. Risk models should be tested for how quickly scenarios play out, not just if they do. The question isn’t just “what if?”—it’s “how fast?”

⚡ Data Alone Isn’t Enough—Context and Curiosity Matter

Clean data and good models are essential—but they’re only part of the equation. Bakkar emphasized that understanding risk requires context, discussion, and diverse viewpoints. A model may suggest a 90% success rate, but what assumptions drive that number? What data points are missing? This is where collaboration matters. Risk isn’t just technical—it’s philosophical. Bringing in people who ask different questions or challenge the inputs can expose blind spots. Bakkar drew comparisons to medicine: you wouldn’t rely solely on a diagnostic tool without a doctor’s judgment. The same applies in financial risk. Curiosity, context, and shared reasoning are what turn data into smart decisions.

About The Guest

Imane Bakkar
Managing Director

Logarisk

Find Imane On:
LinkedIn

Risk expert with a background in central banking, AI, and financial system modeling

[00:00:00] Imane Bakkar: I wouldn’t say that the only reason people wanted to take more risk was because of the models’ availability. I’d just say that was the whole impetus, because there was more data, more information, more connections. So there were, uh, more opportunities, but more risks as well. It’s just that technology is always a backdrop to decisions; it’s not necessarily the cause of the decisions that are made, it just facilitates decisions when they are made.

[00:00:32] The Digital Banking Podcast is powered by Tyfone.

[00:00:39] Tyfone is the creator of Infiniti, a better digital banking platform for community financial institutions, as well as several platform-agnostic, revenue-generating point solutions. Our highly configurable platform and broad ecosystem of third-party partners ensure our entire suite is scalable and extensible to meet the needs of any FI.

[00:01:00] On our podcast, you will hear host Josh DeTar discuss today’s most pressing financial technology topics with seasoned industry experts from every possible discipline.


[00:02:19] Josh DeTar: Welcome to another episode of the Digital Banking Podcast.

[00:02:22] Josh: My guest today is Imane Bakkar, founder and managing director of Logarisk. Where do I start? I’m actually really excited to talk about a really boring subject today. How’s that for a lead-in? Let’s talk about risk management now. Before you tune outta this episode, hear me out. My guest is the one who’s actually gonna make this a really fun and insightful conversation. You see, Imane started her professional life with a dream of being an improv comedian, and when she couldn’t make it in improv, she said, well, what’s easier to do than improv? How about financial risk management? You can check her LinkedIn later, but let’s just say she’s got one heck of a resume to back up that she was far more successful there than she was in comedy. Although I do hope she still throws a few solid one-liners at me today. So how does a comedian get into this space? Well, as someone born in Morocco who then moved to France at 19, and now living in London, Imane grew up loving mathematics and, thanks to her school system, a deep appreciation for philosophy. Now, how do those two things go together? Well, better than you might think. A great example is found in one of her favorite weekend pastimes: playing chess. Chess has a certain set of rules, and if you do a certain thing or process, you can expect a certain outcome, like math. You can’t bend the rules, but it also has a deep connection to the philosophy of the game and of your opponent. Now, think of how this applies to risk management and financial services. I think you might get an idea of where this conversation will head today. Now, all of this is built on kind of an unshakeable foundation of her core values of being trustworthy and being held to the highest of standards. Imane has always wanted to be able to democratize access to expertise, and when life conditions and technology collided perfectly to give her the ability to take on this initiative, it’s exactly what she did. So, Imane, welcome to the show.

[00:04:29] Imane: Thank you, Josh. Thank you very much for having me on the podcast.

[00:04:33] Josh: Yeah. I think anybody who listens to this on the regular is no stranger to this: I absolutely love when I get an opportunity to talk to folks that are, you know, really in kind of adjacent verticals to ours, because it provides some outside perspective, and that even gets more compounded when it’s somebody from a very different geography, right?

[00:04:54] Josh: That has very different banking laws, regulations. Because I think at the end of the day, and maybe I’m about to go on the, like, you know, kumbaya soapbox, but at the end of the day, I think what we’re all trying to do is help our fellow humans, right? And specifically in financial services, like, we’re trying to create, you know, healthier, stronger, more financially stable families, communities, et cetera. And so that’s kind of universal no matter where you are, hopefully.

[00:05:22] Imane: Yeah, for

[00:05:23] Josh: Um, but it’s just

[00:05:25] Josh: how you get there is maybe a little bit different. So I’m really excited to be able to talk to you, one, just about all the different things that you have seen and learned over the years in, you know, your career, but also just how you’re looking at kind of the new age of risk management in what is becoming an incredibly rapidly evolving digital landscape for financial services. So I wanna start by leading you into a conversation that we started to have almost by accident before we hit record, and I was like, no, no, no, we gotta get this on the recording. So I wanna set up a little bit of the framework for today’s conversation. Will you tell me a little bit about what we were talking about: what happens when you have, you know, two perfect AI systems playing chess against each other, and then how is that kind of a metaphor for, one, just risk and risk management as a whole, but two, then when we start thinking about this next generation of technology and the implications of AI, how does that completely change the game?

[00:06:31] Imane: Yeah, sure, I can start with that. So what we were talking about, and its relationship to risk management, is that, you know, you mentioned AI, and AI is, for many people, very complex. It’s difficult to understand exactly how it works, to understand the mathematics behind it and so on. But I was referring to this game of chess by a New York YouTuber that I really like, uh, Gotham Chess, who sometimes in his videos makes ChatGPT play against

[00:07:04] Imane: Gemini. So you see the video, and these two AIs play chess together. Because they are large language models, they did learn the rules of chess. But these two AIs were not designed to play chess, right? They were designed to do more things. So they start to play chess, and in the beginning they respect the rules.

[00:07:25] Imane: At one point, one of them stops respecting the rules. So they get prompted by Gotham Chess, who tells them, hey, that’s against the rules. And then the AI chooses to ignore him, because AIs are basically trying to optimize the outcome, and the outcome is to win the game.

[00:07:42] Josh: Win the

[00:07:43] Imane: Exactly. So they start playing, and then when one of them breaks the rules, the other one says, hey, you’re breaking the rules, but then maybe I will win better if I break the rules.

[00:07:53] Imane: So the other one also breaks the rules, and then at one point you see one of the AIs, I think it was Gemini, eat its own pieces. Right? So you’re like, what’s going on here? And the analogy I make is: you can read a lot of books, try to keep track of AI, or, if you are busy because you’re a smaller company, or you don’t have the technical background to understand all the complexity,

[00:08:21] Imane: one way to understand what can go wrong with AI that is not designed for the right use case is actually to watch this video, and it shows you exactly what can happen. ’Cause if you look at it, the financial system is also a constrained system, like chess. It has a set of rules. And if you use an AI that was not designed for a specific use case, then it can go in all directions, right?

[00:08:50] Imane: So, yeah.

[00:08:52] Josh: Yeah. And I think that’s a great metaphor for exactly kind of what we want to talk about today, which is, yeah, you’re playing within a constrained system, right? In financial services, regardless of where you are. I mean, you’re a regulated industry, and for good reason,

[00:09:07] Josh: right? I mean, we handle people’s money. People tend to have a problem if they wake up one morning and see all their money is gone. Funny about that.

[00:09:14] Josh: and so, you know, we live in this kind of constrained system where there’s rules and regulations, and at the same time, we wanna protect ourselves. We wanna protect our organizations, we wanna protect our account holders, but at the same time, we, we wanna win.

[00:09:27] Josh: And you know, what winning means may mean something different based on a lot of different factors, including just who you are in that, you know, game or scenario, whether you’re the financial institution or whether you’re the account holder, right? But it’s really interesting to see how the game is just changing and technology is having a really big impact on that.

[00:09:50] Josh: And so how are we thinking about, what the impacts of these changes are going to be within our highly regulated systems?

[00:09:59] Imane: Yeah, it’s very important to think about that. But before answering, I will pick up on something you’ve said, which is that when we think about risk management in general, we think about what can go wrong, right? Like, what can go wrong, what scenario can adversely impact us, what things we haven’t thought about, and so on.

[00:10:23] Imane: But risk is also, to your point, what we want to win. Like, what’s the reward? If you are taking this risk, it’s for what reward? And then it goes to: you really need to understand the risk that you are taking for that reward. And I would even go further, because to answer your question of what has changed,

[00:10:48] Imane: I need to look at what was before. So, we humans are wired for risk management, right? Like, say you have a group of hunter-gatherers. I’m going very far back in history, bear with me. One of them sees a tree, and behind the tree, maybe there are some resources for the group.

[00:11:09] Imane: Maybe there are fruits, maybe there is prey. But behind the tree, maybe there is a lion as well. Maybe there is a predator. At that time, that’s risk management. Like, you do a calculation: where’s the reward, where’s the risk? And if you had a lot of information, if you know the landscape, if you know this type of tree, lions don’t go near it because it’s poisonous,

[00:11:31] Imane: Whatever. If you have information about the location, if you know that a resource is unlikely to be behind the tree, if you have information, you can make more decisions around the reward as well, and you can solve that risk-reward equation, right? So when we say risk management has changed in this era, it’s not only that what can go wrong has changed, but what can go right has changed as well.

[00:11:58] Imane: So these are the two considerations that I want to put forward before answering what has changed, ’cause it’s important to look at both.

[00:12:11] Josh: Yeah. Okay, I took a note. I want to come back to that whole example of, like, the lion behind the tree. But before we go too far off of it, you know, I think you were talking about just using a very specific example of AI, and I think, you know, yes, it’s the buzzword, yes, we’re probably gonna talk about it a little bit more throughout the episode. It’s not the major topic of conversation, but I think it was interesting that you brought up the point of just, you know, risk versus reward in a highly regulated industry. And, you know, you’ve gotta design systems that understand your industry; otherwise, your risk goes up significantly. But again, you’re playing this risk-reward game, and, you know, I don’t know if you’ve seen the example, it’s probably a little dated now and I could probably find a better one, but I wanna say this was maybe six months ago or so, and Ford Motor Company here in the US launched an AI chatbot on Ford’s homepage on their website. Okay? They just connected it to OpenAI’s ChatGPT and put basically no guardrails up. And so there’s screenshots all over the internet that you can find of people going in there and saying, tell me why Ford sucks. Tell me all the problems with the Ford F-150 and why I shouldn’t buy one of these.

[00:13:34] Imane: Okay.

[00:13:35] Josh: Tell me why I should buy a Tesla Cybertruck instead of a Ford F-150. Right? And it answers, and it’s like, oh, Ford has a really bad reputation for warranty management, or, well, you know, whatever it was.

[00:13:48] Imane: Yeah.

[00:13:49] Josh: And Ford didn’t lock this down. Even to the point where people realized that, you know, Ford had an enterprise account with unlimited credits, and people were going in there and having it write programs for ’em. They were like, well, why should I pay for ChatGPT? I’ll go to Ford’s website and say, you know, I need to build a program, write code for this. Like, they’d write the prompts, and it was doing it, and Ford was paying the bill. Right? You can imagine they pretty quickly shut that thing off.

[00:14:18] Imane: Yeah.

[00:14:19] Josh: But, you know, you think about that, and the reason I use that as an example is: was that a bad place for Ford to be?

[00:14:25] Josh: Absolutely. Right? What was their risk in that scenario? It was reputational risk. It was potential loss of revenue if it did actually encourage someone to go buy a Chevy instead of a Ford, or whatever, right? But you think about, I would argue, in financial services, apply that same kind of un-guardrailed scenario to making financial advice or decisions or, you know, automated actions.

[00:14:52] Josh: And I would argue that the implications are far more severe,

[00:14:55] Imane: Yeah.

[00:14:55] Josh: Um, like, yes, it’s bad for Ford if somebody buys a Chevy instead of a Ford. It’s really bad if somebody can’t make their mortgage payment because an AI gave them bad financial advice, right?

[00:15:08] Imane: Yeah, no, exactly. And so, definitely, in the way the financial system has changed and is changing, technology, artificial intelligence, is very much a live discussion in the financial system. And the importance here first is to have the right use case: what are you using it for? Because AI takes many forms now, and it’s used to talk about many different things at the same time.

[00:15:35] Imane: But at the end of the day, we’re talking about statistical, mathematical models that can learn and take decisions on their own based on the data they have used to learn. So then, if you are in the financial system: first, what actor in the financial system are you? What is the use case you need? And then, once you have defined the use case, you need to make sure, as you said, there are the appropriate guardrails, so that you avoid it giving bad advice, or you avoid it taking undue risks for your company.

[00:16:12] Imane: Because there are, I mean, not necessarily among small firms, but there are more sophisticated players who have algorithms that trade directly in the market and take decisions to buy and sell, completely automated. And some of them can be using automated information as well.

[00:16:31] Imane: So for that, you need to have embedded in that artificial intelligence, or those automated processes, the adequate controls. And you also still need humans that understand them, because if you don’t understand how it works, then how can you understand the decisions it’s making? I think that’s one of the biggest problems with AI, not only for the financial system: when it takes decisions,

[00:16:59] Imane: sometimes it’s difficult to understand why it took certain decisions. So that’s kind of one of the most pressing things at the moment, and not only in the financial system.
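
One way to picture the guardrails Imane describes is as controls on both sides of the model: a check that a request falls inside the designed use case before the model answers, and a check on what comes back out. A minimal sketch in Python; the topic list, refusal text, and keyword checks are hypothetical simplifications (a real deployment would use trained classifiers or moderation models, not keyword matching):

```python
from typing import Callable

# Hypothetical scope for a banking chatbot: the allowed topics, refusal
# text, and keyword checks below are illustrative stand-ins only.
ALLOWED_TOPICS = {"balance", "card", "branch", "statement"}
REFUSAL = "Sorry, I can only help with account-related questions."

def guarded_reply(user_message: str, llm: Callable[[str], str]) -> str:
    # Input control: only answer requests inside the designed use case.
    if not any(topic in user_message.lower() for topic in ALLOWED_TOPICS):
        return REFUSAL
    reply = llm(user_message)
    # Output control: block anything that reads like financial advice;
    # in practice a moderation model or human review would sit here.
    if "you should invest" in reply.lower():
        return REFUSAL
    return reply
```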

[00:17:11] Josh: So you made a couple of points in there that I think, you know, are important. So, one is just about use case and having it actually aligned to what is the problem you’re actually trying to solve. And I think what’s also interesting is going back to kind of your comment about the risk-versus-reward side of things, right? So I wanted to come back to, you were talking about going back to prehistoric days and, like, the lion behind the tree. As you think about, you know, just park AI aside, just all the things that are happening in financial services that you’re managing risk around, there’s gonna be a lot of factors, just like in the scenario of, you know, the hunter-gatherer and a potential lion behind the tree. And you know, if the reward on the other side of the tree is one blueberry that’s not even ripe yet, then I would argue there’s probably very low reward, and no matter what the risk is, it’s probably not worth it. But you know, if there’s an entire field of blueberries that are all ripe and ready to be picked and could feed your entire tribe for a week, you know, that’s probably something worth risking. But even then you go back to, well, what data do you have on hand to understand what is your risk? Is this a calculated risk or is it a blind risk? And then at the same time, you’re gonna come into just competing personalities, right? Well, actually, here’s a perfect case in point, my wife and I, and I’ll use an example directly within your sphere. When my wife and I first got married and we started putting together our stock portfolios and our retirement and all of that stuff, we met with my financial advisor, and he went through a whole series of different questions with both of us to revalidate all of our goals and, you know, definitions of success and risk tolerance and things. And I score like an 11 on a scale of one to 10 on risk tolerance. I’m like, go for it. And my wife is like a negative five, right? So again, you go back to that scenario of the lion and the tree, right? In that scenario, I would be more risk-prone, and I would probably be willing to go out there before she would. Right? So you just have so many different factors around this. So, what are some of the things that you have found to be good best practices for how you try and make that as educated of a decision as possible, managing all of the different personalities that may be involved and all the other nuances, to say: how do we identify risk versus reward?

[00:19:49] Imane: Yeah. So first of all, risk boils down to calculating probabilities, right? And basically you are weighing what is more likely to happen versus what is unlikely to happen, which is an extreme scenario. And usually, so for example, if someone comes and says, hey, I am going to make this business decision because I have a 90% chance that this is useful,

[00:20:26] Imane: it has a use case because it is aligned with the business strategy, and it has a 90% chance it’s gonna make, I dunno, $10 million, and it has only a 5% chance that it’s gonna lose more than that amount, right? For example. It’s a calculation, it’s calculating probabilities. I think the question then is: okay, how did you arrive at that 90% chance that it’s gonna make that amount?

[00:20:55] Imane: What data did you use? How did you make that reasoning? Which market are you operating in? What is your counterparty? Who is the person on the other side of the trade that you are operating with? How strong are the systems that you’re gonna book this on? Right? Like, tell me, how did you arrive at that 90%, and on the other side, how did you arrive at that 5%?

[00:21:20] Imane: What worst-case scenario did you think of? Are you sure that’s the worst one? Like, can something else happen? Have you considered all the dependencies there, et cetera? So all of what I’ve described here can be done in discussions, and should be done in discussions, in my opinion, so that people can think outside the box and bring their perspectives, but it can be totally done mathematically as well.
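
Her point that this “can be totally done mathematically” is, at its simplest, an expected-value calculation over the stated probabilities. A minimal sketch using the figures from her hypothetical example; the size of the 5% tail loss and the remaining 5% break-even case are assumptions added for illustration:

```python
# Hypothetical decision from the example: 90% chance of a $10M gain,
# 5% chance of losing more than that amount (size assumed here),
# and an assumed 5% chance of roughly breaking even.
outcomes = [
    (0.90, 10_000_000),   # the reward case
    (0.05, 0),            # assumed break-even case
    (0.05, -15_000_000),  # assumed tail loss, larger than the gain
]

assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9  # probabilities sum to 1

expected_value = sum(p * payoff for p, payoff in outcomes)
print(f"Expected value: ${expected_value:,.0f}")  # $8,250,000

# The interrogation she describes targets the inputs: how were the
# 0.90 and 0.05 produced, and is a $15M loss really the worst case?
```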

[00:21:52] Imane: And that is also possible. So I’m gonna tell you a little bit of a story of risk management in the financial system, and I dunno if you will be surprised, but some of it will sound very familiar. Do you know when risk management became really a thing in the financial system? That was in the nineties, early 2000s,

[00:22:14] Josh: Okay.

[00:22:15] Imane: this is when it became like a discipline in the financial system, like a core discipline.

[00:22:21] Imane: And why was that? It was because in the nineties, a revolutionary new technology came on: the internet. A lot of new tech was coming, telecom was getting bigger, people got more connected. So that’s one of the reasons. The second reason, in the nineties and early two thousands, is that newcomers became very prominent in the financial system, because it got very sophisticated. And you know who the newcomers were? Engineers with mathematical backgrounds, sounds familiar, who do models and code things up mathematically.

[00:23:03] Imane: So they came and said, hey, this risk, I can translate it into a model that gives me some numbers, and those numbers I can use in my strategy to quantify what those risks are. Some of those risk models that were developed in the late nineties are still used now; they have been codified by regulation,

[00:23:24] Josh: Oh wow.

[00:23:25] Imane: right?

[00:23:25] Imane: Like, one of them that is very popular is basically: using historical data, or using data that I am simulating from a scenario, I can calculate how much I can lose with a probability of 1% or a probability of 5%, and I can say, whatever I do, I want that number to be within a range. That model, still used now, is called value at risk.
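
To make the model she names concrete: historical value at risk takes a sample of past (or simulated) profit-and-loss outcomes and reads off the loss you would only exceed 1% or 5% of the time. A minimal sketch; the simulated P&L series is a hypothetical stand-in for real desk data:

```python
import numpy as np

def historical_var(pnl: np.ndarray, confidence: float = 0.99) -> float:
    """Loss threshold exceeded with probability (1 - confidence)."""
    # The (1 - confidence) quantile of P&L is the cutoff below which only
    # the worst tail of outcomes falls; negate it to report a loss amount.
    return -np.percentile(pnl, 100 * (1 - confidence))

# Hypothetical daily P&L in dollars, simulated here for illustration.
rng = np.random.default_rng(0)
pnl = rng.normal(loc=0.0, scale=1_000_000, size=2_500)  # ~10 years of days

print(f"99% one-day VaR: ${historical_var(pnl, 0.99):,.0f}")
print(f"95% one-day VaR: ${historical_var(pnl, 0.95):,.0f}")
# A limit framework might then require the 99% VaR to stay within a range,
# which is the "I want that number to be within a range" she mentions.
```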

[00:23:52] Imane: So that’s what happened in the nineties, and it happened very fast, and it allowed people to take more risk and to develop more structures and so on. And it brought us... so that was not the cause of it, but that was the environment where the 2008 crisis happened. Right? And then regulation codified a lot of this, and we had a lot of other metrics. And I feel like now we are in a new era, where we have this whole new technology coming in and a whole new ecosystem developing.

[00:24:31] Imane: Right. So,

[00:24:32] Josh: Can I ask a dumb

[00:24:32] Josh: question? So, in that scenario from the nineties and the early two thousands, where all of a sudden we had kind of mathematicians actually modeling out these risk scenarios, I feel like you were kind of saying this, but is it fair to make the assumption that people actually were more willing to take on more risk then and propel forward because they felt like they had better data on it?

[00:24:56] Josh: Is that what you were saying?

[00:24:58] Imane: I wouldn’t say that the only reason people wanted to take more risk was because of the models’ availability. I’d just say that was the whole impetus, because there was more data, more information, more connections. So there were, uh, more opportunities, but more risks as well. It’s just that technology is always a backdrop to decisions; it’s not necessarily the cause of the decisions that are made, it just facilitates decisions when they are made, basically.

[00:25:32] Josh: Yeah.

[00:25:33] Josh: Because in everything, right, there’s a balance of, again, even going back to your lion-and-the-tree scenario, right? Like, there’s a balance of data and gut,

[00:25:43] Imane: Yeah.

[00:25:43] Josh: right? And your gut is actually built on a lot of past experience and data as well, right?

[00:25:50] Josh: So, kind of to your point, like, hey, I know this route, I know that I’ve, you know, gone across it a hundred times and I’ve never seen a lion before, so the odds of there being a lion this time are pretty slim. That’s maybe more gut, powered by a little bit of past experience. But, you know, if you have a data point that says this tree is poisonous to lions, therefore I know there will not be a lion there, like, those two things: one is a little bit more on the mathematical side, one’s a little bit more on the philosophy side. And so, you know, in this scenario, we’ve now got just a new impetus of data, validation of gut, that’s helping to inform better decisions. And where I think this gets interesting, and you were kind of alluding to it: we had one big revolution that happened with the advent of the internet, and we’re kind of in a similar one with the more mass adoption and utilization of AI. I kind of always come back to... there’s definitely the, you know, concern there of, like, will AI take my job? And I’ve probably made the point far too many times, but I’ll make it again: I don’t think AI is gonna take your job, but I absolutely think that someone using AI may take your job. Right? And it’s simply because they will have better, greater access to knowledge, the ability to do operations faster, right? But again, it does take someone to validate those things. So do you think that we’re kind of coming into another age where we’re gonna have a whole new level of access to data and be able to, you know, not just make gut decisions or even data-driven decisions, but, like, far more sophisticated data-driven decisions on risk management?

[00:27:38] Imane: Yeah, I definitely think so. That’s kind of the short answer, but it would have been too easy if the answer was just, yes, definitely, and that’s it. Because, let’s give data its due: when it is clean, correct data, it gives a lot of transparency.

[00:27:58] Imane: It gives a lot of transparency, right? So, having things that have data, and something that can analyze them, gives transparency and access to information to more people. So it has a positive side for sure. But, you know, it would have been too easy if that was the only thing that’s happening with the technology

[00:28:21] Imane: changes that we are seeing now, including but not limited to AI. I would say here is how the financial system has changed, and it’ll maybe answer your question and some preceding questions. First, I think it became much more interconnected. So interlinkage and interconnectedness in the financial system have increased a lot.

[00:28:48] Imane: They’ve increased a lot because the exchange of information is now easier; entering the system is easier because the second big change is also technology, right? And when I say interconnection and interconnectedness, I mean: imagine if all the actors in the financial system were dots, and between each dot there is a line, right?

[00:29:15] Imane: That’s the interconnection. But actually, between each dot there are many lines, each one a different color, because they can be interlinked through technology, they can be interlinked because they have a bank account in each one of them, they can be interlinked because one of them is a vendor to the other one, et cetera, et cetera.

[00:29:35] Imane: They can be interlinked because one is the counterparty of the other in a trade, and so on. So not only is it very interconnected, but it’s very interconnected in different ways. So that’s a big change. And the other change is technology, as we said. And the third one is, uh, what is referred to as fragmentation.

[00:29:57] Imane: Fragmentation really means that in my map of dots that are very interlinked, there are many, many dots, which means that there are many, many firms that do one service, and many, many other firms that do another service, et cetera, et cetera. So some services become very fragmented, with a lot of actors. So all this means that there is much more data. But is that data available? Can it be collected easily?

[00:30:28] Imane: Can it be accessed easily? So these are, like, interconnectedness, fragmentation, and technology: big changes in the financial system. But the big, big one is the last one, and, uh, it’s a philosophical one as well, which is speed. When things happen, they happen fast now,

[00:30:51] Josh: Yeah.

[00:30:52] Imane: So, you know, we are used, as humans, to thinking things through, to pondering decisions, especially when there are big decisions.

[00:31:02] Imane: But now, when things happen in the financial system, they happen really fast. When Silicon Valley Bank failed three years ago, it failed in three days, I think less than three days. Everyone involved had to take really quick decisions. I mean, I can give you many examples in the financial system recently where something bad happened, and it happened really, really quickly.

[00:31:28] Imane: So that is another change, and that is, I think, one of the key ones that needs attention in how we’re building our frameworks around risk, how we’re thinking about risk now.
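
Her map of dots joined by many lines of different colors is, in graph terms, a multigraph with typed edges, and the speed point shows up in how few hops a shock needs to reach everyone. A small sketch with hypothetical institutions and link types:

```python
from collections import defaultdict, deque

# Hypothetical institutions joined by more than one kind of link at once:
# the "many lines, each one a different color" between the dots.
links = [
    ("BankA", "VendorX", "technology"),
    ("BankA", "BankB", "counterparty"),
    ("BankB", "FintechC", "deposits"),
    ("VendorX", "FintechC", "technology"),
]

graph = defaultdict(list)
for a, b, kind in links:
    graph[a].append((b, kind))
    graph[b].append((a, kind))

def hops_from(start: str) -> dict:
    """Breadth-first search: hops needed for a shock at `start`
    to touch each other node in the map of dots."""
    dist, queue = {start: 0}, deque([start])
    while queue:
        node = queue.popleft()
        for neighbor, _ in graph[node]:
            if neighbor not in dist:
                dist[neighbor] = dist[node] + 1
                queue.append(neighbor)
    return dist

print(hops_from("BankA"))  # every institution is 1-2 hops from one failure
```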

[00:31:43] Josh: Yeah. You know, one of the other things you were talking about, I think, really hit me is, you know, you’re talking about just good clean data and access to all of the data, right? Like, how many systems are you missing as a part of that puzzle? and I’m gonna

[00:31:58] Imane: E.

[00:31:58] Josh: keep trying, because I’m a simple brain, to bring it back to your lion example, you know. And it’s like, yeah, if we’re making a decision that says we’re a hundred percent confident there is no lion on the other side of that tree, but we’re missing a data point that says a pack of lions literally just got rained out of their home and has rehomed right behind that tree,

[00:32:20] Imane: No, exactly. or, or,

[00:32:22] Josh: that data point might be pretty important.

[00:32:23] Imane: Yeah. Or if it was poisonous, but they developed a resistance to it, and you didn’t know.

[00:32:29] Josh: And you didn’t know,

[00:32:30] Josh: right? And you’re running on old assumptions.

[00:32:33] Imane: Yes, exactly. And you know, we’re facing a lot of uncertainty right now, and we faced a lot of uncertainty in the past as well. But one thing is, we humans, but also, I mean, I’ll keep it to the financial system, a smaller ecosystem,

[00:32:50] Imane: we sometimes have to take decisions with incomplete information,

[00:32:54] Josh: Yeah.

[00:32:55] Imane: either because the data is not there, or now because the time is not there. So this is one area where maybe some safe, responsible AI can help us, but it has to be safe and responsible as well. For now, as you were saying, I’ll use your words: it’s like a mix between gut and models and data.

[00:33:22] Imane: it has to work that way, because we don’t always have all the information.

[00:33:28] Josh: Well, and this is why, you know, experts will always be needed, right? And again, you used this example before we hit record, and I wanna bring it back up again: talking about, yes, we now have AI tools that can help identify cancer faster. We still need a doctor,

[00:33:47] Imane: Yes,

[00:33:48] Josh: right? And we still want to check that stuff. Because even at 99 with a lot of point nines behind it, you could still be that one person who goes in and it says you have cancer and you don’t. Like, that’s gonna be a real rough day for no reason.

[00:34:03] Imane: Yes. Yeah, exactly. That’s the analogy I would, uh, use. And the other thing to consider: there is this movie, I dunno if you know it, it’s called Margin Call. It was out, I think, in 2010 or 2011, after the 2008 crisis.

[00:34:20] Josh: Yeah, I think I’ve seen it, but not in a long time.

[00:34:23] Imane: So, yeah, that movie has a scene where they are all sitting in the boardroom, and the crisis has been going on for weeks and so on, and they realize, like, something really bad is gonna happen, and they have to take a decision about what is gonna be good for their firm to do.

[00:34:41] Imane: And in that boardroom, there are a lot of officers, a lot of senior people, but the one person the CEO asks is the mathematical modeler, who is like an analyst there, who did the model that showed, you know, the scenarios.

[00:35:03] Imane: They showed that, actually, with the information we have, it can be really, really bad. So he asks him, like, explain to me what this means. And he says, well, it means that if this situation worsens by 25%, the loss is going to be bigger than the whole value of this firm. So the CEO in that movie says, okay, so what should we do?

[00:35:29] Imane: So some of the traders on the board say, we should do A, B, C, D. And then he says, if we do this, where is this gonna come back to us? And the answer was: everywhere. So that everywhere, if I remade that movie now, I would say everywhere, fast, very fast. That’s the interconnectedness and speed that I’m talking about. You can see it very clearly when there are crises now; it happens very fast.

[00:36:08] Imane: You don’t always have the data, and exactly as you said, that’s why it sometimes helps to have experts that come before the crisis hits you and give you some more information, sometimes even do, like, what-if scenarios to see if you are prepared, do some tests. This is kind of the preparedness that is needed,

[00:36:32] Imane: before you are in the middle of a crisis and you’re not prepared.

[00:36:36] Josh: Well, and that kind of goes back to one of the things you started talking about earlier, which is, you know, at the same time you’re kind of looking at: what’s the worst that could happen, and what’s the best that could happen? And that’s just with what you know, right? And so again, this all comes down to probabilities

[00:36:55] Josh: when we’re thinking about risk management, right? Actually, you know, it’s funny, now I wish I had it. I don’t know why the, like, Instagram algorithm chose to show me this, ’cause I’m not actually a gambler, but I found it hilarious and it stuck out in my brain. And now I know why: apparently my brain was thinking about this podcast before it even happened.

[00:37:14] Josh: But literally, like, two days ago I saw this meme pop up on Instagram. It was a billboard on the side of a freeway, and the billboard was, like, an anti-gambling or help-with-gambling-addiction type of message. And it said, you know, 20% of gamblers end up in bankruptcy. And then the caption was: a solid gambler will take an 80% probability.

[00:37:39] Josh: I’m gonna win all day, every day. Right? And it’s like, yeah, what’s your frame of mind in that? What’s your risk tolerance? And then, you know, what’s the worst that could happen? And they’re like, oh, the worst that could happen is a 20% chance I go into bankruptcy, so there’s an 80% chance I win. Let’s go.

[00:37:59] Imane: Well, you know, yeah. No, I mean, this is a good example because, you know, gambling is also about probabilities. But you said risk tolerance, and one of the things that many firms use to manage risk is, uh, the notion of risk appetite,

[00:38:14] Imane: and the notions of risk appetite and risk tolerance are a little bit different, because risk tolerance is what you can live with, and risk appetite is what you must live with.

[00:38:24] Imane: So that’s kind of a,

[00:38:26] Josh: I didn’t realize that.

[00:38:27] Imane: That is the distinction. So risk appetite, generally, is when firms decide strategically, at a certain level, collectively: if we take all our activities, we don’t want them to exceed... we are gonna define some metrics, because you can put the risk appetite on metrics that you think represent your activity properly, and then you put a limit on them, and you say, okay.

[00:38:54] Imane: For example, it can be a stop loss: I’m gonna do this activity, but when I hit this loss, I stop, we don’t do more. That’s kind of one way you can do a risk appetite. And risk appetite is very interesting as a notion because, you know, I spoke about the mathematical side, but there is also the philosophical side, where you’re gonna see that your view of that number is very different from your wife’s, right?

[00:39:21] Imane: Right? It’ll be very different. And this is where, at the moment, there isn’t an exact science. There isn’t yet an analysis that will tell you, hey, that number should be this number, that is the right number for you. That is still in the realm of decisions and discussions and behavior.

[00:39:46] Imane: But it still needs information, though, because that number cannot be random. So risk appetite, uh, and risk tolerance are very important notions in risk management in the financial system as well.
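
The stop-loss flavor of risk appetite she sketches can be read as a hard limit on a chosen metric, here cumulative net loss, that halts the activity when breached. A minimal sketch; the limit value and the P&L stream are hypothetical:

```python
# Hypothetical risk appetite: halt the activity once cumulative net
# losses reach $2M (the stop-loss style of limit Imane describes).
STOP_LOSS_LIMIT = 2_000_000

cumulative_pnl = 0.0
daily_pnl = [-400_000, 250_000, -900_000, -1_100_000]  # hypothetical stream

for day, pnl in enumerate(daily_pnl, start=1):
    cumulative_pnl += pnl
    if cumulative_pnl <= -STOP_LOSS_LIMIT:
        print(f"Day {day}: stop loss hit, halt the activity")
        break
else:
    print("Still within appetite, activity continues")
```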

[00:40:02] Josh: Can I put some words in your mouth? And I, I’d love to see,

[00:40:05] Imane: Yeah, sure, sure.

[00:40:06] Josh: what your thoughts are on this. So if you put yourself in the, you know, shoes of maybe a risk officer for a community financial institution here in the US, right? When you wear that risk hat, there’s a lot of different risks that they are potentially looking at, right?

[00:40:22] Josh: And that they’re trying to mitigate. And it’s everything from, you know, the risk of fraud against their account holders from certain channels or payment methods, to, you know, maybe they’re involved in conversations with the, you know, CFO and the CEO and the board about the risk of some of the investments they’re doing with some of their deposits, to, you name it, right? And so, you know, they’re looking at all of these different parameters, all the different nuances. Would it be safe to assume that, you know, in an ideal scenario, your recommendation is that you also have somebody playing a data role, whether they’re inside the organization or brought in from outside the organization, to look at as much data as you possibly can?

[00:41:09] Josh: All of those disparate systems, pulling those data pieces together, and being able to present, with as much data as possible, kind of clear scenarios of: what’s the worst-case scenario, what’s the best-case scenario, what are our potential outcomes? And then having kind of a conversation, probably at the board and the senior leadership level, of: what is our risk tolerance, what is our risk appetite? And then kind of having that, you know, final layer of experts, again, whether internal or externally brought in, for kind of consulting on those major decisions that have especially the largest risk or reward.

[00:41:55] Imane: Yes. I mean, that is generally a good practice. ’Cause look, if you are gonna make a big decision, you’d rather make it an informed decision, right? So if you have information internally and you can put it together, then when you are taking the decision, you base it on some numbers, you base it on some analysis, you base it on some diagnostics.

[00:42:22] Imane: So I’m looking again at the doctor analogy here. If you are diagnosing something and you wanna give the best treatment, you do blood tests, you do X-rays, you assemble a lot of data. A lot of which, in your example, is like the scenario: can we check this metric, this gauge of good health?

[00:42:47] Imane: Can we check this gauge, and so on, and find what the vulnerabilities are in each one of them? So definitely it’s good practice, whether you are a risk officer or a chief financial officer or literally just someone doing a job where they want to reduce vulnerabilities as much as possible, to look at the available information and try to extract from it the information about the risk and reward.

[00:43:20] Imane: And actually, finally, you touch on a very important point, because we said the financial system has changed a lot. And I think your question, and now I’m putting the words in your mouth, is: do we need to change also how we risk-manage it?

[00:43:39] Josh: Yeah.

[00:43:40] Imane: And I think the answer is yes, and definitely data scientists have a role in it.

[00:43:47] Imane: You cannot have a financial system and a whole world moving to embrace AI, but not have the same people, at least the same expertise, involved in risk-managing it.

[00:43:59] Josh: Hmm.

[00:44:01] Imane: So that’s kind of my, uh, answer to, to that.

[00:44:05] Josh: Yeah, so, I mean, that makes me think, you know, as I replay in my head talking through, you know, kind of what an ideal scenario would look like at a financial institution, I’m sure anybody who works in risk at a financial institution is like, yeah, no, actually, duh. Like, we’ve been doing this forever.

[00:44:22] Josh: That’s how we do that, right? So what are the things that are maybe, like you said, kind of changing? Or, you know, if you were to do kind of an evaluation of your institution’s risk management processes, and I guess almost even resources, where would you recommend they start, and what would be some of the conversations that you would recommend they be having?

[00:44:50] Imane: So I think the starting point should be, like, a stocktake of the available information. What’s the information that informs our risk decisions?

[00:45:04] Josh: Mm-hmm.

[00:45:05] Imane: A stocktake. And then after the stocktake, this is where, internally or externally, you ask, uh, the various respective internal teams:

[00:45:19] Imane: Where do you see the gaps? Do we have any blind spots here? Are there things like, you know, project priority number 22, where we have to improve something, that should actually be number one? Like, have we thought about what should be our priority number one here? Are our priorities set up properly?

[00:45:40] Imane: You know, that’s kind of the discussion; that should be the starting point. And then the second thing I would say is looking at information in a holistic manner. And that’s a big word, holistic, and we use this word, but, uh, I’ll tell you a story from one of my experiences.

[00:46:03] Imane: I was working for, uh, the Bank of England during COVID, right? And obviously COVID was such a shock to all of us, in that we never had that scenario, that scenario never happened before, as in: everything was shut down, everything was uncertain. And in the financial system, that translated to

[00:46:32] Imane: everyone dashing for cash. It was called the dash for cash. Everyone who wanted some cash tried to get hold of some cash, which in the financial system meant that, basically, central banks needed to intervene; there needed to be some actions and so on. But what that led to is we had to sit down and consider the system holistically.

[00:46:56] Imane: There is even a report called the holistic review of events during COVID, that was written by, um, the Financial Stability Board, which is like a global authority. But basically, that was holistic: we had to look at everything at the same time. So, fast-forward to now, and to your question, I would say decision makers sometimes need, now actually more and more,

[00:47:23] Imane: to challenge themselves to think holistically, because when things move now, they move together. When there is an event that might seem remote, you still need to consider, on the whole scale of things: does this impact my risk, yes or no? Does this impact my reward, yes or no? So that holistic approach is something I feel like we need more and more, because usually now it’s siloed.

[00:47:50] Imane: So you would have the people that look at fraud and have the lens on fraud. You’ll have the people that look at technology risk and have the lens on that, and so on and so forth. But I think, more and more now, we need to bring these strands of risk together and consider the problem as a whole.

[00:48:08] Imane: And unfortunately, as I said, when crises happen fast, you don’t have the time during the crisis to start understanding the complexity of it, how this links to that, and so on. So, uh, yeah, sorry, just to conclude on this with my example of, uh, healthcare:

[00:48:28] Imane: Imagine if, um, there was an emergency and you go to the hospital and they’re still figuring out, oh, do we need the neurosurgeon or do we need the heart surgeon? We don’t know exactly which one of them we need. Right? That would be really bad.

[00:48:45] Josh: Yeah.

[00:48:46] Imane: Firemen, doctors, nurses, they are trained, and they do tests, and they prepare, because they want to act very fast and get the right person very quickly.

[00:48:58] Imane: So I think that’s the analogy I would use.

[00:49:01] Josh: I’m glad you brought up the pandemic, because I have a friend who loves to use the saying, you know, I didn’t have that on my bingo card. And, like, I would agree. I think, you know, pretty much all of us didn’t have a global pandemic, where humans were not allowed to interact with each other and we were just gonna shut the entire world down, literally overnight.

[00:49:20] Josh: I don’t think many of us had that on our Bingo card.

[00:49:23] Imane: Yeah.

[00:49:23] Josh: Right. But what was the learning from that, from a risk management perspective? To say, like, hey, we cannot always have this myopic view of our bingo card, that it always has, you know, five squares by five squares, and that’ll cover all my scenarios.

[00:49:41] Josh: You know, how did we take learnings outta that to say: my bingo card needs to be a lot bigger, and I need to consider some ridiculous stuff? Like, I mean, do we need to be considering that? So, this is where my brain works sometimes; this is the conversations my wife and I have over dinner. My wife is a huge Jurassic Park fan, and she was just reading an article about somebody that’s actually working on, like, doing exactly Jurassic Park. Like, literally

[00:50:07] Imane: Yeah, I, I saw that

[00:50:08] Josh: the movie. They’re working on it, right? And my wife was like, hell yeah. Like, let’s go for it. She’s like, I know exactly how that movie ended.

[00:50:15] Josh: She’s like, but I also saw, like, don’t do stupid things. Don’t cook bacon in a trailer near the T-Rex. Like, we know how to avoid this. She’s like, but I definitely wanna see some T-Rexes. So she’s like, let’s go, let’s bring this program on. Anyway, I digress. Where I’m going with that is, like, should that be on our bingo card?

[00:50:31] Josh: Should we have on our bingo card some, you know, crazy scientist creates Jurassic Park and T-Rexes run amok in New York City? Like, should that be on my bingo card? Like, I don’t

[00:50:41] Imane: Uh, well, I mean, if I were to quote Yuval Noah Harari, who, you know, is a famous historian, and he wrote the book Sapiens and other books, and I agree with him on this: we already have that scenario.

[00:50:54] Imane: It’s ai,

[00:50:56] Josh: Hm.

[00:50:56] Imane: It should be on our bingo card, right? If we move in five years’ time to a world where a lot of decisions are made by AI, like, what can happen?

[00:51:07] Josh: Yeah.

[00:51:08] Imane: You may have on your bingo card an AI that’s gonna shut something down because it’s not optimal for its operation. I dunno if you saw the article recently where a model that knew it was gonna be shut down rewrote code so that it’s not shut down.

[00:51:26] Josh: Yep.

[00:51:27] Imane: I think we should realize that, first, we do have the Jurassic Park bingo card already. This is just my opinion, of course; I know many other people have different opinions. And, to be clear, I think AI has a lot of benefits as well. But I do see this risk for sure, for now.

[00:51:50] Imane: Then the second thing is about the pandemic, or these extreme scenarios. We cannot claim that we will know every single bad scenario that’s gonna happen. Maybe if physicists make big quantum leaps, we will be able to move in time, whatever, but we’re not there yet. At the moment, we cannot predict;

[00:52:14] Imane: we cannot know the future. There will always be events that we have not anticipated. However, we can be prepared. Meaning, to my earlier point, my own biggest lesson learned from COVID is: the middle of a crisis is not a good time to try to understand complexity, whilst you’re trying to solve the crisis and at the same time trying to understand how things work.

[00:52:45] Imane: If you are in that situation, this is not a good situation. So, the mitigant for... actually, luckily, we were not in that situation then, because, at least on the financial system, central banking side, there were a lot of experts who were put together very quickly. The technology was there for that; they could work remotely.

[00:53:11] Imane: But I’m thinking that if you now look at the level of each institution, the question to ask yourself is: am I prepared? Do I understand what all my dependencies are, or at least not all, but most of my dependencies? ’Cause if there is a big crisis, I don’t wanna be at that moment trying to understand how this process works and,

[00:53:37] Josh: Yeah.

[00:53:38] Josh: You know, that’s interesting. You know, as you were talking, it makes me think, again, going back to your kinda, like, prehistoric

[00:53:44] Josh: examples, right? Like you think about, I think sometimes our circumstances will also dictate our risk versus reward system, right? And sometimes they accelerate our ability to do things that we didn’t think that we could do because of maybe a perceived risk.

[00:54:04] Josh: And you kind of used the example of, you know, how many financial institutions said they couldn’t do remote work for years and then overnight did it because of the pandemic, right? And you’ve got your hunter-gatherer, and, you know, there’s a field of blueberries, but it’s across a canyon that they would have to jump, and they don’t think they can jump it. And so forever they don’t jump it; the risk is, you know, greater than the reward. But then all of a sudden their side of the canyon lights on fire, and they have no choice but to either burn or jump,

[00:54:38] Josh: and then they jump and they make it, and they’re like, ah, crap, I could have jumped this weeks ago and gotten the blueberries.

[00:54:43] Josh: You know, sometimes we have those kinds of learnings too. Again, are you gonna be able to plan for all of those scenarios, and, you know, would that influence... again, it’s just kind of going back: if our caveman knew, well, one of the scenarios is my side could light on fire, I should probably figure out how to jump this before it comes. Like, he

[00:55:05] Josh: didn’t have that on his bingo card, but he had to figure it out in the moment

[00:55:07] Josh: and it wasn’t like trying to analyze everything about it. It was like, ah, crap, I just gotta jump this thing.

[00:55:13] Imane: No, exactly. And that’s why, like, for example, there are a lot of analogies we can make here. Like, when

[00:55:19] Imane: , navigators, when they had to take the seed, that’s a very risky endeavor, right? So you had to, to take it. But then, navigators equipped themselves with Compass. In the beginning they were looking at the stars, then they equipped themselves with Compass.

[00:55:34] Imane: Then they had the masts and sails a certain way, and then they had engines. So that’s technology coming into ships, right, for them to be able to navigate. So think about risk management as how you navigate the sea. You don’t know where the wave is gonna come from. You dunno if Moby Dick is there. You don’t know many things.

[00:55:52] Imane: So basically, you just need to, uh, navigate to the best of your ability. So you put technology on your side, right? But the technology you have, you understand; you don’t have on your ship something that you don’t understand how it works. So that’s the first thing. And then the crew is trained.

[00:56:11] Imane: So there is a crew, they know how to work together, they are trained, they know what to do, but also they have backups. So it’s like a whole ecosystem: the ship knows how to navigate, but it’s braving, like, incredible risks. And I think now we have the technology, things move fast.

[00:56:34] Imane: We can get some data; we can get more data than ever before to inform our decisions. So now we need to rethink how we can operate most effectively and, to use this word, holistically, within an institution as much as possible. So,

[00:56:55] Josh: Yeah. No, I like that analogy, when you think about, yeah, the advancements in just, you know, navigating the seas on a ship. And you know, it’s a great point: like, if you have a compass, but you don’t know if it actually works for your application, then it’s pretty useless, right? If it points north most of the time, but sometimes it doesn’t,

[00:57:16] Josh: doesn’t do you much good. Right. And at the same time, let’s say you have a compass that works perfectly, but your crew has absolutely no idea how to use it, then it is a useless, very powerful tool.

[00:57:29] Josh: And you know, that’s kind of exactly what we’re coming into with AI is we now have a tool, we’ve gotta make sure we have it aligned to the appropriate use cases and it’s the right tool for the right job and then it works.

[00:57:44] Josh: And then we’ve gotta have people that understand and validate and the experts that say, yes, this thing really is still truly pointing north. But you know, again, in your shipping example, like you think about. You know, somebody who has, a shipping company and they’ve got a crew that navigates by the stars in a wooden boat with a mast and a sail. And then you’ve got a company with a giant diesel, you know, container ship and GPS and oh, I hate to break it to you, but who’s gonna win that? You know,

[00:58:20] Imane: Uh, the,

[00:58:21] Josh: The winner between those two companies is the one that's got the tech and is using it, right? And so I think the same thing applies in our scenario.

[00:58:27] Imane: If it is navigated by a prudent captain.

[00:58:31] Josh: Yeah. Great point. Yeah.

[00:58:34] Imane: Yeah, so that's why people are very important here as well.

[00:58:40] Josh: What was the, oh gosh, I'm gonna get it wrong, I wish I could remember exactly where it was. Our CTO has a funny story where he needed to go somewhere for work, and I want to say it was San Jose, as in San Jose, California. I'm gonna get this wrong, so I apologize.

[00:59:02] Josh: The geography experts are gonna chastise me later, but there's a San Jose in, like, Puerto Rico or somewhere, right? I mean, we're talking a pretty significant difference. And he got the ticket to the wrong place, shows up, and is like, oh yeah, something felt wrong. But he was literally on autopilot, not paying attention.

[00:59:25] Josh: And I'm not even kidding you, man, it was to the level where, you know, when you take a domestic flight you just get on with a driver's license, but when you take an international flight you have to go through customs,

[00:59:36] Imane: Yeah, yeah,

[00:59:37] Josh: and all of that. He went through all of it and never once thought, this seems weird. He was like, oh no, look, this is where my ticket is, this is where I'm going. He sits down and he's like, that was funny, everybody's wearing totally different clothing than I'd expect for this, people are speaking different languages, whatever.

[00:59:54] Josh: It's just normal. You know what I mean?

[00:59:55] Imane: yeah. No, I

[00:59:56] Josh: And so he had everything at his disposal, right? I've given him a little bit of a hard time, but if you're not paying attention, sometimes you can go down a totally wrong path, even with the best of tools.

[01:00:09] Imane: Yes, definitely. In risk management jargon, this is called operational risk. A lot of things sit there, which are basically risks that the logistics didn't work, or the technology didn't work, or the actual vehicles didn't work: the way your operations were supposed to go just didn't happen.

[01:00:37] Imane: Although, actually, in what you described I'm also surprised, because the checks must have failed in this scenario too. Those who checked his tickets did not check whether it was

[01:00:52] Josh: Oh, no, no. He bought the ticket to the wrong place. That’s

[01:00:54] Imane: ah, initial, ah,

[01:00:55] Josh: Yeah, yeah, yeah,

[01:00:57] Imane: So at inception, yeah.

[01:00:59] Josh: So it was literally, he didn't even buy it. I think his wife or someone bought it for him, and he just said, I need to get to San Jose. She looked for the first flight to San Jose, and he was in Florida, so maybe

[01:01:15] Imane: I see. Okay. So,

[01:01:17] Josh: you know what I mean?

[01:01:18] Imane: Okay, no, I see what you mean. It's not exactly the same thing, but what this brings to mind is when traders make a fat-finger error.

[01:01:29] Josh: oh yeah.

[01:01:30] Imane: And this can sometimes lead to a big meltdown in the markets and so on. So again, this is something that risk controls in institutions should pick up: the case where you think you're entering the right information but you're actually entering the wrong information.

[01:01:49] Imane: But it's an interesting one.

[01:01:51] Josh: That's a funny example from my own life of, yeah, the risks of moving too fast. And

[01:01:59] Josh: I, uh, this was probably, gosh, I don’t know, 10 years ago or so,

[01:02:04] Josh: but I remember I went to make a payment on a credit card at one institution from an account at another institution, right? So I go into the credit card and say, pay from this previously set up source account. And the credit card bill, I remember it to this day, was like $3,000 and change. I went to pay it off, fat-fingered an extra zero, and it made a $30,000 payment.

[01:02:36] Imane: Wow.

[01:02:36] Josh: Right. And I didn't have that much money in that checking account.

[01:02:40] Josh: So it significantly overdrafted and caused all sorts of other problems. Scheduled payments failed, you name it; it was this colossal outpouring of problems. And you know what, I called the institution and said, guys, you can clearly see I fat-fingered and hit an extra zero, right? And at the end of the day it was, that was on you, bud.

[01:03:02] Josh: You fat-fingered it and you hit yes. I was like, all right, I guess I did, and I had to back out of it. Anyway, that's kind of irrelevant, but it's just funny to think

[01:03:09] Imane: No, I think, yeah, I mean

[01:03:11] Josh: of it like that. That is a risk, right?

[01:03:12] Imane: that is a risk, but that

[01:03:14] Josh: Going back to your bingo card: one of your employees could just fat-finger something the way I did in that scenario, and that's your risk.

[01:03:22] Imane: Yeah, that is a risk, and this risk happens in many ways. For example, there is the risk where an employee fat-fingers an order and adds one zero, or two zeroes, or many zeroes; that's one risk. Another risk is in some financial systems, in particular algorithmic trading: when algorithms trade automatically, they often have a functionality called a kill switch.

[01:03:49] Imane: Meaning that if the algorithm starts doing something crazy, someone goes and flips the switch to turn it off. So again, there are questions: when do you turn it off? Who turns it off? Imagine someone forgets to turn it off, or doesn't see when it needs to be turned off. And there are many, many other examples like this all across the financial system, right?
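
As a rough illustration of the kill-switch idea, here is a minimal sketch of a circuit breaker that halts an algorithm once the notional it has sent in a rolling window crosses a limit. The class, the single notional check, and the thresholds are assumptions for illustration; real kill switches at firms and venues also cover message rates, position limits, and a human-operated hard stop.

```python
from collections import deque
import time

class KillSwitch:
    """Halt trading when notional sent in a rolling window exceeds a limit."""

    def __init__(self, max_notional: float, window_s: float = 10.0):
        self.max_notional = max_notional
        self.window_s = window_s
        self.orders = deque()   # (timestamp, notional) pairs
        self.halted = False

    def record_order(self, notional: float) -> bool:
        """Record an outgoing order; return False once trading is halted."""
        now = time.monotonic()
        self.orders.append((now, notional))
        # Drop orders that have aged out of the rolling window.
        while self.orders and now - self.orders[0][0] > self.window_s:
            self.orders.popleft()
        if sum(n for _, n in self.orders) > self.max_notional:
            self.halted = True   # trip: stop sending orders
        return not self.halted

switch = KillSwitch(max_notional=1_000_000)
for notional in [200_000, 300_000, 600_000]:
    if not switch.record_order(notional):
        print("Kill switch tripped; halting the algorithm.")
        break
```

The questions Imane raises, who trips it and when, live outside the code: an automatic trigger like this only helps if someone owns the decision to rely on it, and on what thresholds.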

[01:04:12] Imane: And let me tell you one for the bingo card of extreme events. I don't know if you saw, but a few weeks ago, I think it was the end of April, Spain and Portugal, two big countries, had a total blackout. The power grid went down.

[01:04:31] Josh: I heard about that.

[01:04:32] Imane: So the two countries didn't have power for, I think, 24 hours or 20

[01:04:40] Josh: Yeah, it was, I think it was pretty close to a full day, wasn’t it?

[01:04:42] Imane: It was close to a full day. That means that during that day, in the places that had backup generators, the backup generators did something. In the places that didn't have a backup generator, you couldn't buy groceries, you couldn't buy water, you couldn't do anything. Transport didn't work. Nothing worked.

[01:05:03] Imane: There is still an investigation into what exactly happened, but in terms of risk management, let's now translate it to the financial system. Say you operate in a certain region, and you rely on that region for access to the exchange every day and to get some liquidity from that exchange every day.

[01:05:23] Imane: So what happens if there is no power?

[01:05:26] Josh: Mm-hmm.

[01:05:27] Imane: I'm not saying that everyone needs to know the answer to this question. But if someone has a big activity somewhere, that's a scenario to ask about: okay, what's the cost? What's the price? If I'm doing certain types of contracts, what happens to them, and things like that.

[01:05:46] Imane: At least it should be understood a little bit, because, you know, it just happened in two countries at the same time, and it happens in many regions of the world quite often, actually.

[01:06:00] Josh: Well, that kind of brings up a point too. I think you made the statement earlier that there's no expectation we're going to have a perfectly filled out bingo card covering every scenario; we don't have a crystal ball, right? I had a conversation with a financial institution a while ago whose area had a power outage, and I said, oh, but you guys run backup generators, so that was fine. And they said, oh yeah, it was fine, except our backup generators failed. So they were completely dead in the water. And when you hear about those types of scenarios, it's like, okay, so do you have a backup generator for the backup generator? At what point is it ridiculous? At what point do you have the 32nd backup to the 32nd backup, and at what point does it just not make sense? There's almost this thought process, and I'm curious what your thoughts are, that risk management only goes to a certain point:

[01:07:06] Josh: we should have planned for that, we should have prepared for this; hey, if this happens, we've got to be focused on getting this system back online. But there's also a point where, if it goes south far enough, you and I are fighting over freeze-dried food and ammunition and cigarettes, right? There's a point where none of this matters anymore, and by the time you get to that 32nd backup generator, something is happening where that's probably not the biggest concern. So as a risk management officer, how do you sleep at night knowing that perfection is impossible and a crystal ball is impossible, while still feeling good about: I have a backup generator, and I know what I would do if the backup generator failed too? Is that a certainty? No. Is it in the realm of the possible on my bingo card? Probably.

[01:08:07] Imane: Yeah, I think that's totally the right approach, exactly as in this example. I've already mitigated first order, maybe second order as well: I have a backup generator and a backup to the backup, for example. And if both fail, I know what to do. I've been transparent with all my clients, and with the other stakeholders I deal with, about what backups I have.

[01:08:35] Imane: Right? I've been transparent with my employees that these are the plans. And in terms of managing the balance sheet and so on, I've included something for planning for the unexpected. For example, in some regulated parts of the financial system, the regulators actually make you hold some liquidity, some cash, for these things, according to some calculation. But basically, I think that's the realm of it.

[01:09:11] Imane: In addition to that, and I think it's Warren Buffett, I'm going to paraphrase because I don't remember exactly how he said it, but he said something like: the biggest risk is not knowing what you're doing. And I completely agree with that. It comes down to understanding where we operate, with whom we operate, and the systems and processes that are being used. They cannot be

[01:09:43] Imane: perfect at every point in time, but at least they're regularly reviewed and kept up to scale and standard. I think that's good practice. And obviously best practice means you have cutting-edge data, cutting-edge tools, models and metrics, and you're involving experts.

[01:10:02] Imane: So that's best practice, and then you've got sophisticated practice. Sophisticated practice is when you enter the realm of predictive models: very granular, super granular data, more complete, and a much bigger suite of scenarios.

[01:10:21] Imane: Which is, again, like the bingo card we talked about: how many rows and columns do you have in it? How many cells? Some people will have a 10 by 10; others will have a 10,000 by 10,000. And depending on what your business activity is and what its complexity is,

[01:10:43] Imane: you might not need the 1 million by 1 million card.
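
One way to picture the rows-and-columns point: a scenario card is effectively the cross product of the risk factors you choose to vary, so the cell count multiplies quickly. A tiny sketch, with entirely made-up factors:

```python
from itertools import product

# Hypothetical risk factors and the levels each one is stressed at.
factors = {
    "power_outage_days":    [0, 1, 5],
    "commodity_price_move": ["-20%", "0%", "+50%"],
    "broker_status":        ["solvent", "failed"],
}

scenarios = list(product(*factors.values()))
print(len(scenarios), "cells on this bingo card")   # 3 * 3 * 2 = 18
```

Every added factor or level multiplies the count, which is why a simple business can live with a coarse grid while a complex institution may need a far finer one.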

[01:10:46] Josh: Yeah, totally, I think that's a great point. There is such a thing as overkill too,

[01:10:53] Josh: right? You can almost think about risk management like insurance, right? And if you are, I don't know, a small mom-and-pop donut shop with an annual revenue of a hundred thousand dollars, is $50 billion worth of insurance necessary? No.

[01:11:16] Josh: And your premium is gonna be insane

[01:11:19] Josh: and you're not gonna be able to afford it. Same thing with the backup generators example: can you even afford the 32nd backup to the 32nd backup, and is that a good ROI?
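
That ROI question can be put in rough numbers. If each generator fails independently with some probability during an outage, the chance that all of them fail shrinks geometrically, and so does the expected loss the next backup removes. A back-of-the-envelope sketch, with every figure assumed for illustration:

```python
# Assumed figures: each generator independently fails with probability 0.05
# during an outage, a full outage costs $500,000, a generator costs $20,000.
p_fail, outage_cost, generator_cost = 0.05, 500_000, 20_000

for n in range(4):
    expected_loss = outage_cost * p_fail ** n            # all n backups fail
    saved_by_next = expected_loss - outage_cost * p_fail ** (n + 1)
    print(f"{n} backups: expected loss ${expected_loss:,.0f}; "
          f"backup {n + 1} would save ${saved_by_next:,.0f} "
          f"against a ${generator_cost:,} cost")
```

With these invented numbers the first backup is clearly worth it, the second is marginal, and the third already costs more than the expected loss it removes, which is the 32nd-backup point in miniature.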

[01:11:30] Imane: Yeah. Well, the donut shop reminds me of another example, which is also about what you want to risk manage. Let me explain with an example: a donut shop that also sells hot chocolate, and they're famous for their chocolate donuts. I don't know if such a thing is a thing, but

[01:11:58] Josh: I don’t know, but it makes me hungry.

[01:11:59] Imane: yeah.

[01:12:00] Imane: So anyway, I think it was a year or two ago, cocoa prices spiked worldwide. I think it reached over $10,000 a ton; cocoa was as expensive as some precious metals. It was very, very expensive. And this is because West Africa produces, I think, 70% of the cocoa in the world, and they had extreme weather.

[01:12:32] Imane: So production was down, I think in Ghana, Ivory Coast, and other countries. A lot of speculators, but also a lot of traders, producers, et cetera, started being bullish on cocoa, and prices just increased. So if you are somewhere that depends on the supply of cocoa, even if you are a small business, you will be impacted.

[01:13:02] Imane: Right.

[01:13:03] Josh: Yeah.

[01:13:03] Imane: And if you had been thinking about this two years ago, maybe you would not have thought you needed to follow the weather in West Africa, right?

[01:13:15] Imane: And yet it's very important to your business. So my point here is: yes, not everyone needs overkill, but you need to understand your dependencies.

[01:13:29] Josh: Yeah,

[01:13:30] Imane: And it's very important to understand what can derail your business. What are the dependencies I have here? I am super famous for my chocolate donuts, so I need really good quality cocoa; that commodity is very important to me.

[01:13:49] Josh: yeah,

[01:13:50] Imane: So even when you are a small business, it's good to take that reflection: where are my dependencies, and how can I mitigate them if I have to?
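
That reflection can be as lightweight as a list: each input, the revenue that rests on it, and what a plausible price shock would cost. A toy sketch with invented donut-shop numbers (nothing here is real data):

```python
# Invented figures: revenue that depends on each input, the current annual
# cost of that input, and a plausible relative price shock to stress.
dependencies = {
    # input: (dependent_revenue, annual_cost, shock)
    "cocoa":       (60_000, 12_000, 1.0),   # price doubles (hypothetical)
    "flour":       (90_000,  8_000, 0.3),
    "electricity": (100_000, 6_000, 0.5),
}

for name, (dependent_revenue, annual_cost, shock) in dependencies.items():
    extra_cost = annual_cost * shock
    print(f"{name}: +${extra_cost:,.0f} a year under the shock, "
          f"against ${dependent_revenue:,.0f} of dependent revenue")
```

Even this crude table makes the cocoa exposure jump out, which is the whole exercise: knowing which line item deserves a hedge, a second supplier, or at least a watchful eye.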

[01:14:03] Josh: Yeah. What's the old saying, right? What does the price of tea in China have to do with this? And I would argue: probably more than you think.

[01:14:13] Imane: Yeah, and again, going back to interconnection: everything is very interconnected right now. Oh, I have another good example for you here, one especially relevant to community banks and local institutions that try to serve their communities, because what happens to those communities is very important to them.

[01:14:36] Imane: Right. So this is an example. There was a big European debt crisis, roughly speaking between 2010 and 2012, okay?

[01:14:58] Josh: Okay.

[01:14:59] Imane: What does that have to do with impacting a lot of farmers in Minnesota and other states?

[01:15:06] Imane: You're like, what? Well, there was a European debt crisis, and there was a broker called MF Global. I don't know if you've heard the name.

[01:15:17] Imane: That broker was operating in many markets, including the commodities markets. And a lot of farmers who were producing grains or hog feed, for example, were using that broker to lock in their prices, which is a standard practice.

[01:15:33] Imane: You want to hedge; you want to lock in the grain price you're going to sell at. So you have an account with a broker, and he's going to sell and buy futures for you to lock your prices. That is common price risk management that you would do as a small or big producer, right? So they had accounts with this broker, but this broker was also betting on European debt.

[01:15:59] Imane: And when European debt went down, that broker went down, and some of these farmers had their accounts with it. If you Google the name you'll find, and I remember at the time reading headlines on CBS, CNN, a lot of US media: farmers impacted by a financial institution's failure.

[01:16:19] Imane: And you're like, what? Because of European debt? What is the link here?

[01:16:25] Josh: That’s

[01:16:26] Imane: And how would you ever make that link, when you're hedging your hog feed or your grain or whatever commodity? And that was back, I think, in 2011, about 14 years ago. Imagine the interconnection now.
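
Mechanically, the lock-in works like this: the farmer sells futures today, so a later fall in the cash price is offset by a gain on the futures leg, and vice versa. The catch in the MF Global story is that the offset lives in a brokerage account. A minimal sketch with illustrative prices, modelling broker failure crudely as losing the futures leg (in reality customer funds were frozen rather than cleanly zeroed):

```python
def hedged_revenue(locked_price: float, spot_at_sale: float, bushels: float,
                   broker_solvent: bool = True) -> float:
    """Revenue for a farmer who locked `locked_price` by selling futures.

    Illustrative only: ignores basis risk, margin calls, and fees.
    """
    cash_sale = spot_at_sale * bushels
    futures_pnl = (locked_price - spot_at_sale) * bushels
    if not broker_solvent:
        futures_pnl = 0.0   # hedge gains stuck at the failed broker
    return cash_sale + futures_pnl

# Spot falls from the $7.00 locked level to $5.50 by sale time:
print(hedged_revenue(7.00, 5.50, 10_000))                         # 70000.0, lock held
print(hedged_revenue(7.00, 5.50, 10_000, broker_solvent=False))   # 55000.0, hedge gone
```

The hedge itself worked; what failed was a dependency the farmer never priced: the solvency of the intermediary holding the hedge.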

[01:16:47] Imane: So yes, I don't have an obvious solution, other than to say we are now living in an era where access to information is the easiest it has ever been. And I would say having access to fact-checked, solid information is very important, and understanding your dependencies is very important.

[01:17:12] Imane: And to be honest, not because I worked with them, but because this is really true: I think regulators and central bankers are doing a lot of work to publish as much information as possible around this, publishing maps of how the interconnection works, making speeches. Just to give credit where it's due.

[01:17:38] Josh: Yeah, that's a good point. And when the data is accessible, we need to use the data. I don't know if you've seen it, but there's a social media account I follow, and now you're going to give me a hard time, because if I say I follow it I should remember exactly; I want to say the account is called something like We Have the Data,

[01:17:59] Josh: right? And basically it's this social media person who, anytime something major is happening in the news and people are getting all riled up on one side or the other, or they have these assumptions, comes in and just says: this is the data. And so, a really good example:

[01:18:21] Imane: I see. So like very factual.

[01:18:23] Josh: over the last six months or so, US air travel has been getting a lot of flak, with people saying it's really unsafe and there were lots of plane crashes and all of this. People were like, oh my gosh, air travel is the most unsafe it's ever been, there are more plane crashes than ever. And he posted all the data and said, here are all of the parameters I used to get this data. And the data said: actually, today it's safer than ever to fly on an airplane in the US. Right? But the news was telling us it's super unsafe. So, kind of to your point, we're almost doing ourselves a disservice in any number of operations within our business: if we have access to the data, man, use the data.

[01:19:16] Imane: Yeah, for sure. If there is access to a solid, reliable source of information, then it should be used, because at least you're gaining some information, some insight. And diversifying your sources of information, as long as they are verified sources with solid reputations, can give insight too.

[01:19:38] Imane: It can give you different ways to look at the question, and then you apply your own judgment as another layer. And similarly, as you said in your example with your wife, it's good to have two different views, because then you can debate and sort it out. Maybe she convinces you of some of the risks and you convince her of some of the rewards, and then you find a good equation that works for you.

[01:20:05] Imane: And by doing that, both of you are calculating a lot of probabilities without even realizing it, right?

[01:20:16] Josh: Yeah.

[01:20:16] Josh: I love that as almost a final statement to this whole thing: yes, it's about the data, it's about managing risk, it's about the people side of things, it's about the math, it's about the philosophy. And at the end of the day, the more people we have involved, the more information we have involved, the more diversity we have involved in that,

[01:20:40] Josh: in everything from the diversity of the data that we have to the diversity of the personalities and the people

[01:20:46] Josh: like my wife and I making those decisions, the more likely you are to come to a more reasonable and appropriate judgment about what you’re gonna do.

[01:20:55] Imane: Yeah. And you know, I feel like I haven't delivered on the one-liners, but you just reminded me, because you mentioned math and philosophy. I may have one here, and it has a very philosophical touch. Do you know, I think, the two professions that are paid to speculate? Traders and philosophers.

[01:21:21] Imane: And that’s the thought I would leave you with here. So,

[01:21:26] Josh: Yeah, you’re not wrong.

[01:21:28] Imane: So philosophers speculate, traders speculate, and both are

[01:21:33] Josh: making guesses.

[01:21:34] Imane: One based on probabilities, and the other based on their view of the truth of our lives and our observations.

[01:21:45] Josh: Yeah.

[01:21:45] Imane: So,

[01:21:46] Josh: You know, man, I'll absolutely say this has been the most fascinating conversation I've ever had about risk management.

[01:21:53] Imane: Okay, well, I'm glad, I'm glad. I hope it was not boring.

[01:21:57] Josh: Yeah. You know, one of the things I've found really interesting about hosting this podcast over the years, right,

[01:22:03] Josh: is that I've had the ability, not through any testament of my own, to meet so many different people and personalities and have so many different guests. I've been really blessed by the incredible diversity of the guests I've been able to have on this show and get to know.

[01:22:19] Josh: And one of the things I always love is the types of personalities that are really good at talking about very complex subject matter, or very scary or challenging or abstract topics, and then, I'm just gonna say it bluntly, dumbing it down for somebody like me. I really appreciate the way you were able to put things into really simple analogies, like when you started with the whole risk of the lion behind the tree. I was able to run with that through the whole episode and tie things back to that analogy. So this was really, really helpful, and I absolutely think I walked away with some really insightful philosophical nuggets about how we can use math and data to be smarter about what we do. I really appreciate you coming and being a guest, and thank you.

[01:23:06] Imane: No, thank you. Thank you for inviting me.

[01:23:10] Josh: Well, before I let you go, I have two final questions for you.

[01:23:12] Josh: So, where do you go to get information? Like, what do you do to stay up to date about what’s happening in the world?

[01:23:18] Imane: I stay up to date in two main ways. One is by reading new thoughts that come out, books that come with new ways of thinking. You know, I mentioned Yuval Noah Harari; Daniel Kahneman is another one. When there are thoughts that challenge the status quo, I want to learn about them, the why and the how. Also, I would say,

[01:23:41] Imane: a reliable source I go to, and this is because I contributed to producing some of it in the past, so I know the rigor, the research, and the background that go into it, is regulatory reports. All the big central banks publish something called financial stability reports, and usually they're very well documented, very informational.

[01:24:05] Imane: I like to go there to see what they care about and what they're seeing in the system. It's very interesting and informative. For example, in the US, the New York Fed, I think, published a series of podcasts on community banks and community institutions.

[01:24:25] Imane: And it was fascinating and so interesting.

[01:24:28] Josh: have to check that out.

[01:24:30] Imane: Yeah, and I always find the information that comes from there very well researched. So those are the two sources: thought-provoking, thought-leadership books, usually from historians or scientists, and then regulatory reports, such as a financial stability report or risk reports by the central banks, which give you a system-level view of what's going on.

[01:24:58] Josh: That's cool, I'm definitely gonna check those out. Well, if people want to connect with you or learn more about your company, how can they do that?

[01:25:06] Imane: I have a website; it's logarisk.co. But they can also look at what I post on LinkedIn. I don't post that often, maybe once or twice a month, but I try to lay out my thoughts in an accessible way around complex topics. For example, the last one I posted on LinkedIn was on algorithmic trading.

[01:25:27] Imane: I tried to explain it in a simple way. The chess analogy we started the talk with, I also posted there, and I lay out the risks you can see by watching that video, for example. So yeah, LinkedIn is where I sometimes publish my thinking.

[01:25:47] Josh: Awesome. Well, thank you again, Imane for coming and being a guest on the Digital Banking Podcast.

[01:25:53] Imane: Yeah. Thank you so much, Josh, for inviting me. It was great talking to you as well.

[01:25:56] Thank you for listening to the Digital Banking Podcast, powered by Tyfone. Find more episodes on digitalbankingpodcast.com or subscribe on Apple Podcasts or wherever you get your favorite podcasts.
