Carolyn Röhm

Surely you can't build predictive models without data?!?

Updated: Dec 7, 2023

Here at Scores4All, we know just how tricky it can be to implement new scorecards. We know that it can be a very painful and frustrating exercise. We’ve experienced it ourselves and we’ve experienced it with our clients.


🗸 We know the drill.

🗸 We know that scorecards that have been in place for a while need to be replaced.


I clearly remember needing to replace application scorecards several years ago.


Kicking off the Scorecard Redevelopment Project

Find a vendor...

...surely this can’t be that difficult, I thought naively to myself. As it turns out, that wasn’t step 1. Step 1 was to rally the troops and ensure our internal decision-makers were on board with the fact that our scorecards needed to be replaced.


Easier said than done, partly because everyone knew that replacing our scorecards meant that a project would need to happen.


❔ And the issue with this...


Internal resources, or more specifically, a lack thereof 😔


Well, to be fair, we had great resources; it was just that they were all deployed on other important projects. Jumping the queue would mean deferring other important work.


So, no queue jumping!


Back to step 1 – getting the various signing powers on board with the fact that our scorecards needed to be replaced.


Now, all these individuals were experts in their own fields; it's just that those fields weren't retail consumer credit risk analytics, with a focus on scorecards. So my first, and possibly most important, job was to convince these people that we did need to replace our scorecards, despite the fact that they were still OK.


Once we had agreed that the scorecards did need to be replaced (before it became URGENT), well, then we started to get into the nitty-gritty.


No historical data required - Image by Carolyn Röhm

Getting into the Nitty-Gritty

As we did not have an internal scorecard development team, we would need to engage a scorecard vendor. And because of the nature of the project, it was decided that we ought to run a competitive process to determine who would develop the scorecards for us. To do that, we would definitely need a Project (yes, with a capital P).


The project was formed, and we engaged with various internal resources to ensure that all the i’s were dotted and the t’s were crossed.


➡️ We had a project manager – a wonderful woman who kept everything on track and running smoothly.


➡️ We had procurement on our side, to ensure that the vendor selection process was fair and that we achieved the best outcome.


➡️ IT were involved as they would need to test and implement the new scorecards.


➡️ My team were involved, as we would need to work closely with the vendor to ensure that the scorecard met expectations.

☺️ We were responsible for sourcing the data and making sure that any issues were identified and rectified – no small feat!

☺️ It was also a great opportunity for my team to learn about scorecard development.


As far as scorecard development and implementations go, that project was one of the smoothest I’ve ever seen.


😃 My team of analysts was keen to learn about scorecards, and our vendor was wonderful and engaged. Despite working remotely from a different country (halfway around the world), they ensured that we knew what was required and by when, assuming we wanted to meet our own deadlines.


😃 We had a fantastic project team, an engaged procurement team and some seriously switched-on and superb project and IT resources.


😃 It meant that the development, testing and implementation went smoothly.


What a relief that was.


Because we’ve seen some projects that haven’t run as smoothly.

😰 Projects where the data was difficult to extract and had to be pulled directly from the host system.


😰 Projects where our client's resources were far too thinly stretched and had too many competing (and conflicting) priorities.


😰 Projects where the necessary data resources weren’t available, and deadlines continuously made that whooshing sound as they flew by.


Here at Scores4All, we're absolutely determined to make scorecards, the gold standard of retail credit risk assessment, accessible to all organisations, and to break down the barriers that prevent them from using scores.

One of the key barriers to entry is the historical data requirement.


When developing scorecards, one of the key assumptions is that the future will resemble the past. What scorecard developers mean when we say this is that, as an organisation, you're targeting similar individuals going forward to those you have historically targeted. This is important because different segments of the population can, and do, perform quite differently. And while we're talking performance: the same population using different products will perform differently too.
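One standard way the industry checks whether "the future is like the past" is the Population Stability Index (PSI), which compares how a population distributes across score bands (or feature attributes) at two points in time. The post doesn't prescribe a specific measure, so treat this as an illustrative sketch with made-up bin counts:

```python
import math

def psi(expected_counts, actual_counts):
    """Population Stability Index between two binned distributions.

    expected_counts: bin counts from the development (historical) sample.
    actual_counts:   bin counts from the recent/scoring population.
    """
    e_total = sum(expected_counts)
    a_total = sum(actual_counts)
    total = 0.0
    for e, a in zip(expected_counts, actual_counts):
        e_pct = max(e / e_total, 1e-6)  # floor the proportions to avoid log(0)
        a_pct = max(a / a_total, 1e-6)
        total += (a_pct - e_pct) * math.log(a_pct / e_pct)
    return total

# An unchanged population gives PSI = 0; a reversed distribution
# pushes PSI well past the ~0.25 "significant shift" rule of thumb.
print(psi([100, 200, 300], [100, 200, 300]))  # 0.0
print(psi([100, 200, 300], [300, 200, 100]) > 0.25)  # True
```

The common rule of thumb reads a PSI below 0.1 as stable, 0.1 to 0.25 as worth investigating, and above 0.25 as a significant population shift.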


Because of this requirement, building scorecards needs data. Traditionally, scorecard developers have gathered as much historical data as necessary (and as little as possible) and used it to develop the scorecard(s).


This is fantastic, if you happen to have readily accessible data and a team of analysts who can pull it together and ensure that it is clean and merged properly. I'm sure any self-respecting scorecard developer could do that for you, but I assure you, they will charge you for the privilege.


So, what about organisations who really want to implement scorecards but fall at the data hurdle – where extracting the required data puts scorecards firmly in the "Too Hard" basket?


It’s probably the question we’re most frequently asked – what do you mean by “no historical data required”? Or its close cousin, “How do you build scorecards without historical data?”


As it happens, we (my cofounder Eva Neves and I) have spent many, many years (decades) working across industries, products, regions, and countries developing and implementing scorecards and performing retail consumer credit risk analytics.


And when you do this, paying really close attention to the data you're working with, you become intimately familiar with the patterns within it: you understand where the breakpoints are, and you understand the impact of features (independent variables, in old money) and their attributes on the target or performance variable (dependent variable, in old money). You learn which variables are highly correlated with one another and, of those highly correlated variables, which are the most predictive and stable.


In fact, one of my colleagues claimed that I have spidey senses, and I know that Eva does too.


It’s just what happens when you have spent as much time as we have working so closely with such similar data across industries, geographies and products. You spot and recognise familiar patterns and you spot the inconsistencies. And being super curious, you hunt down the ‘why’. Why are these patterns different when I might have expected them to be the same, or why are they similar when I expected something different?


Hmmmmmmmmmmmm, that’s interesting… is one of my favourite phrases of all time, usually muttered quietly to myself when finding something unexpected.


And so, when presented with a client who, for whatever reason, does not have historical data available, we rely on our knowledge, our experience and our expertise.


✔️ We create scorecards using this knowledge.

✔️ We implement them.

✔️ We start scoring records using the client’s data.

✔️ And then we use our proprietary AI and ML algorithms to customise the scorecards, making them unique to our clients.
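The Scores4All algorithms themselves are proprietary, but the first step above, an expert-defined points-based scorecard, can be sketched generically. All feature names, bands, and point values below are hypothetical, purely to show the shape of such a scorecard:

```python
# Hypothetical expert-defined scorecard: for each feature, the attribute
# (band) an applicant falls into earns a number of points.
SCORECARD = {
    "age_band": {"18-25": 10, "26-35": 20, "36-50": 30, "51+": 35},
    "residential_status": {"renting": 10, "with_family": 15, "owner": 25},
    "months_at_employer": {"0-6": 5, "7-24": 15, "25+": 30},
}
BASE_POINTS = 100  # offset so all scores land in a convenient range

def score(applicant):
    """Sum the points for the attribute the applicant holds on each feature."""
    total = BASE_POINTS
    for feature, bands in SCORECARD.items():
        # An unknown or missing attribute contributes 0 points here;
        # a production scorecard would assign missing values their own band.
        total += bands.get(applicant.get(feature), 0)
    return total

applicant = {"age_band": "26-35", "residential_status": "owner",
             "months_at_employer": "25+"}
print(score(applicant))  # 100 + 20 + 25 + 30 = 175
```

Once such a scorecard is live and scoring real records, the point allocations can be re-estimated from the accumulating outcome data, which is the customisation step the list above describes.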


The more clients use the scores and the more data they supply, the better the scorecards become. We continuously update our scorecards, ensuring that they are always performing as well as they possibly can.



Interested in hearing more about how Scores4All may be able to help you?






