By Julian Hobbs, CEO, Commercial Finance, Siemens Financial Services, UK

As Peter Thomas, chief operating officer of the Leasing Foundation, noted in 2017, “AI has a long history, a complex present, and an exciting future”.1 I couldn’t agree more. It is clearly a major new tool in our armoury and to ignore its power would be nonsensical.

We often forget about this long history when discussing artificial intelligence, but in fact, in 1950, English mathematician Alan Turing was already thinking about intelligent machines, how to build them, and how to test them.2 Today, numerous commentators define AI in relation to human intelligence, explaining that “AI is a machine’s ability to perform the cognitive functions we associate with human minds, such as perceiving, reasoning, learning, interacting with an environment, problem solving, and even exercising creativity.” This is where complexity arises, as humans grapple with all the known and potential connotations of this capability, but it is also what excites us, as we contemplate a future where machines can help us to save time, increase accuracy and eliminate the mundane.

AI comprises systems that examine decisions; by learning how those decisions were made and the parameters within which they were made, such systems can quickly recreate the human decision-making process. Of course, this great power comes with great responsibility, and the proper mechanisms must be in place to ensure ethical governance of AI-powered decisions. Without moral checks in place, AI use is likely to be constrained in many sectors – financial services in particular – and businesses may not be able to maximise potential gains.
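To make that idea concrete, here is a minimal sketch of recovering a decision rule from past human decisions. The credit scores, the approve/decline labels and the single-threshold rule are all illustrative assumptions, not a description of any real underwriting system:

```python
# Toy illustration: learn a decision rule from past human decisions.
# Each record is (credit_score, human_decision); we find the score
# threshold that best reproduces the approve/decline pattern.
past = [(520, False), (580, False), (610, True), (700, True), (740, True)]

def best_threshold(records):
    """Return the candidate cut-off that matches the most past decisions."""
    candidates = sorted(score for score, _ in records)
    def accuracy(t):
        return sum((score >= t) == approved for score, approved in records)
    return max(candidates, key=accuracy)

t = best_threshold(past)
print(t)  # the learned cut-off reproduces all five decisions
```

Real systems learn far richer rules over many more variables, but the principle is the same: the machine infers the parameters of past decisions and then applies them at speed.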

AI in action today

AI is already in widespread use across many sectors. Let’s look at a couple of industries where we are very active with lease finance. 

Among the attractions of AI, its ability to address skills and staff shortages plays a big part. In medical imaging3, for instance, AI is an invaluable support to human teams, enabling greater efficiency and productivity. In many countries, the number of imaging specialists is seriously insufficient to meet growing demand for radiology examinations.4 Every stage of the imaging process can be enhanced through AI, including scheduling appointments, accurately positioning patients on machines, and detecting and describing abnormalities.5

Human analyses are prone to subjectivity and error and may be affected by a range of external conditions on any given day. The healthcare sector is also a prime example of where the accuracy and consistency of AI-powered decisions is a great asset, addressing a key pain point: in the US, misdiagnosis is estimated to cause up to 80,000 hospital deaths each year6. This level of consistency will be a relief to medics, who can then focus on where their expertise is most needed, with reliable data to boot.

This accuracy and reliability also reduces expense, as in manufacturing, where AI is used for factory automation, order management, and automated scheduling.7 A Make UK survey of 135 manufacturers found that over half of companies (55%) have already implemented, or are planning to implement, AI and machine learning to automate decision-making processes and improve operational efficiency, though more than 80% of surveyed companies also expected it to take up to five years to see a positive impact from their investment.8 However, with manufacturing estimated to generate higher volumes of data every year than communications, finance and retail, this investment is a growing necessity.9

The impact of demand forecasting is only expected to grow with the introduction of AI-based tools, with global corporations such as IKEA using AI-driven demand sensing to manage output. Demand forecasting has enabled IKEA to lower costs, reduce waste and even decrease emissions, with the ultimate benefit going to the consumer by keeping prices low.10

Make UK, Manufacturing and Automation: Opening the Gates for Productive and Efficient Growth

So what about our own sector – financial services? AI systems have the advantage of being able to process vast data volumes, which can then be used to make more informed, personalised recommendations. For instance, by analysing data such as ‘customer behaviours, earnings transcripts, and trading patterns’ in real time, AI gives financial advisors and lenders useful insights for customised portfolio strategy and planning.11 This type of customisation is a critical differentiator in a highly competitive market. A better understanding of the individual allows financial institutions to push margins (and marginal decisions) while also managing risk. However, numerous issues may arise from personalised decision-making, especially in finance, as we discuss in this short piece.

All financial institutions are under pressure to adopt AI and the benefits are undeniable, but below are some of the reasons why we need to take this transition slowly and ensure that we keep employees and customers front of mind before we make any giant leaps.

Areas of exposure: AI & Financial Services

Employment ethics
Of course, automating tasks may lead to some job displacement, driving unemployment and therefore creating new ethical dilemmas. Those affected are likely to require reskilling to allow for re-employment, to fill other required roles and to prevent financial dependence, as well as all the social and human ills this can create. However, the use of AI can also support job satisfaction, as we reduce or even eliminate menial and manual tasks, streamline operations and improve efficiency, or take advantage of having a trusty AI ‘co-pilot’ to support day-to-day work. Though we don’t know precisely how this will develop, it is not unreasonable to anticipate benefits for work-life balance and standard of living.

How can we address this?
Industry and governmental organisations – not least the financial services sector – will need to support a fair transition and minimise the introduction of new or greater inequalities within society, since certain sections will be more affected than others.12

Transparency & accountability of decision-making

The complexity of many AI models makes it difficult to lift the bonnet and understand why a specific credit decision was made, for instance, and so transparency on the decision-making process may be lost.13 This creates legal vulnerabilities when decisions are queried, especially if there are significant consequences related to those decisions, and ultimately, this erodes trust in AI systems. It also raises the question of who is responsible for AI-generated decisions. 

How can we address this?
There must be clear lines of accountability to address ethical concerns and to allocate responsibility in case of errors, but also to protect users such as financial advisors.

You can’t just look inside a deep neural network to see how it works. A network’s reasoning is embedded in the behaviour of thousands of simulated neurons, arranged into dozens or even hundreds of intricately interconnected layers. The neurons in the first layer each receive an input, like the intensity of a pixel in an image, and then perform a calculation before outputting a new signal.14

Will Knight, MIT Technology Review
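Knight's description of a layer can be sketched in a few lines of Python. The network below is a toy with random weights, purely to illustrate why a model's "reasoning", spread across its weights, is so hard to inspect:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny 2-layer network: 4 pixel intensities in, 3 hidden neurons, 1 score out.
# Each neuron computes a weighted sum of its inputs, then a non-linearity.
W1, b1 = rng.normal(size=(3, 4)), np.zeros(3)
W2, b2 = rng.normal(size=(1, 3)), np.zeros(1)

def forward(pixels):
    hidden = np.maximum(0, W1 @ pixels + b1)  # first layer: inputs -> new signals
    return W2 @ hidden + b2                   # second layer: signals -> output score

score = forward(np.array([0.1, 0.9, 0.4, 0.2]))
# Why did the score come out this way? The answer is distributed across every
# weight in W1 and W2 - and real models have millions of them, not seventeen.
```

Even in this miniature case there is no single weight to point at when explaining the output; scale that up by several orders of magnitude and the transparency problem described above follows.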

Hidden bias in AI-generated decisions

AI is not completely safe from human subjectivity, being based on previous human decisions and real-world events. Should bias based on factors such as race, gender, or socioeconomic status creep into AI-powered decisions, whole lending communities may be subject to discrimination, perpetuating or even exacerbating existing prejudices in financial decision-making. Not only is this unfair, but it could also have a broader impact on entire sectors, and therefore, national economies. This is just one example of how the widespread use of AI in financial decision-making could introduce new systemic risks, affecting the stability and resilience of financial systems. 

How can we address this? 
AI systems must be designed with inclusivity in mind, to avoid excluding certain individuals or groups from financial services.
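A first-line check for the kind of bias described above can be as simple as comparing outcomes across groups. This sketch computes an approval-rate gap (a demographic-parity check); the group labels, sample decisions and five-point review threshold are assumptions for illustration, not a regulatory standard:

```python
# Illustrative fairness check: compare approval rates across groups and
# flag large gaps for human review before a model goes into production.
def approval_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

decisions = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]
rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
if gap > 0.05:  # flag gaps above 5 percentage points for human review
    print(f"Review needed: approval-rate gap {gap:.0%}")
```

No single metric proves a system fair, but routinely running checks like this is one practical way to design with inclusivity in mind.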

Data privacy & informed consent

Data privacy is likely already at the forefront of commercial concerns around AI, but it bears repeating given its importance. Protecting personal, financial and commercial data from unauthorised access and ensuring compliance with privacy regulations are critical ethical considerations.

How can we address this? 
Individuals (that could mean a sole trader) should be informed about how their data is used in financial decision-making and have the ability to provide informed consent. There also need to be measures that prevent malicious actors from attempting to exploit AI systems for fraudulent purposes. 

Regulatory compliance

This element brings together multiple factors outlined above, since regulators will be scrutinising financial services in each area. Financial institutions need to ensure that their AI systems comply with relevant laws and monitor the latest developments. One analysis of leading global banks revealed that 8 of the 23 largest banks in the US, Canada and Europe currently provide no publicly available ‘responsible AI’ principles.15 Only three “showed evidence of creating specific responsible AI leadership roles, publishing ethical principles and reports on AI, and partnering with relevant universities and organisations”.16

How can we address this? 
There is a need for harmonised regulatory frameworks that address the lack of clarity on how existing rules apply to AI17, so that all firms are working towards the same standard. It is partly with this aim that the World Economic Forum created the AI Governance Alliance in 2023, bringing together multiple global stakeholders to shape a responsible and sustainable approach to AI development and implementation.18

AI’s environmental footprint

As a final note, I feel we have to take all the evidence around the computing power that sits behind AI into account when thinking of our sustainable footprint. A recent article in the FT revealed the major effect of data centres on energy consumption and water usage (for cooling)19. Our digital lives therefore have an impact on the environment, and we must examine all the available data to manage it responsibly. Just another factor that needs recognising.

Combined powers

There’s no doubt that AI is exciting, and that it is a fundamental part of our future. As with the advent of the internet, we need to grab it with both hands. 

AI is here to stay and is – as we said right at the start – both powerful and exciting. We just need to make sure we are also careful and thoughtful. Responsible use of AI in financial decision-making requires a combination of ethical design, robust regulatory frameworks, transparency, and ongoing monitoring to ensure societal values and ethical principles are maintained.

There are many advantages to be gained from deploying AI, not least the possibility of focusing our efforts on the most interesting and fulfilling work and delegating all that is dull and repetitive to machines. However, questions remain on the topics discussed in this paper, meaning that continued human oversight remains essential to uphold inclusion, fairness and legality. 

While generative AI on its own has a great deal of potential, it’s likely to be most powerful in combination with humans, who can help it achieve faster and better work.20

McKinsey, What is AI?