AI: Innovation vs. Risk.

By Cameron Pearce, Consultant

AI can be a polarising subject regardless of industry context. While some hail the unprecedented advances of AI technologies, others are more sceptical about the implications of AI and the potential risks associated with it. Regardless of where you fall between these two camps, one thing is certain: the pace of innovation in AI presents one of the biggest challenges for organisations and regulators alike in ensuring the technology is used responsibly and ethically.

Data is at the core of AI, and every participant in the AI chain needs to be in control of how that data is used if AI is to remain a positive step forward on firm ground. Wherever there is a need to capture and store data, there is a corresponding need for regulatory solutions to adapt, catch up and future-proof.

Today, we are seeing a wide range of regulatory solutions coming from different parts of the world, which is a problem in itself: how can you regulate AI tools that operate internationally with a national regulator? The European Union’s AI Act, which passed its first vote in early June 2023, is a rules-based, highly prescriptive piece of legislation, banning practices such as real-time facial recognition. OpenAI lobbied the EU to water down the regulations, warning it would withdraw from Europe if they became too onerous. Here we see the EU’s scales tipping in favour of mitigating risk. OpenAI may have been the first to suggest withdrawing but, if the EU becomes a tricky environment to operate in, other AI businesses may follow suit, especially if other jurisdictions take a lighter touch.

What about the other side of the coin, where innovation tips the scales? We’re seeing a lighter-touch approach coming out of the US, where big tech reigns supreme and legislators are playing catch-up. Then comes the concern about a potential race to the bottom: creating the most attractive market for new AI companies to set up in, at the cost of regulation and, ultimately, protection.

All is not lost, however, as Lord Holmes commented in his recent talk at the London AI Summit. The UK has a strong history of regulating new technology while providing space for experimentation in a safe and responsible way. Lord Holmes cited the FCA’s Regulatory Sandbox, a methodology that has since been adopted in more than 50 jurisdictions.

Market commentators and AI experts expect the impact of AI regulation to be comparable to the introduction of GDPR in 2016. This will have profound implications for businesses, especially as the proposed penalties have a distinctly GDPR-like feel to them (maximum fines of €30 million or 6% of turnover).
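To make the scale of that exposure concrete, here is a minimal sketch of how such a cap is typically calculated, assuming the "whichever is higher" structure familiar from GDPR carries over (the function name and the example turnover figure are illustrative, not drawn from the legislation):

```python
def max_proposed_fine(annual_turnover_eur: float) -> float:
    """Illustrative only: the proposed AI Act cap described above,
    assuming the GDPR-style 'whichever is higher' mechanism."""
    FIXED_CAP_EUR = 30_000_000   # EUR 30 million
    TURNOVER_RATE = 0.06         # 6% of annual turnover
    return max(FIXED_CAP_EUR, TURNOVER_RATE * annual_turnover_eur)

# For a business turning over EUR 2bn, the turnover-based cap dominates:
print(max_proposed_fine(2_000_000_000))  # 120000000.0, i.e. EUR 120m
```

As the sketch shows, for any business turning over more than €500 million, the percentage-based limb, not the fixed €30 million, sets the ceiling.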

AI and recruitment:

As regulators race to implement safeguards for organisations and consumers, there is a knock-on effect on organisations and their legal teams. At Laurence Simons, we’ve seen an increase in demand for AI-literate lawyers and data privacy specialists. As companies progress further with technological transformation and the digitalisation of their data, processes and operations, the risks have increased. We have advised on senior appointments across several sectors for individuals who can embed the right principles-based approach to AI, permeating the senior levels of the business and leading the way on a considered, harmonised, global approach to AI integration. Any organisation looking to its future will need to weigh the same considerations to stay ahead of the curve, or risk playing catch-up to potentially stringent regulations.

Top talent will always have an eye on the risk horizon, getting the guard rails in place for what future regulation could look like and taking a commercial view on what can be done to insulate the business as best as possible. Crucially, however, as with compliance, ensuring there is a strong tone from the top on AI usage is vital. As with data law regulation, transparency and ‘explainability’ will underpin the regulatory approach to AI, ensuring all usage is well documented. In this vein, we expect to see AI audits climb the risk register at a rate of knots.

AI is also being used in the recruitment process itself. We are hearing of real-life examples of AI tools being used in the early stages to screen initial applications. An obvious benefit is time saving; however, as businesses look to prioritise and promote diversity and inclusivity, we are seeing AI’s inability to remove bias. Its use in the recruitment context has been branded “pseudoscience”: results that carry the appearance of scientific method, yet clearly lack any human touch or consideration. In an age when more emphasis than ever is placed on recognising the benefits of diversity in the workplace, the use of AI and its lack of nuance in this area could undermine an organisation’s policies.

Concerns about AI’s use in the recruitment process continue to be at the centre of robust debate, but AI is also a topic of discussion when it comes to the candidates themselves.

The best talent has a unique blend of operational awareness and strategic nous. Fundamentally, they understand the implications of the tech (both positive and negative) and the processes needed to mitigate risk when AI tools are used in day-to-day business. Pro-AI business leaders will look for a healthy risk appetite from their lawyers when it comes to AI implementation. While this tech remains new and innovative, the Navy SEALs’ maxim “get comfortable being uncomfortable” springs to mind (this isn’t to say effective lawyers, in the age of AI, need to be Navy SEALs). In the future (and indeed, the present), tech-savvy lawyers will be in higher demand, and those who are tech illiterate risk falling behind. We’ll see this not just in the technology sector, but across the board as technological advancements are implemented across everyday workstreams.

Balancing innovation against risk is no new challenge but, in the context of AI, it is certainly a unique one. Many businesses are already well underway in developing their internal approach to AI governance, and this is essential. Reacting only once regulations are about to be enacted will leave you struggling to find the (already) limited talent, as well as playing catch-up on auditing all the AI tools your business units have picked up along the way. This is a very real problem, and we have seen several tangible examples of it happening, even in businesses with advanced AI Ethics Committees.

If you would like to discuss AI, what other businesses are doing in their approach to it, or how to build a senior team of AI-literate lawyers and data privacy specialists, please feel free to get in touch.