Responsible Asset Owners Global Symposium

AI X IP Crime: the collaboration no one wanted

Posted on 9 May 2023 by Mishcon de Reya

"Humanity can enjoy a flourishing future with AI … Let's enjoy a long AI summer, not rush unprepared into a fall."

Those were the cautionary closing lines of a recent open letter from the Future of Life Institute, which called for a pause in the training of AI systems more powerful than GPT-4 (the AI system that powers ChatGPT). Google CEO Sundar Pichai has also warned about AI, stating that "It can be very harmful if deployed wrongly … and the technology is moving fast", while Elon Musk has cautioned that AI could lead to "civilization destruction". Of course, the same tech giants have fuelled the AI arms race by shifting corporate strategies and investing billions of dollars in a very short period, leading to an unprecedented proliferation of AI technologies.

Although a world-ending AI event like the one envisaged in 'The Terminator' may be far off, the abundance of AI tools carries serious commercial consequences of which businesses should be aware. One important issue is AI's ability to enhance online IP crime.

As was highlighted at this year's Regional IP Crime Conference in Dubai (organised in cooperation with Interpol), the proliferation of AI tools is making IP crime harder to tackle. Indeed, the European Union Intellectual Property Office (EUIPO) recently outlined how AI can be used to boost IP crime and circumvent certain safeguards. Whilst AI is being used to enhance criminal activities, the same technology can also be used to stop them. The following areas stand out as particularly concerning for businesses:

  • Marketing and distribution of counterfeit goods

  • Live streaming of copyright-protected digital content

  • Distribution of copyright-protected digital content

  • Theft of a company's trade secrets

  • IP rights registration and services fraud

  • Cybersquatting and typosquatting

What can be done to counter these threats?

As criminals have accumulated an AI arsenal, so too have law enforcement agencies, which are locked in a constant cat-and-mouse game to prevent and enforce against online IP crime. Interestingly, many of the same AI tools can be used on both sides.

For example, law enforcement can use computer vision to recognise infringement patterns, predict future infringements, detect the marketing of infringing goods, and detect and analyse fraudulent logos. Authorities can also use natural language processing to identify and block phishing attacks, analyse fraudulent behaviour and quickly recognise infringements. Machine learning can be used to detect fake online content, improve content recognition tools and identify infringement patterns.
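
By way of illustration, the sketch below shows one of the simplest forms of such pattern-matching: flagging lookalike (typosquatted) domains by their string similarity to protected brand names, using only Python's standard library. The brand names, candidate domains and 0.8 threshold are hypothetical assumptions for this example; real monitoring tools draw on far richer signals, such as visual similarity, registration data and page content.

```python
# Minimal illustrative sketch: flag domains whose registrable label closely
# resembles, or contains, a protected brand name. All names and the threshold
# below are hypothetical examples.
from difflib import SequenceMatcher

PROTECTED_BRANDS = ["examplebrand", "acmegoods"]
CANDIDATE_DOMAINS = ["examp1ebrand.com", "acmegoods-outlet.net", "unrelated.org"]

def similarity(a: str, b: str) -> float:
    """Return a 0..1 similarity ratio between two strings."""
    return SequenceMatcher(None, a, b).ratio()

def flag_lookalikes(domains, brands, threshold=0.8):
    """Yield (domain, brand, score) where a domain label resembles or contains a brand."""
    for domain in domains:
        label = domain.split(".")[0].lower()  # compare the registrable label only
        for brand in brands:
            score = similarity(label, brand)
            if brand in label or score >= threshold:
                yield domain, brand, round(score, 2)

if __name__ == "__main__":
    for domain, brand, score in flag_lookalikes(CANDIDATE_DOMAINS, PROTECTED_BRANDS):
        print(f"{domain} resembles '{brand}' (similarity {score})")
```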

In addition, expert systems, which imitate human decision-making to solve complex problems, can help authorities identify the best strategy for protecting a system against specific vulnerabilities.
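
As a rough illustration of that rule-based reasoning, the toy example below maps hypothetical findings about a business's exposure to suggested mitigations. The rules, findings and recommendations are invented for this sketch and do not reflect any real security product's knowledge base.

```python
# Toy rule base: each rule pairs a condition over observed findings with a
# recommended action. Both the rules and the findings are hypothetical.
RULES = [
    (lambda f: f.get("mfa_enabled") is False,
     "Enable multi-factor authentication on administrative accounts."),
    (lambda f: f.get("lookalike_domains", 0) > 0,
     "Begin takedown or UDRP proceedings against the flagged lookalike domains."),
    (lambda f: f.get("unpatched_services", 0) > 0,
     "Prioritise patching of internet-facing services."),
]

def recommend(findings: dict) -> list:
    """Return the recommended actions whose conditions match the findings."""
    return [action for condition, action in RULES if condition(findings)]

if __name__ == "__main__":
    observed = {"mfa_enabled": False, "lookalike_domains": 3, "unpatched_services": 0}
    for action in recommend(observed):
        print("-", action)
```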

Conclusion

The majority of these crimes are not new. However, whereas criminals of the past were forced to use basic software or even manual techniques to carry out their activities, modern offenders benefit from the anonymity, scalability, speed and user-friendliness of AI to enhance the frequency and effectiveness of their crimes.

Whilst international conferences facilitate cross-border cooperation, more knowledge-sharing among law enforcement agencies is required to seriously tackle these offences globally. Authorities, as well as businesses, must also upskill their workforce, provide training and familiarise themselves with AI technologies in order to fight online IP criminals effectively.

On a national level, the UK Government published its AI white paper last month, setting out a proposed framework for regulating AI while encouraging innovation and unleashing the technology's benefits. As we discussed in our review of the white paper, it proposes high-level overarching principles for AI regulation, but no new legislation and no new regulator (though a statutory duty for existing regulators to have due regard to these principles may be introduced in future). It also proposes that regulators work together to produce joint guidance for businesses, to provide clarity and make compliance easier whilst businesses continue to develop innovative products. As noted in our report, the white paper's approach is flexible and pragmatic but lacks certainty. Commentators have also remarked that it does not deal with important issues such as the allocation of liability for AI, the risk of overlapping regulatory jurisdictions, and the uneven enforcement powers across different regulators.

It is encouraging to see that experts, authorities and governments are considering how best to manage the growth of AI. At the same time, businesses must be proactive and take appropriate steps to ensure that they can prevent, or at least mitigate the risk of, increasingly sophisticated cyber-attacks.

For more information on our cyber security and investigations practice, see MDR Cyber.