Dear all,
Belgravia Law’s October Newsletter recaps announcements from our firm and legal news from the UK and other jurisdictions. It brings together updates on our recent activities, legal news and trends, a case law digest and beyond-law insights.
We hope this is both useful and of interest to you and your colleagues.
Kind regards
Belgravia Law
The Online Trade Sanctions Interface (“OTSI”) is officially live, offering a streamlined application process for trade services sanctions licences. Moving forwards, OTSI will handle the processing of all new applications for licences to provide sanctioned trade services.
OTSI’s introduction marks a significant development in the distribution of responsibilities between the three licensing bodies within the Department for Business and Trade (“DBT”).
Application Requirements
Applicants must provide a clear definition of the services intended for licensing, demonstrating how these correspond with one or more categories of prohibited services under relevant sanctions regulations. Applications should reference these regulations carefully.
Applicants must also explain how the provision of these services would remain in line with the purposes of the sanctions. Specific activities, referred to as licensing ‘considerations’ or ‘grounds’, are likely to qualify for licences. If an application is made under these pre-defined grounds, it is important to show how the services fall within these considerations. Applications may also be made for services that do not fit within these grounds, provided they are consistent with the broader purpose of the sanctions.
Licensing Grounds
A comprehensive list of pre-defined licensing grounds is available in the statutory guidance for each sanctions regime. For professional and business services supplied under the UK Russia Regulations, grounds include:
Services essential for delivering humanitarian assistance
Services related to the production or distribution of food for civilian benefit
Medical and pharmaceutical services for civilian use
Civil society activities promoting democracy, human rights or the rule of law in Russia
Services necessary for non-Russian entities to divest from or wind down business operations in Russia
Services to a person connected with Russia by a UK parent company or UK subsidiary
Services necessary for the urgent prevention or mitigation of serious risks to human health, safety or infrastructure
Services ensuring critical energy supply
Legal advisory services may qualify for a licence if the relevant activity meets the licensing ground applicable under UK sanctions.
When a Licence Is Not Required
UK sanctions regulations apply to all individuals and businesses in the UK, as well as UK nationals and businesses operating abroad, under what is termed a “UK nexus”. A licence is not required to provide services that are either not prohibited by sanctions regulations or covered by an exception.
Who Is Covered by a Licence
A licence may provide authorisation for:
A business with a UK nexus, covering employees, members, partners, consultants, contractors, officers and directors
Named individuals with a UK nexus working for a business without a UK nexus
An individual, such as a sole trader
Third parties, such as legal advisers, may apply on behalf of businesses or individuals by uploading a letter of instruction.
Licence Renewal
A new licence application is required if renewal is needed or if there are changes to the details of the licence, such as the scope of services or parties involved.
Stay Updated
The updated gov.uk pages reflecting OTSI’s new licensing responsibilities can be accessed for various sanctions regimes, including:
Complying with professional and business services sanctions related to Russia
Providing professional and business services to a person connected with Russia
Russia sanctions: guidance
The UK Sanctions List
Trading under sanctions with Russia
UK sanctions relating to Venezuela
Venezuela sanctions: guidance
For more details, visit the OTSI online licence application service.
Aiteo Eastern E & P Company Limited v Shell Western Supply and Trading Limited & Others [2024] EWHC 1993 (Comm)
The recent Aiteo v Shell case has brought attention to the disclosure obligations of arbitrators and concerns over perceived bias. Aiteo Eastern E & P Company challenged four arbitration awards, relying on undisclosed ties between tribunal member Rt Hon Dame Elizabeth Gloster DBE and the law firm representing Shell, raising critical questions about impartiality in arbitration.
On Monday 28 October 2024, General Licence INT/2024/4671884 expired. A new General Licence (INT/2024/5334756) specifically covering legal services came into effect at 00:01 on Tuesday 29 October 2024.
The new General Licence, along with its reporting forms, is accessible on the Legal Services General Licence page on GOV.UK. Individuals intending to rely on this licence should review it thoroughly to understand the definitions, permissions and requirements for its usage. A publication notice highlights key changes to guide users of this licence.
The General Licence can be accessed through this link.
Jurisdiction: Sweden
The SCC Arbitration Institute (“SCC”) has published non-binding guidance on the use of AI in cases administered under the SCC Rules. Acknowledging that AI is a transformative technology with substantial benefits, the guide aims to remain adaptable and versatile in its approach to AI while contributing to the development of global best practices. The SCC encourages arbitral tribunals, when using AI, to consider factors such as confidentiality, quality, integrity and non-delegation of decision-making.
Jake Lowther, Specialist Counsel at SCC, stated:
“AI is the future! Its current application offers substantial benefits for arbitration users, particularly in terms of cost and time efficiency. We anticipate a dramatic increase in AI usage.”
Harnessing AI’s Potential
The guide adopts a light-touch approach, emphasising the importance of flexibility while fostering the development of best practices. The SCC advises arbitral tribunals and relevant participants in arbitration to keep the following key factors in mind:
Confidentiality: Certain AI applications may inadvertently compromise arbitration confidentiality. It is essential for participants, especially arbitral tribunals, to understand how any data input is processed when employing AI tools.
Quality: Tribunals must ensure that utilising AI does not compromise the quality of their decisions. Human oversight is necessary to ensure appropriate review and verification of AI outputs prior to their incorporation into the arbitration.
Integrity: Transparency and accountability are vital for maintaining integrity. Tribunals are encouraged to disclose any use of AI in fact-finding, legal interpretation or application of law to ensure the parties' rights to be heard.
Non-delegation of Decision-Making Authority: While AI can assist in decision-making, it must not replace the tribunal’s role. Arbitral tribunals retain sole authority over decisions and the reasoning behind them.
The Guide to the use of AI in cases administered under the SCC Rules can be accessed here.
State of Libya v. Siba Plast – Decision of 1 October 2024
Overview of the Judgment
On 1 October 2024, the Paris Court of Appeal’s International Trade Chamber overturned the enforcement of a €280 million arbitral award against the State of Libya. The Court found that Libya had not been properly notified of the arbitration proceedings, leading to a breach of the principle of adversarial proceedings under French law. The decision emphasises the importance of procedural fairness in international arbitration, especially when dealing with State parties.
Case Background
The dispute stemmed from five commercial contracts concluded in 2012 between an Italian company and the Libyan National Transitional Council, acting on behalf of the Libyan State. The Italian company’s rights under the contracts were later assigned to Siba Plast, a Tunisian company. Siba Plast alleged that Libya failed to fulfil its contractual obligations and initiated ad hoc arbitration in Tunisia, invoking the arbitration clause in the contract amendments. In Libya’s absence, the arbitral tribunal issued an award in November 2014, ordering Libya to pay €280 million.
Procedural Missteps and Legal Issues
After obtaining an enforcement order in France in March 2017, Siba Plast sought to enforce the award by seizing Libyan state-held bank accounts. Libya appealed the enforcement, claiming it was unaware of the arbitration as it was not properly notified. Libya's appeal cited Articles 1520 and 1525 of the French Code of Civil Procedure (“FCCP”), focusing on the breach of adversarial proceedings, a fundamental principle under French law.
Court’s Decision and Reasoning
The Paris Court of Appeal ruled that Siba Plast had failed to ensure Libya was duly notified of the arbitration proceedings. The court noted two critical issues: the email addresses used for notifying Libya were incorrect and did not match the designated contacts under the contract; and the notification methods stipulated in the contract were only applicable to contractual matters and not to arbitration, which is considered separable from the underlying contract.
The Tunisian Arbitration Code, which governed the arbitration, did not allow for electronic communications for procedural matters. Even if it had, the email addresses used were invalid.
Given these deficiencies, the court determined that the State of Libya had been deprived of the opportunity to defend itself, violating the principle of adversarial proceedings. Consequently, the enforcement order was overturned and Siba Plast was ordered to pay Libya’s legal costs.
Practical Implications
Adherence to principles of adversarial proceedings is vital for the enforcement of arbitral awards in French courts. The ruling underscores that missteps regarding notifications can lead to the annulment or non-enforcement of arbitral awards. Parties should exercise caution when notifying State parties, ensuring that they comply with all contractual and procedural requirements to avoid similar outcomes. The State of Libya v. Siba Plast illustrates the strict approach French courts take towards procedural fairness in arbitration.
We are pleased to announce that Benjamin Wells undertook a successful trip to Bishkek, commissioned by the Westminster Foundation for Democracy, from 29 September to 4 October 2024. The trip was aimed at addressing legal and economic reforms within the framework of UK-Kyrgyz relations.
During his visit, Benjamin engaged with a diverse group of stakeholders, including government officials, local business leaders and legal professionals. The trip featured several key events:
Key Topics of Discussion
Roundtable Discussions: These forums provided a platform for in-depth dialogue on pressing issues such as legal reform, investment opportunities and the importance of strengthening bilateral relations. Participants discussed strategies to enhance co-operation between the UK and Kyrgyzstan, emphasising areas such as trade, governance and the rule of law.
Workshops and Presentations: Benjamin led workshops focusing on best practices in legal frameworks and economic development, sharing insights from UK experiences that could be beneficial for Kyrgyz reforms. His presentations highlighted the role of democratic institutions in fostering a conducive environment for business and investment.
Networking Receptions: Informal networking events allowed Benjamin to foster relationships with influential figures in the Kyrgyz legal and business communities. These interactions not only promoted dialogue but also facilitated collaboration between UK and Kyrgyz counterparts.
A comprehensive report detailing the findings and insights gathered during Benjamin’s trip will be issued. This report will include recommendations for potential UK support and mentoring initiatives tailored for Kyrgyz counterparts, developed in consultation with the British Embassy. It aims to provide actionable steps for enhancing legal frameworks and economic policies that align with international standards, ultimately strengthening the UK-Kyrgyz partnership.
Benjamin’s visit to Bishkek represents a significant step towards fostering robust UK-Kyrgyz relations and advancing meaningful legal and economic reforms that will benefit both nations.
Belgravia Law is actively monitoring developments in AI, exemplified by our Ceyda Ilgen’s participation in the Oxford Generative AI Summit held on 17-18 October 2024 at Jesus College, University of Oxford.
The event featured a series of keynotes, panels and lightning talks from distinguished experts across AI, business, government and media, providing a rich platform for knowledge exchange. Ceyda participated in discussions on the rapid evolution of AI technologies and their implications for various sectors, including legal services.
During the summit, Ceyda emphasised our firm’s keen interest in AI applications in arbitration, highlighting how we are not only exploring innovative uses of AI in dispute resolution but also monitoring compliance with emerging AI regulations. This includes preparing relevant contracts for startup companies to ensure they can navigate the legal landscape effectively while leveraging AI technologies.
Ceyda received positive feedback from attendees who appreciated the unique legal perspective Belgravia Law brings to the discussion, reinforcing our commitment to being at the forefront of AI regulation and its practical implications in the legal field.
Infrastructure Services Luxembourg SARL & Anor v. Kingdom of Spain; Border Timbers Ltd & Anor v. Republic of Zimbabwe [2024] EWCA Civ 1257
The Court of Appeal ruled that Spain and Zimbabwe, as contracting parties to the 1965 Convention on the Settlement of Investment Disputes between States and Nationals of Other States (the “ICSID Convention”), had submitted to the jurisdiction of the English courts through Article 54 of the ICSID Convention. Therefore, the states could not invoke state immunity to resist the registration of awards rendered by the International Centre for Settlement of Investment Disputes (“ICSID”).
Generative AI refers to algorithms capable of generating new content, such as text, images and music, by learning patterns from existing data. Popular examples include ChatGPT, DALL-E and other large language models (“LLMs”).
Traditional AI focuses on classification and decision-making based on existing data, while generative AI creates new data and outputs by understanding patterns from the provided dataset.
Generative AI poses new challenges around intellectual property, privacy, content liability and data ownership. Concerns also include biases, misuse and security risks arising from AI’s capability to generate synthetic content that may appear authentic.
Recent lawsuits have targeted organisations for alleged copyright infringement due to the unauthorised use of copyrighted works in training datasets. There have also been privacy-related claims over misuse of personal data and the inappropriate deployment of generative AI tools.
Regulators globally are increasingly focused on establishing frameworks for AI accountability and transparency. For example:
The EU AI Act proposes strict measures for high-risk AI systems
The UK is taking a sectoral approach to AI regulation with a white paper outlining guidelines for different industries
The US has emphasised voluntary guidelines with recent legislative developments at state levels on AI governance
Key ethical issues include:
The potential for AI-generated misinformation or deepfakes
The risk of reinforcing biases and discrimination present in datasets
The lack of accountability for the consequences of AI-generated content
Businesses should be aware of requirements such as:
Demonstrating transparency in AI models and disclosing the use of generative AI tools
Implementing robust data protection policies to secure training data and model outputs
Conducting AI impact assessments to mitigate risks related to discrimination, security and ethical concerns
Generative AI’s outputs raise questions about ownership. If AI-generated works are based on training data owned by third parties, issues of copyright infringement and originality come to the forefront. Recent legal cases explore whether AI-generated content can be copyrighted and who holds the rights in relation to it.
To safeguard their IP, businesses should:
Review the terms of use for generative AI models and third-party datasets
Implement internal IP policies to monitor AI-generated content and its sources
Secure appropriate licences for training data and validate AI output for potential IP conflicts
Human oversight remains crucial to ensuring generative AI outputs are accurate, ethical and legally compliant. Businesses are advised to implement review processes, especially in high-risk areas such as content creation, marketing and automated decision-making.
The EU AI Act introduces stringent compliance obligations for high-risk AI applications, which can include generative AI systems. These obligations entail risk management measures, transparency requirements and enhanced scrutiny of the training data used.
Businesses should establish cross-functional AI governance teams, engage in regular compliance audits, and stay updated on evolving regulatory standards. Legal teams should focus on risk assessments and scenario-planning based on the anticipated regulations.
Generative AI, commonly referred to as “GenAI”, is no longer a futuristic concept but a technological reality reshaping industries and redefining how businesses operate. It represents a class of artificial intelligence that can create new content based on the analysis of existing data.
This ability ranges from generating text, images and videos to crafting financial strategies and automating workflows. The integration of GenAI into business strategies opens up new avenues for innovation, efficiency and market competitiveness. It also introduces complex legal, regulatory and ethical considerations that must be navigated carefully.
GenAI’s commercial potential is vast. Marketing and content creation processes are experiencing a shift due to AI-driven tools which can produce tailored content with unprecedented speed and precision. Marketing teams now use GenAI models to generate everything from social media posts and personalised advertisements to customer support messages. The benefits extend beyond marketing, as manufacturers employ GenAI models to streamline product design and prototyping processes. In fields like automotive design, fashion and architecture, AI-generated prototypes enable quicker iterations and optimise results through advanced simulations.
The financial sector is another beneficiary of GenAI. Financial institutions leverage the technology to generate insights, analyse market trends and predict future scenarios based on historical data. An illustration of its capabilities is evident in the customer support domain, where automated systems powered by GenAI have become capable of generating human-like responses to customer queries. Chatbots and virtual assistants, once limited to pre-scripted answers, are now adaptive, enhancing customer experiences through personalised and responsive interactions.
With such expansive use, businesses are recognising the opportunities presented by GenAI. By automating labour-intensive tasks and augmenting human capabilities, companies achieve substantial improvements in efficiency and cost reduction. Moreover, data-driven decision-making becomes more accessible, allowing organisations to refine their strategies using insights generated from large datasets. However, GenAI’s implementation comes with legal implications and compliance challenges.
A pressing legal question concerns intellectual property rights. Determining ownership of AI-generated content has become a subject of legal scrutiny, raising concerns over whether such creations qualify for copyright protection. Additionally, using datasets for training models without proper consent could potentially infringe on existing copyrights, inviting legal complications. Privacy and data protection issues are also relevant given the extensive use of personal information in training GenAI systems. Businesses need to align their practices with stringent regulations like the GDPR and CCPA to safeguard personal data and avoid significant penalties.
When GenAI content causes harm or produces inaccurate information, establishing responsibility is challenging. Questions arise as to whether developers, operators or the AI model itself should be held accountable. These uncertainties make it essential for businesses to develop clear policies and frameworks for accountability and to ensure compliance with evolving regulations.
Regulatory developments are already underway, as policymakers attempt to establish comprehensive guidelines for the responsible use of AI technologies. For instance, the EU has introduced the EU AI Act, a framework that categorises AI applications based on their level of risk. High-risk applications face stricter compliance measures aimed at enhancing transparency, accountability and fairness. In contrast, the UK has adopted a sector-specific approach, setting guidelines tailored to various industries while maintaining oversight of critical areas. The United States is exploring policy developments, with state-level regulations highlighting the growing focus on AI governance.
Generative AI raises ethical dilemmas. A significant issue is the potential for bias and discrimination. AI models trained on historical data can inadvertently perpetuate existing biases, posing risks in areas such as hiring, lending and criminal justice. Businesses should regularly audit their AI systems to identify and mitigate biases.
The rise of misinformation and deepfakes is another concern. With GenAI’s ability to create hyper-realistic content, the potential for spreading false information is high. To address this, companies should implement robust detection mechanisms and ethical guidelines to maintain trust in their generative AI applications.
Conclusion
Generative AI represents a paradigm shift in business operations, creating new possibilities for efficiency, innovation and strategic growth. However, the successful integration of GenAI requires careful consideration of legal, regulatory and ethical factors. Businesses that strike the right balance between technological advancements and responsible practices can derive significant benefits from generative AI, contributing to an innovative future within the boundaries of law and ethics.
For all enquiries please write to: contact@belgravia.law.
Download this newsletter in PDF