In this blog, we look at the steps the European Union is taking to regulate artificial intelligence.
In this blog, Inline Policy looks at how the UK, home to many promising AI start-ups, is seeking to balance certainty with flexibility in its regulatory framework.
As London Tech Week gets under way, we take a look at the key debates in UK tech policy and recap the status of all the major new regulatory proposals.
The European Commission has proposed new rules providing compensation for damage caused by AI systems. Below, we summarise the two key instruments which users and providers of AI systems will need to comply with.
Inline’s previous blog explored the UK Government’s aspirations for the technology and digital sectors and its legislative plans to make the UK a global leader in the space. In this blog, we look at the organisations responsible for regulating the UK’s tech sector, focusing on their powers and areas of responsibility. We highlight some of the regulatory issues these regulators are dealing with, which we advise tech companies to monitor.
With the terms of the UK’s exit from the EU largely settled, the UK Government has begun to turn its attention to what it wishes to do with the powers that have been repatriated from the European Union. This blog explores the Government’s aspirations for the technology and digital sectors and its legislative plans to make the UK a global leader in this area.
In recent years, artificial intelligence (AI) has become embedded in many business operations, public services, and political processes. Yet as AI becomes an ever greater part of people’s lives, suspicions have mounted as to whether it is a force for good, or whether its algorithms create bad outcomes for some of those on the receiving end of its calculations. In a 2020 survey by KPMG, only 26% of UK citizens were willing to rely on information provided by an AI system or to share data with one. To combat this growing mistrust, the UK Government published its new Algorithmic Transparency Standard in late November 2021. This blogpost introduces the standard, evaluates its potential, and points to the questions which remain open.
In the wake of the Covid-19 pandemic, enabling a seamless, contactless traveller journey is becoming a matter of necessity rather than an option. We provide an account of the regulatory challenges and opportunities for biometric technology companies facilitating a seamless traveller experience.
As we explained in our previous blog, European policymakers are considering whether to revise the 1985 Product Liability Directive to make it ‘future-proof’ and ensure it remains fit for purpose amidst the growth of new technologies. Both the European Commission and the European Parliament have addressed the issue in various formats and within different frameworks: as part of a broader revision of European product safety regulation, and as part of a planned regulation on Artificial Intelligence, which aims to address the legal challenges of new automated technologies.
The continued growth and application of new technologies raises new challenges for regulators and policymakers. Alongside new policy frameworks, existing regulations need to be re-evaluated to ensure that they remain proportionate, effective, fit-for-purpose and ‘future proof’. One such regulation is the Product Liability Directive, with growing calls for it to be reviewed.
The tech sector, like all other sectors of the economy, has been heavily impacted by the COVID-19 crisis, but not necessarily in a negative way. The pandemic could in fact represent an opportunity for five key tech sub-sectors to innovate their business models and show policymakers the potential of new technologies for good during (and beyond) global crises.
The EU has set great ambitions around artificial intelligence, seeking to accelerate innovation and foster a much more competitive environment. But as the example of the copyright directive shows, much can go wrong for Europe’s AI businesses if they do not pay attention to what will be proposed.
The European Union is working on a new regulatory framework for artificial intelligence that seeks to ensure better consumer protection while enhancing Europe’s technological competitiveness. The risk is that it becomes merely a duplication of already-existing practices and regulations.
Facial recognition technology is controversial amongst consumers, and a lack of clear rules about how to apply it has caused concerns amongst both the public and regulators. However, the benefits in certain contexts are there for all to see, and the race is on between business and lawmakers to shape the regulatory landscape.
In the second of our regular Data Policy Trackers, we cover the key political and regulatory changes, trends and developments impacting the data sector.
In the first of our new regular Data Policy Trackers, we cover the key political and regulatory changes, trends and developments impacting the data sector.
Nine months after "GDPR day" our new briefing paper assesses the fallout of the new EU data protection regime, the emerging trends in regulation of data sharing and how industry is responding.
Rapid technological transformations driven by US and Chinese companies are posing a serious challenge to Europe's policymakers. Third way politics looks set to shape much of the regulatory response.
Governments and regulators are actively considering how competition policy should respond to the growth of the digital economy. A forthcoming report from the European Parliament provides an insight into the state of the debate in Brussels.