As we explained in our previous blog, European policy makers are pondering whether to revise the 1985 Product Liability Directive to make it ‘future-proof’ and ensure it remains fit for purpose amidst the growth of new technologies. Both the European Commission and the European Parliament have addressed the issue in various formats and within different frameworks, either as part of a broader revision of European product safety regulation or as part of a planned regulation on Artificial Intelligence, whose aim would be to address the legal challenges of new automated technologies.
The continued growth and application of new technologies raises new challenges for regulators and policymakers. Alongside new policy frameworks, existing rules need to be re-evaluated to ensure that they remain proportionate, effective, fit for purpose and ‘future-proof’. One such instrument is the Product Liability Directive, which faces growing calls for review.
The tech sector, like all other sectors of the economy, has been heavily impacted by the COVID-19 crisis, but not necessarily in a negative way. The pandemic could in fact represent an opportunity for five key tech sub-sectors to innovate their business models and show policy makers the potential of new technologies for good during (and beyond) global crises.
The EU has set out great ambitions for artificial intelligence, seeking to accelerate innovation and foster a far more competitive environment. But as the example of the copyright directive shows, much can go wrong for Europe’s AI businesses if they do not pay attention to what will be proposed.
The European Union is working on a new regulatory framework for artificial intelligence that seeks to ensure better consumer protection while enhancing Europe’s technological competitiveness. The risk is that it becomes little more than a duplication of already-existing practices and regulations.
Facial recognition technology is controversial amongst consumers, and a lack of clear rules about how to apply it has caused concerns amongst both the public and regulators. However, the benefits in certain contexts are there for all to see, and the race is on between business and lawmakers to shape the regulatory landscape.
In the second of our regular Data Policy Trackers, we cover the key political and regulatory changes, trends and developments impacting the data sector.
In the first of our new regular Data Policy Trackers, we cover the key political and regulatory changes, trends and developments impacting the data sector.
Nine months after "GDPR day" our new briefing paper assesses the fallout of the new EU data protection regime, the emerging trends in regulation of data sharing and how industry is responding.
Rapid technological transformations driven by US and Chinese companies are posing a serious challenge to Europe's policymakers. Third way politics looks set to shape much of the regulatory response.
Governments and regulators are actively considering how competition policy should respond to the growth of the digital economy. A forthcoming report from the European Parliament provides an insight into the state of the debate in Brussels.
The UK Government has engaged a panel to review competition in digital markets, and one of the key themes is the concentration of 'big tech'. With the panel tasked with consulting industry and reporting by early 2019, companies seeking to influence the panel's thinking need to get started as soon as possible.
Breaking up big tech has become the argument of choice for those concerned about the concentration of power and the practices of large multinationals dominating the digital sphere. But does it make sense?
MEPs put thousands of questions to the European Commission each year; during the 2009-2014 term of the European Parliament, over 10,000 questions were tabled. At Inline, our job is to cut through the noise, so here are the five most important questions for the tech sector in 2018.
Another day, another report on artificial intelligence? Not quite.
Published today, the 180-page volume by the House of Lords’ Select Committee is more than just the latest contribution to the emerging debate about the opportunities and challenges of AI. Led by experienced lawyers such as Lord Clement-Jones and renowned scholars like Lord Anthony Giddens, former director of the London School of Economics, it might well prove influential both in the UK and beyond.
Business has long been convinced about the many opportunities offered by artificial intelligence (AI). Reports abound with estimates about the added value that applications powered by AI can create in the future. Literally everyone is on to it, from the dominant tech players in Silicon Valley all the way to established companies in the transport and utilities sectors. Even public authorities are joining the race. Countries as diverse as China, Canada, Germany and Singapore run significant programmes investing heavily in AI research capabilities or experimenting with early applications.
Robots are rapidly gaining public visibility as their development accelerates in conjunction with recent innovations in the domains of artificial intelligence, machine learning, machine-to-machine and machine-to-human interaction.