Multilateral

  • The Special Rapporteur on the situation of human rights in the Palestinian territories occupied since 1967 released a report that “investigates the corporate machinery” sustaining the war in Gaza and outlines international law mechanisms for holding the private sector, and its executives, accountable. Tech companies, including IBM, Microsoft, Google, and Amazon, are mentioned specifically.

  • In another important first step, the International Labour Organization has committed to developing binding global standards on decent work in the platform economy. A majority of ILO member states backed the decision at the International Labour Conference. (Delegates from Switzerland, India, and the US opposed the move.) Global minimum labor standards in this area could be transformative for digital labor platforms, but much will depend on how the standards define what constitutes a labor platform and who qualifies as a platform worker. The final round of negotiations will take place at the 2026 International Labour Conference.

United States 

  • MIT Technology Review outlines issues you may have missed in the White House’s AI Action Plan, including weakening the FTC’s oversight of AI development.

  • New York State has become the first US state to ask companies to disclose when AI contributes to mass layoffs. As of March, employers filing notice under the state’s WARN (Worker Adjustment and Retraining Notification) system, which must be submitted at least 90 days before a mass layoff or plant closure, are asked to indicate “if technological innovation or automation” was a reason for the job losses. This is significant because it is the first official step toward measuring and perhaps eventually regulating AI’s impact on the labor market.

Europe

  • The European Commission published its final Code of Practice on General-Purpose AI. It sets out detailed expectations for companies around transparency, copyright, and measures to mitigate systemic risks. The code is voluntary, but signatories will gain a reduced administrative burden in exchange for disclosing training data, establishing risk management frameworks, and avoiding unauthorized use of copyrighted content. Mistral, OpenAI, Anthropic, Microsoft, and Google have signed or signalled their willingness to sign the code. Meta has stated that it will not.

  • The UK’s Online Safety Act has come into force, requiring online platforms to check that all UK-based users are at least eighteen years old before granting access to “harmful” content. The Electronic Frontier Foundation has extensively criticized the law for eroding privacy, chilling speech, and undermining the safety of children.

  • France is urging action at the EU level; President Emmanuel Macron said he will ban social media for youth under 15 years old if progress is not made. Alongside Greece and Spain, France is spearheading efforts to get the EU to significantly limit the amount of time minors can spend online. France also intends to implement age verification on sites selling knives, similar to the measures it currently applies to pornographic sites.

  • Denmark is considering a proposed change to its copyright law that would ensure that everyone has a right to their own body, facial features, and voice. In what would be a first of its kind in Europe, the new law, once approved, would give people in Denmark the right to demand that online platforms remove content that includes their likeness if it is shared without their consent. The law also covers realistic, digitally generated imitations of an artist’s performance without consent. (Exceptions for parodies and satire have been included.) Tech platforms would be subject to steep fines if they do not comply.
