Global Principles on Artificial Intelligence to Secure a Sustainable Future for Publishing and Journalism

Twenty-six organisations representing thousands of professionals and organisations around the world, including publishers of news, entertainment, magazines, and books, as well as the academic publishing sector, have published the Global Principles on Artificial Intelligence (AI). The first of their kind, these pioneering global principles provide guidance for the development, deployment, and regulation of AI systems and applications, so that business opportunities and innovation can thrive within an ethical and accountable framework.

The Global Principles on AI aim to ensure publishers’ continued ability to create and distribute quality content, while facilitating innovation and the responsible development of trustworthy AI systems.

The Principles address the fundamental issues raised by the use of AI, covering intellectual property, transparency, accountability, quality and integrity, fairness, safety, by-design requirements, and sustainable development. They mark an unprecedented collaboration to safeguard the interests of content creators, publishers, and consumers.

In the Principles, the organisations call for the responsible development and deployment of AI systems and applications, stating that these new tools must be developed only in accordance with established principles and laws that protect publishers’ intellectual property, brands, consumer relationships, and investments.

The Principles state explicitly that “the indiscriminate misappropriation of our intellectual property by AI systems is unethical, harmful, and a violation of our protected rights”.

Among other things, the Global AI Principles establish that developers, operators, and deployers of AI systems should:

  • Respect intellectual property rights, protecting organisations’ investments in original content.
  • Leverage efficient licensing models that can facilitate innovation through the training of trustworthy and high-quality AI systems.
  • Provide granular transparency to allow publishers to enforce their rights where their content is included in training datasets.
  • Clearly attribute content to its original publishers.
  • Recognise publishers’ invaluable role in generating high-quality content for training, as well as for surfacing and synthesising.
  • Abide by competition laws and principles and ensure that AI models are not used for anti-competitive purposes.
  • Promote trusted and reliable sources of information and ensure that AI-generated content is accurate, correct, and complete.
  • Not misrepresent original works.
  • Respect the privacy of users who interact with them and fully disclose the use of their personal data in the design, training, and use of AI systems.
  • Align with human values and operate in accordance with global laws.

The full version of the Global Principles on AI, available online, explores each of the points above in greater detail.

The organisations endorsing the Global AI Principles include:

  • AMI – Colombian News Media Association
  • Asociación de Entidades Periodísticas Argentinas (Adepa)
  • Association of Learned & Professional Society Publishers
  • Associação Nacional de Jornais (Brazilian Newspaper Association) (ANJ)
  • Czech Publishers’ Association
  • Danish Media Association
  • Digital Content Next
  • European Magazine Media Association
  • European Newspaper Publishers’ Association
  • European Publishers Council
  • FIPP
  • Grupo de Diarios América
  • Inter American Press Association
  • Korean Association of Newspapers
  • Magyar Lapkiadók Egyesülete (Hungarian Publishers’ Association)
  • NDP Nieuwsmedia
  • News/Media Alliance
  • News Media Association
  • News Media Canada
  • News Media Europe
  • News Media Finland
  • News Publishers’ Association
  • Nihon Shinbun Kyokai (The Japan Newspaper Publishers & Editors Association)
  • Professional Publishers Association
  • STM
  • World Association of News Publishers (WAN-IFRA)

Global Principles on Artificial Intelligence (AI)

Intellectual Property

1) Developers, operators, and deployers of AI systems must respect intellectual property rights, which protect the rights holders’ investments in original content. These rights include all applicable copyright, ancillary rights, and other legal protections, as well as contractual restrictions or limitations imposed by rightsholders on the access to and use of their content. Therefore, developers, operators, and deployers of AI systems—as well as legislators, regulators, and other parties involved in drafting laws and policies regulating AI—must respect the value of creators’ and owners’ proprietary content in order to protect the livelihoods of creators and rightsholders.

2) Publishers are entitled to negotiate for and receive adequate remuneration for use of their IP. AI system developers, operators, and deployers should not be crawling, ingesting, or using our proprietary creative content without express authorisation. Use of intellectual property by AI systems for training, surfacing, or synthesising is usually expressly prohibited in online terms and conditions of the rightsholders, and not covered by pre-existing licensing agreements. Where developers have been permitted to crawl content for one purpose (for example, indexing for search), they must seek express authorisation for use of the IP for other purposes, such as inclusion within LLMs. These agreements should also account for harms that AI systems may cause, or have already caused, to creators, owners, and the public.

3) Copyright and ancillary rights protect content creators and owners from the unlicensed use of their content. Like all other uses of protected works, use of protected works in AI systems is subject to compliance with the relevant laws concerning copyrights, ancillary rights, and permissions within protocols. To ensure that access to content for use in AI systems is lawful, including through appropriate licenses and permissions obtained from relevant rightsholders, it is essential that rightsholders are able effectively to enforce their rights, and where applicable, require attribution and remuneration.

4) Existing markets for licensing creators’ and rightsholders’ content should be recognised. Valuing publishers’ legitimate IP interests need not impede AI innovation because frameworks already exist to permit use in return for payment, including through licensing. We encourage efficient licensing models that can facilitate training of trustworthy and high-quality AI systems.

Transparency

5) AI systems should provide granular transparency to creators, rightsholders, and users. It is essential that strong regulations are put in place to require developers of AI systems to keep detailed records of publisher works and associated metadata, alongside the legal basis on which they were accessed, and to make this information available to the extent necessary for publishers to enforce their rights where their content is included in training datasets. The obligation to keep accurate records should go back to the start of AI development to provide a full chain of use regardless of the jurisdiction in which the training or testing may have taken place. Failure to keep detailed records should give rise to a presumption of use of the data in question. When datasets or applications developed by non-profit, research, or educational third parties are used to power commercial AI systems, this must be clearly disclosed so that publishers can enforce their rights. Where developers use AI tools as a component in the process of generating knowledge from knowledge, there should be transparency on the application of these tools, including appropriate and clear accountability and provenance mechanisms, as well as clear attribution where appropriate in accordance with the terms and conditions of the publishers of the original content. Without limiting and subject to paragraphs 6 and 9, AI developers should work with publishers to develop mutually acceptable attribution and navigation standards and formats. Users should also be provided with comprehensible information about how such systems operate to make judgments about system and output quality and trustworthiness.

Accountability

6) Providers and deployers of AI systems should cooperate to ensure accountability for system outputs. AI systems pose risks for competition and public trust in the quality and accuracy of informational and scientific content. This can be compounded by AI systems generating content that improperly attributes false information to publishers. Deployers of AI systems providing informational or scientific content should provide all essential and relevant information to ensure accountability and should not be shielded from liability for their outputs, including through limited liability regimes and safe harbours.

Quality and Integrity

7) Ensuring quality and integrity is fundamental to establishing trust in the application of AI tools and services. These values should be at the heart of the AI lifecycle, from the design and building of algorithms, to inputs used to train AI tools and services, to those used in the practical application of AI. A fundamental principle of computing is that a process can only be as good or unbiased as the input used to teach the system (rubbish in, rubbish out). AI developers and deployers should recognise that publishers are an invaluable part of their supply chain, generating high-quality content for training, and also for surfacing and synthesising. Use of high-quality content upstream will contribute to high-quality outputs for downstream users.

Fairness

8) AI systems should not create, or risk creating, unfair market or competition outcomes. AI systems should be designed, trained, deployed, and used in a way that is compliant with the law, including competition laws and principles. Developers and deployers should also be required to ensure that AI models are not used for anti-competitive purposes. The deployment of AI systems by very large online platforms must not be used to entrench their market power, facilitate abuses of dominance, or exclude rivals from the marketplace. Platforms must adhere to the concept of non-discrimination when it comes to publishers exercising their right to choose how their content is used.

Safety

9) AI systems should be trustworthy. AI systems and models should be designed to promote trusted and reliable sources of information produced according to the same professional standards that apply to publishers and media companies. AI developers and deployers must use best efforts to ensure that AI generated content is accurate, correct and complete. Importantly, AI systems must ensure that original works are not misrepresented. This is necessary to preserve the value and integrity of original works, and to maintain public trust.

10) AI systems should be safe and address privacy risks. AI systems and models in particular should be designed to respect the privacy of users who interact with them. Collection and use of personal data in AI system design, training, and use should be lawful with full disclosure to users in an easily understandable manner. Systems should not reinforce biases or facilitate discrimination.

By Design

11) These principles should be incorporated by design into all AI systems, including general purpose AI systems, foundation models, and GAI systems. They should be significant elements of the design, and not considered as an afterthought or a minor concern to be addressed when convenient or when a third party brings a claim.

Sustainable Development

12) The multi-disciplinary nature of AI systems ideally positions them to address areas of global concern. AI systems bear the promise to benefit all humans, including future generations, but only to the extent they are aligned to human values and operate in accordance with global laws. Long-term funding and other incentives for suppliers of high-quality input data can help to align systems with societal aims and extract the most important, up-to-date, and actionable knowledge.