Risks

AI exposes its users to a range of risks, which call for a corresponding set of precautions and recommendations. These can be broadly mapped to the different stages of using generative AI.

General

Responsibility
Blindly trusting generative AI without questioning its outputs can lead to errors. Individuals remain accountable for the professional or moral responsibility associated with their tasks.

Prompt

Data Protection / Confidentiality
Any content submitted to a generative AI tool may be accessed by the personnel of the company that owns it, and by far more people in the event of a data breach. It is therefore crucial to understand what can and cannot be shared with AI.

Generated Content

Transparency
Failing to disclose the use of generative AI in creating content or solutions can cause confusion or misunderstandings regarding the origin and nature of the results. Transparency regarding creation processes is essential.

Reliability
Content generated by generative AI should be regarded as fallible, as it may be biased or even incorrect. A critical stance toward AI-generated content is always necessary.

Intellectual Property
AI raises two types of challenges regarding intellectual property: the rights associated with the data used to train it, and the ownership of what it produces. It is important to clarify these aspects before use.

Usage

Sustainability
Generative AI consumes significant amounts of electricity, both for development and daily use. Its use should therefore be measured and responsible.

The section below provides a deeper examination of the notion of risks, to better understand the associated challenges and the recommendations that follow from them.

I am working on my department’s budget. To save time, I copy the institution’s financial data and employee data into a query, removing any reference to the University. Later, the AI tool suffers a cyberattack, and all the data are published online. Because I used a UNIGE email address to open my AI account and submit queries, anyone could deduce that the financial data belong to UNIGE, thereby exposing the institution.

From a technical perspective, any content submitted to a generative AI tool is accessible to, and potentially viewable by, the company that owns the tool. While such companies are unlikely to take an interest in the content of individual queries, and most commit not to examine them, users should assume that all their data and queries could become public. On one hand, there is a risk of data leaks in the event of a security breach. On the other, there is no verifiable assurance regarding how these companies govern data, for instance in terms of storage or the use of queries to train their models.

It is therefore important to understand that not all data have the same level of sensitivity: the appropriate level of caution depends on their nature.

Data can be classified into four main categories:

  • Anonymous: These data pose no issues because they cannot be linked to anyone: they are either intrinsically dissociated from any individual or anonymized so that the link has been broken.

  • Ordinary: These are standard personal data, often requested when filling out a form or processing an employee’s or student’s file (e.g., age, gender, name). They make it possible to identify a person but do not, by themselves, expose that person to discrimination.

  • Sensitive: These are personal data that, if disclosed, could be used to the individual’s detriment (e.g., religion, sick leave, insurance).

  • Secret or confidential: These data have strategic importance. Less often linked to individuals, they generally concern political, military, industrial, or financial information.

Each of these data categories requires an appropriate level of protection to mitigate potential risks.
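
To make this classification concrete, here is a minimal sketch, in Python, of how a pre-submission check might flag a query before it is sent to an AI tool. The four categories mirror the list above; the keyword markers and the classify helper are hypothetical illustrations, not an actual UNIGE tool or policy.

    from enum import Enum

    class Sensitivity(Enum):
        """The four data categories described above, ordered by risk."""
        ANONYMOUS = 0      # dissociated from any individual
        ORDINARY = 1       # standard personal data (age, name, ...)
        SENSITIVE = 2      # usable to the individual's detriment
        CONFIDENTIAL = 3   # strategic: political, industrial, financial

    # Hypothetical keyword markers; a real check would be far more thorough.
    MARKERS = {
        Sensitivity.CONFIDENTIAL: ["budget", "salary grid", "contract"],
        Sensitivity.SENSITIVE: ["religion", "sick leave", "insurance"],
        Sensitivity.ORDINARY: ["name", "age", "gender", "email"],
    }

    def classify(text: str) -> Sensitivity:
        """Return the highest-risk category whose markers appear in the text."""
        lowered = text.lower()
        for level in (Sensitivity.CONFIDENTIAL, Sensitivity.SENSITIVE,
                      Sensitivity.ORDINARY):
            if any(marker in lowered for marker in MARKERS[level]):
                return level
        return Sensitivity.ANONYMOUS

    query = "Summarize the sick leave taken by employee Jane Doe."
    if classify(query) in (Sensitivity.SENSITIVE, Sensitivity.CONFIDENTIAL):
        print("Do not submit: this query contains protected data.")

A keyword filter of this kind can only catch the most obvious markers; the judgment about what may or may not be shared remains with the user, as the scenario above illustrates.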

I am responsible for recruitment and wish to draft personalized rejection letters to humanize the process, providing more detailed and specific feedback. Without disclosing it, I use an AI tool. A candidate recognizes the typical structure and style of generative AI and raises questions on a public forum. My initial intention is then overshadowed by a controversy regarding the University’s transparency in its communications.

The duty of transparency when using generative artificial intelligence is an essential responsibility in professional practices. This transparency requires clear communication about how these technologies operate, their capabilities, and their limitations. It also involves disclosing the use of generative AI in creating content or solutions, in order to avoid any confusion regarding the origin and nature of the results obtained.

I use ChatGPT to respond to frequently asked student questions, manage emails, and even draft administrative documents. After a few weeks, it becomes apparent that ChatGPT sometimes produces inaccurate or inappropriate responses. It may provide incorrect information about academic submission deadlines or generate confusing answers that result in numerous follow-ups.

Content generated by generative AI should be regarded as fallible (biased or incorrect) due to several factors:

  1. Temporal Limitations
    Most generative AI models are trained on datasets covering a period up to a fixed cutoff date and have no knowledge of events occurring after that date.

  2. Hallucination
    AI models detect patterns, combine elements, and produce content that appears to match the query; some of these patterns are imperceptible or meaningless to humans. This enables original and creative content, but because AI lacks critical judgment, it can combine individually correct pieces of information into a result that is ultimately false. Updates reduce the frequency of such errors, but the risk remains.

  3. Algorithmic Bias
    The quality of generated content depends on the quality of the data on which the model was trained. If the data are biased, the outputs will reflect these biases. Biases may also emerge depending on who is posing the question. Careless use can inadvertently reinforce stereotypes or social discrimination.

  4. Mode Collapse
    Instead of generating diverse results, the model repeatedly produces a very limited set of similar outputs. This may occur because it failed to capture the richness of the training data or reflects a lack of diversity within the data itself. This creates a blind spot in interpreting results: it is often harder to notice missing information than incorrect information.
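
As a rough illustration of the mode-collapse point in item 4 above, the following Python sketch samples a generator repeatedly and measures how many distinct outputs it returns. Here, generate is a hypothetical stand-in for any text-generation call, not a specific API.

    import random

    def distinct_ratio(generate, prompt: str, n: int = 20) -> float:
        """Sample n completions and return the share that are unique.

        A ratio near 1.0 suggests diverse outputs; a ratio near 1/n
        suggests the model keeps returning the same few answers,
        one symptom of mode collapse.
        """
        outputs = [generate(prompt) for _ in range(n)]
        return len(set(outputs)) / n

    # Hypothetical generator that has "collapsed" onto two stock answers.
    def collapsed_generate(prompt: str) -> str:
        return random.choice(["Answer A", "Answer A", "Answer B"])

    print(distinct_ratio(collapsed_generate, "Suggest event ideas"))  # ~0.1

Such a check can reveal repetition, but not what is missing from the outputs, which is precisely why mode collapse creates a blind spot.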

 

I need to create illustrations for the promotion of an event and provide ideas and guidance to the AI that generates the images. I consider these creations the result of my own creative work and artistic direction. After the event, I discover that the same images have been reused in an advertising campaign. I contact the company that developed the AI, which, according to its terms of use, holds the intellectual property rights to the generated images.

AI relies on data, including works and texts that are not necessarily copyright-free, to generate new content. This applies both to the data submitted in a query and to the data used to train the underlying model. This ability to reproduce or transform copyrighted works without the explicit consent of the rights holders raises questions about potential copyright infringement. Furthermore, the use of AI raises the issue of ownership of the content it produces, which is generally secured when the tool is used under a paid license.

During team meetings, ideas and discussions are captured simply and effectively on a whiteboard. Drawn to AI, I replace this system and generate an illustrative image for each new idea. After several attempts, I obtain a complex image meant to represent the idea, with the aim of making discussions more dynamic and visual. However, this method quickly proves more distracting than useful, and it has a significant environmental impact.

Generative AI is a major consumer of electricity, both during development and in daily use. Each ChatGPT query consumes roughly as much energy as a 5 W LED lamp running for one hour. Training GPT-3 generated over 500 tons of CO2, the equivalent of 123 gasoline-powered cars driven for a year, and consumed 1,287 MWh of electricity, the equivalent of a month’s consumption for 1,467 American households. GPT-4 is reported to have been developed with as many as 500 times more parameters than GPT-3. The development and widespread use of AI therefore have notable sustainability consequences. It is essential to consider the energy consumption of generative AI and to reserve its use for specific, high-value applications.
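
The orders of magnitude quoted above can be cross-checked with simple arithmetic. The sketch below merely recomputes the equivalences from the figures given in the paragraph; the inputs are the reported values, not independent measurements.

    # Figures as quoted in the paragraph above (reported, not measured here).
    QUERY_POWER_W = 5          # LED-lamp equivalent per query
    QUERY_HOURS = 1
    TRAINING_MWH = 1_287       # electricity reported for training GPT-3
    HOUSEHOLDS = 1_467         # monthly US-household equivalent
    TRAINING_CO2_T = 500       # tons of CO2 reported for GPT-3
    CARS = 123                 # gasoline cars driven for one year

    wh_per_query = QUERY_POWER_W * QUERY_HOURS               # 5 Wh per query
    queries_per_training = TRAINING_MWH * 1_000_000 / wh_per_query
    kwh_per_household = TRAINING_MWH * 1_000 / HOUSEHOLDS    # per month
    co2_per_car_t = TRAINING_CO2_T / CARS                    # per year

    print(f"{wh_per_query} Wh per query")
    print(f"Training energy ~= {queries_per_training:,.0f} queries")
    print(f"~{kwh_per_household:,.0f} kWh per household per month")
    print(f"~{co2_per_car_t:.1f} t CO2 per car per year implied")

The implied figures (about 877 kWh per household per month and about 4 t of CO2 per car per year) are consistent with commonly cited averages, so the comparisons in the text hold together.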

 

Responsibility

All of the recommendations outlined above can be summarized in one principle: generative AI is merely a powerful computational tool designed to assist individuals in their tasks. Individuals remain fully responsible for the professional or moral obligations associated with their work; this responsibility cannot be delegated to generative AI, which should be regarded solely as a tool.