If ChatGPT is just a tool

When we talk about the risks of AI, it seems impossible to avoid slipping into talk of a competition between artificial and human intelligence, and of the risk that the former will overtake the latter. But the risks posed by the advent of AI are not these, and it is more necessary than ever to avoid this misunderstanding, so as not to underestimate the risks that really exist.

On this subject, below is an extract from the article “ChatGPT? Just a tool.” by Maurizio Ferraris, published in Corriere della Sera on December 7, 2023, hoping not to misrepresent the author and in any case inviting readers to read it in its entirety.



AI meets the regulatory challenge

The power of this tool opens up new, unregulated market spaces whose existence end users often do not even suspect, frequently unaware that they are using AI-based services. These market spaces have a multiplier effect on those already established and still lacking adequate regulation, such as social media and the vast market for data and digital services.

The resulting potential is certainly worrying, because it is capable of affecting not only individual rights such as privacy, but also collective achievements such as the process of forming opinions, shared culture, knowledge of facts, and democracy itself with its electoral processes.

Companies that use ChatGPT and its siblings to offer new services and products, or to explore new marketing methods, sit between the owners of the technology and the end users of their services and products, and play a crucial role in the establishment of this technology.

They will progressively face demanding compliance obligations, given that the EU – which fortunately exists, since none of its 27 member states would individually have the strength to impose credible constraints on the markets mentioned above – has launched, or is about to launch, various measures aimed at regulating different aspects of the relevant market (Digital Services Act, AI Act, Data Governance Act).

The question is whether simply satisfying a compliance obligation, certainly necessary given the sanctions potentially at stake, is also sufficient to address the additional risks posed by the combination of AI, big data, social media and e-commerce.

Litigation and reputational risks, of course, but also the risk of the hasty and unjustified destruction of valuable assets, built up over time, which may prove impossible to rebuild.

Risk, the guiding criterion of the AI Act

Among all companies, a crucial role in this process of adopting and exploiting new technologies is played by large private and public organizations, which have the means, the organization and the contractual leverage to introduce adequate safeguards upstream and downstream, and to demand their introduction: from technology suppliers on one side, and from distributors and consumers on the other. Safeguards which, by countering risks, bring out the real costs of using the tool(s) properly and allow the different options to be compared fairly from every point of view (including, for example, compliance and taxation).

To do this, the internal processes that govern the introduction of new technologies must be reviewed and enriched, so as to properly account for the richness and complexity of these technologies while adequately mitigating their risks.

In this regard, the AI Act, by adopting a risk-based approach, constitutes not only a compliance obligation but also a cultural model that can and should be applied even in areas beyond those covered by the regulation.

These are not generic indications but specific prescriptions: “(…) 1. A risk management system shall be established, implemented, documented and maintained in relation to high-risk AI systems, throughout the entire lifecycle of the AI system. (…)” And further: “2. The risk management system shall consist of a continuous iterative process run throughout the entire lifecycle of a high-risk AI system, requiring regular review and updating of the risk management process to ensure its continuing effectiveness, and the documentation of any significant decisions and measures taken under this article.”
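To make the prescription concrete, here is a minimal Python sketch of what such a lifecycle-long, documented and iteratively reviewed risk register might look like in practice. All class names, fields and the severity-times-likelihood scoring are illustrative assumptions, not terminology taken from the AI Act or from any specific tool.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Risk:
    description: str   # what could go wrong
    severity: int      # 1 (minor) .. 5 (critical) -- assumed scale
    likelihood: int    # 1 (rare) .. 5 (frequent) -- assumed scale
    mitigation: str    # measure adopted to reduce the risk

    def score(self) -> int:
        # Simple severity x likelihood scoring, a common convention.
        return self.severity * self.likelihood

@dataclass
class RiskManagementSystem:
    """A register that follows the AI system through its whole lifecycle:
    established, implemented, documented and maintained."""
    system_name: str
    risks: list[Risk] = field(default_factory=list)
    decision_log: list[str] = field(default_factory=list)  # documented decisions

    def register(self, risk: Risk) -> None:
        self.risks.append(risk)
        self.decision_log.append(
            f"{date.today()}: registered risk '{risk.description}'"
        )

    def review(self, threshold: int = 12) -> list[Risk]:
        """One iteration of the continuous review process: flag risks whose
        score exceeds the accepted threshold and log that the review ran."""
        flagged = [r for r in self.risks if r.score() > threshold]
        self.decision_log.append(
            f"{date.today()}: review run, {len(flagged)} risk(s) above {threshold}"
        )
        return flagged

# Usage: the same register is reviewed repeatedly over the system's life.
rms = RiskManagementSystem("triage-classifier")
rms.register(Risk("biased training data", severity=4, likelihood=4,
                  mitigation="dataset audit before each retraining"))
for risk in rms.review():
    print(risk.description, "-> score", risk.score())
```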

On the other hand, an uncritical adoption that does not carefully consider the risks of the AI tool can lead to cultural flattening with devastating consequences, and to the unwitting assumption of risks that threaten the sustainability of the company's own activity.

The risks of generative AI

The infatuation arises because generative AI seems set to make useless – or less useful – intellectual professions that play a fundamental role today: journalists, teachers, screenwriters, analysts… It will lead to replacing people who, to quote Ferraris, have a soul and therefore intentions, directions, fears, expectations, will and feelings, with a tool capable, at most, of imitating them on the basis of statistically plausible but substantially arbitrary (and sometimes interesting) correlations.

The result risks being a systemic banality of diminishing value, repeated obsessively, with significant repercussions on the usefulness of the product itself, be it an article, a television series or an online course, and of all competing products based on similar approaches.

It will be said that human oversight will always be maintained. But within what framework?

The limits of human oversight of AI

Do we need some research to produce a document? We turn to ChatGPT or one of its siblings. It is cheap, it is fast. Of course, we are scrupulous professionals, so we do not blindly trust the result we receive: we rework it, supplement it, correct it. Still, we start from that result, and a first conditioning is already there. We do not know precisely what information base the algorithm started from, nor what criteria it applied or how it applied them. The result will generally be acceptable, probably better than what we could have produced ourselves in the little time available, and that, in the end, will be statistically sufficient.

We will gradually learn to rely on these tools and let our guard down on checks. In the end, all the material produced by this research, and by similar initiatives from competitors, will taste the same, like food made from stock cubes.

Which doctor will take the responsibility (with all the foreseeable legal consequences in the event of error or unforeseen fatality) of contradicting the analysis of a scan examined by a tool that has compared it against billions of similar reports and boasts an error rate of 0.1%, relying simply on knowledge of the body and clinical history of that specific patient? Who will pay for the time needed to nurture and bring out a screenwriter's creativity when the alternative, though a little repetitive, is already available at almost zero cost?

What will be the value of a newspaper that publishes articles each reader could generate for themselves, devoid of truly innovative thought or stimulus?

The risk for companies is therefore that of destroying assets, dispersing skills and abandoning competencies that were hard to build, in pursuit of an easy profit that will eventually evaporate. For end users of products and services, the risk is an undifferentiated uniformity of the market offering, combined with the cultural flattening that can result from it.

How to mitigate the risks of generative AI

To mitigate these risks, occasional or superficial intervention is not enough. What is needed is a structured, specific and targeted approach that begins at the design phase of the service or product and continues throughout the development cycle: one that governs the process of choosing the AI engine to use; verifies the datasets on which it was trained against the purpose of the product or service; dictates the contractual clauses necessary to transfer to the supplier, to the extent of its competence, the relevant risks; guides the composition of the project team and provides for verification checkpoints during the development and testing cycle; and, finally, establishes guardrails throughout the life cycle of the service or product to intercept the possible emergence of systematic anomalies with respect to expectations, as in the sketch below.
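As an equally hedged illustration of that final point, the lifecycle guardrails, the sketch below monitors a per-output quality score and flags a systematic anomaly when its moving average drifts below an agreed threshold. The class, the simulated score stream and the 0.8 threshold are assumptions made for the example, not prescriptions from the text.

```python
import random
from collections import deque

class QualityGuardrail:
    """Flags systematic drift in output quality during a product's life cycle.

    A single bad output is tolerated; a sustained drop of the moving
    average below the agreed threshold is treated as a systematic anomaly."""

    def __init__(self, threshold: float = 0.8, window: int = 50):
        self.threshold = threshold          # agreed minimum average quality
        self.scores = deque(maxlen=window)  # most recent quality scores

    def observe(self, score: float) -> bool:
        """Record one output's quality score (0..1); return True once the
        window is full and its moving average sits below the threshold."""
        self.scores.append(score)
        average = sum(self.scores) / len(self.scores)
        return len(self.scores) == self.scores.maxlen and average < self.threshold

# Usage: score each generated output (e.g. via sampled human review or an
# automatic evaluator) and escalate when the guardrail trips. Here the
# stream is simulated, with quality slowly degrading over time.
guardrail = QualityGuardrail(threshold=0.8, window=50)
for step in range(500):
    score = max(0.0, 1.0 - step * 0.002 + random.uniform(-0.05, 0.05))
    if guardrail.observe(score):
        print(f"systematic quality drift detected at step {step}")
        break
```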

Cutlery and screwdrivers are tools accessible to everyone, which we all learn to use from childhood; the scalpel, by contrast, demands years of education, training and practice from surgeons. AI is a tool more complex than the scalpel, and it will demand of society, as a whole and in its various articulations, years of training and practice, with the risk, in the meantime, of cutting ourselves and harming ourselves or others.

Conclusions

The education, training and practice of surgeons enhanced the value of the scalpel as a tool; they did not negate its usefulness. The highway code, road signs and driving schools did not hinder the establishment of the car as a means of transport; if anything, they facilitated it. Addressing these aspects will not endanger the use of new technologies; at most it will curb the lawless Wild West (and the extraordinary profits that come with it) which inevitably accompanies the emergence of innovative technologies, to the benefit of values that we, whether as citizens or as businesses, cannot risk compromising. Ultimately, what is at stake is the sustainability of digital innovation.

