Germany and morality do not get along well. On the one hand, it is the home of moral theorists like Hegel, with his rich notion of Sittlichkeit, and Kant, with his categorical imperative.
On the other hand, the greatest moral crime the world has ever witnessed was committed in Germany, at Auschwitz, as documented in Claude Lanzmann's nine-hour masterpiece Shoah.
In contrast to France, Germany experienced only a philosophical revolution, never a political one. It remains deeply interested in philosophy, particularly moral philosophy. Naturally, moral considerations lie at the heart of practically all problems, not just those raised by modern AI and ChatGPT.
Germany also has a unique body that deals with ethical issues: the Ethics Council, or Ethikrat. The Ethics Council recently published its conclusions on artificial intelligence (AI).
The Ethics Council is a group of independent specialists that addresses issues relating to ethics, society, science, medicine, and the law. It evaluates the effects on people and society, promotes social discourse, formulates opinions, and makes recommendations to the Bundestag, the German parliament. It released its AI recommendations on March 20, 2023.
The Ethics Council's central tenet is that AI cannot take the place of people. The Ethics Council looks into how humans and AI interact in areas including administration, online platforms, medicine, and schools.
Intriguingly, the Ethics Council has shockingly little to say about AI-guided weaponry, in fact, nothing, despite Germany having started two World Wars, bombed Serbia twenty years ago, and recently delivered Leopard tanks to Ukraine.
The Council predicts that AI will eventually permeate virtually every aspect of human life, from employment to shopping, from crime to recruitment, and beyond. In its most recent 287-page report, Human & Machine, it argues that the deployment of artificial intelligence must enhance human development rather than stunt it. This serves as the guiding principle for its ethical assessment of human-AI interaction.
This necessarily incorporates issues of power and social fairness. The Council insists that AI applications cannot replace human intelligence and responsibility. Its conclusions are founded on ideas from philosophy and anthropology that are crucial to the interaction between humans and machines. The Council identified four factors pertaining to human-machine interaction:
🔹 Intelligence
🔹 Reasoning
🔹 Human Action
🔹 Responsibility
The Council does think, however, that AI will present both opportunities and hazards. AI has already demonstrated as much: in many instances it has obvious benefits, enhancing the potential for human authorship. Yet there is also the possibility of a deterioration in human development.
On the downside, the use of digital technology might lead to dependency and even pressure to conform to AI. Worse, AI may foreclose concepts previously developed and accepted by humans. One of the key moral issues the Council considers in its assessment is:
"Whether and how the transfer of activities previously carried out by people to technology systems influences the possibilities of other people, especially those who have been impacted by decisions made by AI."
Therefore, the approach of AI to humans needs to be clear and guided by two questions: For whom does an AI application present opportunities and risks? And will AI lead to an increase or decrease in human authorship? This also implies that all facets of social justice and power are relevant to the Ethics Council.
The 26 members of the Council also discussed whether using AI will increase or decrease human authorship, and what the requirements are for taking responsibility. The Council asserts that the use of artificial intelligence in medicine is unquestionably possible, particularly for diagnosis and treatment recommendations.
The Ethics Council, however, also urges adherence to the strictest due-diligence requirements and compliance with the highest standards of data protection and privacy. It demands that flaws in the implementation of AI algorithms be identified as early as possible. At the same time, AI-supported results would need to pass a plausibility test.
Additionally, the Ethics Council contends that certain AI systems would need to be deemed ethically appropriate before they could be used in the medical field.
It cautions that patient safety may be imperiled if an AI system completely replaces the medical practitioner, and it warns strongly against allowing AI technology too much influence in, say, the medical field. The use of AI should not lead to a further devaluation of medical care or to cutbacks in medical staff.
The council is nevertheless open to using AI-based software in schools, for example to assess students' learning progress, spot the common errors they make, and identify their strengths and weaknesses. Learning content could then be adapted to each learner's profile using AI software.
Additionally, teachers' subjective perceptions might conceivably be replaced by data-based, validated results that better serve a learner's specific requirements. The council remains worried, though, about how meaningful the data collection is. Data might be abused to stigmatize and screen specific children.
The council is also skeptical that AI can measure students sufficiently, precisely, and reliably, since it may introduce systematic distortions. Moreover, digitization is not a goal in and of itself. Techno-solutionism, a worldview that is exclusively technological, should therefore not drive AI in schools.
Instead, the fundamental principles of education should be the driving forces behind AI. These include, incidentally, the development of a personality, or what philosophers call personhood, and what the German philosopher Adorno calls Mündigkeit: self-reflective and critical maturity.
Therefore, if AI systems are to be deployed, they must be incorporated into teacher preparation programs. The council is unambiguously in support of regulating online platforms, also referred to as (anti)social media, both inside and outside of schools.
The council is vehemently in favor of strict regulation of AI operators, i.e. corporations, in light of the increasing transition in public communication to online platforms. It also raises the potential danger that AI poses to free speech and plurality of thought.
It also warns that algorithms' selective presentation of data, in accordance with users' personal preferences and platform operators' (read: corporations') economic (read: profit) interests, encourages the dissemination of false information, hate speech, and insults against individuals. AI will almost certainly play a part in the formation of filter bubbles and echo chambers.
As a result, there is a chance that choices the council deems "relevant" could be made with little to no information, on top of premeditated manipulation, disinformation, and deception.
In other words, AI has the potential to limit our freedom to find high-quality information, which unseen algorithms are already degrading. At the same time, and as a consequence, AI can easily lead to what the council terms the "brutalization" of online political discourse. The council further asserts that the following three laws, already in effect:
🔹 Germany’s State Media Treaty
🔹 Germany’s famous Network Enforcement Act (NetzDG)
🔹 EU’s Digital Services Act
do not regulate online platforms strictly enough. Platforms would therefore have to offer material without individualized customization. Perhaps even more crucially, online platforms would need to present "opposing positions" that run counter to users' own inclinations.
Any type of discrimination should be avoided when using AI, and people's right to object should be safeguarded. The council additionally mandates that those who deploy AI maintain the highest level of transparency, hire only qualified staff, and educate the general public about potential risks.
According to the council, the opportunities and hazards of AI must be properly studied and weighed against each other when AI-supported systems are used in law enforcement and policing. "Social negotiations" regarding the compatibility of AI with human freedom and security are therefore required.
Overall, Germany's Ethics Council does not reject technological advancement as such; it rejects advancement that fails to uphold its three ethical tenets: (1) AI must advance human development; (2) AI must not impede human progress; and (3) AI should not replace people.