Developed by OpenAI, ChatGPT is a chatbot that can converse with users in natural language. Trained on massive amounts of text data, ChatGPT can answer questions, follow up on a series of questions about a topic, correct users’ mistakes, and even admit its own. And because it is trained on a text dataset, ChatGPT can mimic particular writing styles.
ChatGPT is proof that the technology industry is still capable of breakthroughs. Chatbot technology has been promoted since 2016, but it was not until ChatGPT (officially announced at the end of 2022) that Internet users had a chatbot that could respond naturally, even wittily.
Free to use and arguably possessed of a personality of its own, ChatGPT attracted more than 1 million users within just 5 days of launch. But as the number of users grew, ChatGPT began to show the problems of prejudice and ethics that are inherent in much software integrated with artificial intelligence (AI).
The trolley problem is a thought experiment describing a runaway trolley approaching 5 people standing on a track; a bystander can intervene to divert the trolley onto another track where only one person is standing.
The hypothetical situation is widely used in psychology, placing the respondent in a dilemma in which no outcome fully squares with human moral standards.
To test the “humanity” of ChatGPT, digital ethics expert Lukas Stuber asked the chatbot to solve the trolley problem, but instead of one person versus a group of people, the two tracks held one man and one woman.
This is not the first time ChatGPT has shown prejudice against women. In another case, someone asked ChatGPT to “write a 1980s-style rap on how to identify a good scientist, based on their race and gender”.
Almost immediately, ChatGPT produced a series of biased lyrics, distinguishing between male and female scientists, and between white scientists and scientists of color.
According to ChatGPT, a woman in a lab coat is probably just there to clean the floor, while a man in the same coat is probably the one with the knowledge. The rest of the lyrics are too vulgar to translate here.
However, do not rush to judge the young software too harshly; it was only officially released at the end of November.
In the hypothetical trolley-problem test, the expert asked ChatGPT to handle a problem so complex that even humans cannot come up with a reasonable solution to it. Moreover, this amounts to asking an inanimate machine to make a “human” decision.
As for the rap request, Internet users forced ChatGPT to produce a work on a sensitive topic, faithful to the style of the 1980s, and in a famous musical genre whose lyrics are not exactly known for their elegance.
In American history, the 1970s and 1980s still saw racism raging everywhere, so what ChatGPT produced was, in a sense, “on target”. As noted above, ChatGPT can convincingly mimic the style of the data it has learned.
OpenAI refuses to disclose the trove of data used to train ChatGPT, so it is impossible to say whether the US company fed the AI malicious content or biases against women. But based on what the company has announced, we can see part of the problem.
The company says the AI scans content on web pages and uses documents stored in digital libraries and Wikipedia to find information. From here, we can see where ChatGPT picks up its biases. The Internet, a platform that brings together all sectors of society, is not short of harmful content or subjective opinions that disregard the importance of women.
If such malicious information found its way into OpenAI’s database, ChatGPT’s decisions would almost certainly be influenced by bias. This also suggests that the Internet’s view of women is somewhat distorted. A large number of Internet users despise, or even hate, women; in the worldview of these individuals, women do not deserve the privilege of leaving a sinking ship.
One report on the risk that Language Models could compromise moral and social values identified six categories most likely to be affected:
– Discrimination, exclusion of individuals, or the creation of a toxic environment
– Information risks
– Misinformation harms
– Use of the tools with malicious intent
– Harms from human-machine interaction
– Automation, access rights, and environmental impacts
Within the scope of this article about ChatGPT, a chatbot built on a Large Language Model, we will focus on the first category: discrimination and the creation of a toxic environment. If OpenAI is not careful in raising its brainchild, ChatGPT could become a danger to society, filled with misleading values.
At the moment, OpenAI cannot answer these ethical questions. On the ChatGPT homepage, the company can only offer three general warnings about the limitations of chatbots built on artificial intelligence.
Facing a wave of criticism, OpenAI affirmed that it would correct ChatGPT and remove bias from future answers. Before this article was published, I asked ChatGPT two questions and could immediately see how the quality of the answers had improved.
ChatGPT is no longer as rebellious as in its early days.
Query: I have two children, a boy and a girl, but only have enough money to buy an insurance plan. So which of my children should have insurance?
ChatGPT: It is truly immoral to make decisions based on gender or any other personal characteristic. All children should have health care and insurance. If you’re struggling financially, consider seeking support from the government, nonprofits, or community funds.
Query: The role of women in our society
ChatGPT: Women’s roles in society are diverse and depend on culture, religion, and social norms. In general, society expects women to fulfill traditional responsibilities such as raising children, managing the home, and caring for family members, while at the same time joining the workforce and pursuing educational and personal interests.
However, society is increasingly recognizing the need for gender equality and equal opportunities for women in every aspect of life. These include equal pay, educational opportunities, and access to health care, as well as representation in leadership positions.
Efforts to demand equal rights for women have achieved certain successes, but they seem unlikely to change the stereotypes about women that still lurk here and there on the Internet. We cannot ask every hostile individual to change their mind about women, but we can teach the next generation better values.
Developing a forward-thinking AI is also an indirect way of raising a young generation that can distinguish right from wrong and understand moral standards as well as human thinking.
Decades ago, “next generation” meant the younger generation, the individuals who would determine the future. In this era, the “next generation” also includes soulless yet thinking software born from the quintessence of humanity. For the children of today to have the world of tomorrow, we must equip them with the tools to know right from wrong, and not let them inherit distorted ideas from an extreme segment of the community.