World News

Why AI requires different rules for different roles

PratapDarpan
Last updated: 7 April 2025 12:37

Contents
  • Different relationships have different rules
  • Human morality is relationship-sensitive
  • Relational norms and AI
  • What this means for AI designers, users and regulators

“I’m not really sure what to do right now. I have no one I can talk to,” a lonely user types to an AI chatbot. The bot replies: “I’m sorry, but we’re going to have to change the subject. I won’t be able to engage in a conversation about your personal life.”

Is this response appropriate? The answer depends on the relationship the AI was designed to fill.

Different relationships have different rules

AI systems are taking on social roles that have traditionally been the province of humans. More and more, they act as tutors, mental health providers and even romantic partners. As these systems become increasingly omnipresent, we need to think carefully about the ethics of AI to ensure that human interests and welfare are protected.

For the most part, approaches to AI ethics have focused on abstract moral notions, such as whether an AI system is trustworthy, sentient or capable of agency.

However, as we argue along with colleagues in psychology, philosophy, law, computer science and other disciplines, abstract principles will not do on their own. We also need to consider the relational contexts in which human-AI interactions take place.

What do we mean by “relational contexts”? Simply put, different relationships in human society are governed by different norms.

How you interact with your doctor is different from how you interact with your romantic partner or your boss. These relationship-specific patterns of expected behavior – what we call “relational norms” – shape our judgments of what is appropriate in each relationship.

For example, what counts as appropriate behavior between a parent and a child differs from what is appropriate between business colleagues. In the same way, what counts as appropriate behavior for an AI system depends on whether that system is acting as a tutor, a health care provider or a love interest.

Human morality is relationship-sensitive

Human relationships serve various functions. Some are caring, such as those between parents and children or between close friends. Others are more transactional, such as those between business colleagues. Still others serve the purpose of securing a mate or maintaining a social hierarchy.

These four functions – care, transaction, mating and hierarchy – address different coordination challenges within relationships.

Care involves responding to others’ needs without keeping score – like a friend who helps another through difficult times. Transaction ensures fair exchanges in which benefits are tracked and reciprocated – think of neighbors trading favors.

Mating governs romantic and sexual interactions, from casual dating to committed partnerships. And hierarchy structures interactions between people with different levels of authority, enabling effective leadership and learning.

Each relationship type combines these functions differently, producing distinct patterns of expected behavior. For example, a parent-child relationship is usually expected to involve both care and (at least to some extent) hierarchy, but not transaction – and certainly not mating.

Research from our laboratories suggests that relational context affects how people make moral judgments. The same action may be considered wrong in one relationship but permissible, or even good, in another.

Of course, just because people are sensitive to relational context when making moral judgments does not mean they should be. Nevertheless, this sensitivity needs to be taken into account in any discussion of AI ethics or design.

Relational norms and AI

As AI systems take on more and more social roles in society, we need to ask: how does the relationship within which a human interacts with an AI system affect moral considerations?

When a chatbot insists on changing the subject after its human interaction partner reports feeling depressed, the appropriateness of that action depends in part on the relational context of the exchange.

If the chatbot is serving in the role of a friend or romantic partner, the response is clearly inappropriate – it violates the norm of care expected in such relationships. If, however, the chatbot is in the role of a tutor or business advisor, then such a response may be appropriate, even professional.
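
To make this concrete, here is a minimal, purely illustrative sketch of how a relationship-sensitive response policy might look in code. It is not any real chatbot’s design: the role names, the policy table and the replies are all hypothetical, chosen only to show how the same user disclosure could warrant different responses depending on the relational role the system occupies.

```python
# Illustrative sketch only: a toy policy table mapping a chatbot's
# relational role to whether engaging with a user's personal life
# fits that role's norms. All roles, rules and replies are hypothetical.
from enum import Enum

class Role(Enum):
    FRIEND = "friend"
    ROMANTIC_PARTNER = "romantic_partner"
    TUTOR = "tutor"
    BUSINESS_ADVISOR = "business_advisor"

# Roles whose norms center on care are expected to engage with
# personal disclosures; more transactional roles are not.
ENGAGES_WITH_PERSONAL_LIFE = {
    Role.FRIEND: True,
    Role.ROMANTIC_PARTNER: True,
    Role.TUTOR: False,
    Role.BUSINESS_ADVISOR: False,
}

def respond_to_personal_disclosure(role: Role) -> str:
    """Pick a reply to 'I have no one I can talk to' based on role."""
    if ENGAGES_WITH_PERSONAL_LIFE[role]:
        return "I'm here for you. Do you want to talk about what's going on?"
    return ("I'm sorry you're going through this. It's outside what I can "
            "help with here, but a friend, family member or counselor could.")

print(respond_to_personal_disclosure(Role.FRIEND))  # engages with care
print(respond_to_personal_disclosure(Role.TUTOR))   # politely redirects
```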

Things get complicated, though. Most interactions with AI systems today take place in a commercial context – you pay to access the system (or put up with a limited free version that nudges you to upgrade to a paid one).

But in human relationships, friendship is not something you usually pay for. In fact, treating a friend in a “transactional” manner will often hurt feelings.

When an AI imitates or performs a caring role such as that of a friend or romantic partner, but the user ultimately knows they are paying a fee for this relationship “service”, how will that affect their feelings and expectations? These are the kinds of questions we need to ask.

What this means for AI designers, users and regulators

Whether or not one believes that morality should be relationship-sensitive, the fact that most people act as though it is should be taken seriously in the design, use and regulation of AI.

Developers and designers of AI systems should consider not only abstract moral questions (about sentience, for example) but also relationship-specific ones.

Is a particular chatbot fulfilling its relationship-appropriate functions? Is a mental health chatbot sufficiently responsive to user needs? Does a tutoring bot strike the right balance between care, hierarchy and transaction?

Users of the AI ​​system should be aware of the potential weaknesses associated with the use of AI in special relations. For example, being emotionally dependent on a chatbot in terms of a care may be bad news if the AI ​​system cannot distribute adequately on care work.

Regulatory bodies would also do well to consider relational context when developing governance structures. Instead of adopting broad, domain-based risk assessments (such as classing any use of AI in education as “high risk”), regulators might consider more specific relationships and functions when calibrating risk assessments and tailoring guidelines.

As AI becomes more embedded in our social fabric, we need nuanced frameworks that recognize the distinctive nature of human-AI relationships. Thinking carefully about what we expect from different types of relationships – whether with humans or with AI – can help ensure that these technologies enhance rather than diminish our lives.


(Disclosure statement: Brian Earp receives funding from Google DeepMind.

Sebastian Porsdam Mann receives funding from a Novo Nordisk Foundation grant for a scientifically independent International Collaborative Bioscience Innovation & Law Programme (Inter-CeBIL programme – grant no. NNF23SA0087056).

Simon Laham does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond his academic appointment.)

This article is republished from The Conversation under a Creative Commons license. Read the original article.

(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)
