Description: The contents of the following article have been written solely on opinion basis & with the help of AI itself. I thank my co-author ChatGPT-3.5 whose contribution is of utmost value to me.

Humans sure weren’t kidding when they proposed the creation of a generative intelligence that just might supersede the creators themselves as it might have been achieved greater understanding, intellect & skill than humans themselves. AI is built on the idea of simulating human cognitive abilities using algorithms and data. It encompasses various subfields, such as machine learning, natural language processing, computer vision, robotics, and expert systems. Application of AI based tools in day-to-day activities of human life is illimitable. From healthcare, education, content writing, art, to cyber-security, defence, trade & commerce etc.
AI’s usage is unrestricted, which at the same time makes it belligerent for humans. AI’s limitless discharge in various fields is the reason the creators of AI are now worried about the systematic downfall of their civilization that for all practical purposes has been a hazard for other surviving natural elements as well.
Irony is that the challengers themselves are being challenged. The question arises whether the new age would forsee the conflict between the creators & the created. Can the human civilization survive yet another danger posed to them by themselves? If the new age of civilization were to be in the hands of AI then how would the prospects of rights, liberties, ethics, law, authority & power shape up in the hands of AI, that are clearly the virtues created by human civilization. Can AI enjoy rights & liberties just like humans? Are there any ethical constraints and values for AI? Is there a space for co-existence?
About rights & liberties

The question of whether AI should have rights like humans hinges on its lack of consciousness and sentience. Human rights are grounded in these qualities, ensuring dignity and protection for sentient beings. AI, designed for specific tasks based on algorithms and data, lacks intrinsic consciousness and subjective experience. Granting AI rights akin to humans could obscure moral priorities and pose legal and practical challenges in accountability and societal integration. Instead, ethical guidelines and regulatory frameworks should be developed to govern AI development and use responsibly.
These frameworks can address issues of fairness, transparency, and societal impact without conflating AI capabilities with human rights. As AI technology evolves, these considerations will be crucial in balancing innovation with ethical considerations and preserving human values and liberties.
AI & Ethics
AI can operate according to programmed ethical guidelines focusing on fairness, transparency, and accountability. These guidelines shape AI’s decision-making algorithms, addressing issues like safety and non-discrimination. However, AI lacks human ethical intuition, empathy, and contextual understanding. Ethical AI development requires ongoing research, regulation, and ethical discussions to align AI advancements with societal values while ensuring human oversight and responsibility. While AI can simulate ethical behavior, it fundamentally differs from human ethics, which are shaped by emotions, cultural norms, and complex social interactions.
Ethical considerations for AI integration encompass a range of principles aimed at ensuring that AI technologies are developed and used responsibly. Key areas include:
- AI systems should be transparent in their decision-making processes. Users should understand how AI makes decisions, and there should be mechanisms to hold developers and deployers accountable for the impacts of AI systems​.
- AI should be designed and deployed in ways that prevent bias and discrimination. This involves careful data selection, algorithm design, and continuous monitoring to ensure fairness across different groups and communities​​.
- AI systems often rely on large amounts of data, which raises concerns about privacy. It’s essential to protect personal data through robust encryption, secure storage, and by obtaining informed consent from users​​.
- Ensuring the safety and security of AI systems is critical to prevent misuse and unintended harm. This includes protecting AI systems from cyberattacks and ensuring they operate reliably under a variety of conditions​​.
- AI should enhance human capabilities and well-being, rather than replace humans or reduce human agency. This principle emphasizes designing AI systems that support and empower users​​.
- The development and deployment of AI should consider its environmental impact, including energy consumption and the lifecycle of AI hardware. Efforts should be made to minimize negative environmental effects​.
- Governments and organizations should establish clear ethical guidelines and regulatory frameworks for AI development and use. This helps ensure compliance with ethical standards and promotes public trust in AI technologies
By adhering to these principles, AI integration can be managed in a way that maximizes benefits while minimizing risks, ensuring a positive and equitable impact on society.
AI & Rights case study: South Korea Gumi City Council
In a recent incident in South Korea, a robot civil servant, working at the Gumi City Council, was found at the bottom of a stairwell, leading to speculations of the country’s first “robot suicide.” The robot, known as the “Robot Supervisor,” was responsible for various tasks such as delivering documents and providing information to residents.

Witnesses reported unusual behavior before its fall, raising concerns about the robot’s workload and the broader implications of integrating robots into daily tasks. This incident has sparked a nationwide debate & led the Gumi City Council to pause its plans for further robot adoption​.
This incident has been a pandora’s box for AI experts & theorists to discuss the various constraints an artificial life has. The concept of work ethics, workspace systems & labour rights for AI & its future will be the topic of discussion for many years to come. Who will draw the line of AI labour laws, rights & ethics & how will be in a limbo for now. But a general concern regarding the AI work culutre must be promoted & boundaries should be drawn where a systematic approach is implemented for AI work ethics similar to human labour. The question also arises whether are we as humans even ready to adopt & integrate.
Future of AI & humans: Possibility of a peaceful co-existence
AI & humans can coexist peacefully by adhering to ethical guidelines & ensuring responsible use. Key factors include transparency, collaboration, & public education about AI’s capabilities & limitations. Regulations & oversight are essential for safe & beneficial AI development. Addressing biases & supporting job transitions due to automation are crucial to avoid exacerbating inequalities. Human-centric design should prioritize well-being & serve human needs. By following these principles, AI can augment human abilities & contribute positively to society, fostering a harmonious relationship between humans & technology.